Rewrite Bun in Rust has been merged

(github.com)

503 points | by Chaoses 18 hours ago

112 comments

  • sesm 6 hours ago
    When announcements say that rewrite took 1 week, I wonder how much time went into preparing this file with very detailed instructions on mapping Zig to Rust idioms: https://github.com/oven-sh/bun/commit/46d3bc29f270fa881dd573...

    On top of that, if you look at 'Pointers & ownership' and 'Collections' sections, the Bun codebase is already prepared, using internal smart pointer types that map 1-to-1 to Rust equivalents, and `bun_collections` Rust crate already exists.

    This gives the impression that the rewrite was prepared a long time ago and was the Bun team's proposition to Anthropic during the acquisition deal.

    • jaccola 6 hours ago
      Yeah I don’t know what’s true when reading about LLMs. Same with comments here on hacker news. So much money on the line it’s clear they would seed communities with marketing shills (and some people are just tribal).

      Same since they own Bun, they have every incentive to make this seem easier than it was.

      • torben-friis 4 hours ago
        This is a huge problem with AI specifically. Tech is becoming very adversarial for workers, since the lines between marketing and technical information are blurring more and more.
        • smrtinsert 2 hours ago
          Influencers are getting paid tens of thousands of USD to promote AI. This is one of the reasons social media has been swamped with it lately.
        • ignoramous 3 hours ago
          > Tech is becoming very adversarial as a worker, since marketing and technical information are blurring lines

          Since one of LLMs' largest markets (with product fit) is us developers, we are experiencing what the crypto bros did to others.

      • fny 3 hours ago
        I'm not sure it matters what anyone claims. It's easy to use and experience its abilities and limitations.
    • sunrunner 6 hours ago
      Ignoring things like whether the Rust that was output could be deemed qualitatively good, whether the resulting line count is appropriate, how much the codebase was ready or primed for this kind of exercise going in, and so on, is it fair to say that a 622 line artefact created up front is a relatively small cost for a potential increase in consistency or quality of output when the output is ~1M LoC? It seems like there's a multiplicative power here given how much output there is. Or is that missing a lot of nuance?

      I'd also be interested generally in how much tacit knowledge was needed to come up with these rules and how much iteration on this file was needed, for example how many of the rules here came from a failure case hit as part of iterating on the translation.

      • fdsajfkldsfklds 4 hours ago
        This is effectively a very expensive and resource-intensive machine translation. As such, there is no increase in consistency or quality of output.
        • antonvs 7 minutes ago
          How would you have achieved this “machine translation” without an LLM?

          It seems to me it would have been highly likely to be more expensive and more resource intensive - if realistically possible at all, short of implementing a general Zig to Rust translator first.

        • Aurornis 4 hours ago
          The translation is a starting point to enable follow-on work to take advantage of Rust's features.
      • cyanydeez 5 hours ago
        I would guess it was a for-each loop. They likely wrote a bunch of skills. The for loop went through each file and generated a complementary file, then had another process integrate/validate.

        I doubt the entire process was a single week, just whatever harness they specially prepared for the work.

    • Aurornis 4 hours ago
      > using internal smart pointer types that map 1-to-1 to Rust equivalents

      Smart pointers weren't invented by Rust. If you write code in other languages with pointers you mentally model the same types already.

      > and `bun_collections` Rust crate already exists.

      This is wrong. It's part of the PR in the codebase. It did not previously exist.

    • ares623 4 hours ago
      It's the same thing with their gcc stunt.

      It would be _so_ easy to alleviate any doubt from this and hype up the IPO even more. They just need to start a separate repo with all the hidden work they needed to do to prod the AI along, and let everyone replicate the results. After all, isn't that what all their customers are trying to achieve? A million lines of usable code in "7" days? Never mind the fact that it will also boost Anthropic's usage metrics as everyone tries to replicate it into their workflows.

      If it was beautiful, they would've started with a blog post about this with links and instructions. Perhaps I will still be proven wrong and a blog post is being written as I type this.

    • tln 5 hours ago
      Seems like Zig Bun had 3 pointer types that map neatly to existing Rust pointer types. The other 7-8 needed types to be created.

      Is that the conspiracy?

      bun_collections doesn't look much older than the porting guide.

  • vitaminCPP 8 hours ago
    > +1009257 -4024

    Bun is now over 1M lines of Rust code.

    This is approaching the size of the Rust compiler itself; except that BunJS is mostly a JavaScript interpreter wrapper + a reimplementation of the NodeJS library (Rust STD wrapper).

    I think BunJS is becoming the canary for software complexity management in the LLM era.

    • Jarred 5 hours ago
      > mostly a JavaScript interpreter wrapper

      Not accurate. Bun is a batteries-included JavaScript & CSS transpiler (parser), minifier, bundler, npm-like package manager, Jest-like test runner, as well as runtime APIs like a builtin Postgres, MySQL and Redis client. This is naturally a ton of code.

      • tln 5 hours ago
        Now that Bun can leverage Rust do you think some of this code will get disaggregated? Eg, Bun could use swc crates
        • ameliaquining 2 hours ago
          It wouldn't have been that hard to do that from Zig if they'd wanted to. They don't, because they want to do everything themselves so that it works exactly the way they want (except the core JS engine for which this is infeasible—though even that has custom patches). After all, there are already plenty of libraries on npm for those other parts of the stack and they do work in Bun.
    • mort96 6 hours ago
      Bun is not a JavaScript interpreter, it's "only" a reimplementation of the NodeJS library + various other libraries. Bun uses JavaScriptCore as its JS engine. So Bun itself does (or at least should do) no JavaScript parsing, interpreting or JITing.

      EDIT: I misread, sorry! You said "JavaScript interpreter wrapper", which is correct.

      • runjake 5 hours ago
        No, it does parsing and a bunch more. The Bun founder says it best in this comment:

        "Bun is a batteries-included JavaScript & CSS transpiler (parser), minifier, bundler, npm-like package manager, Jest-like test runner, as well as runtime APIs like a builtin Postgres, MySQL and Redis client. This is naturally a ton of code."

        https://news.ycombinator.com/item?id=48140921

      • easterncalculus 1 hour ago
        Bun is now almost twice the size of JavaScriptCore, too, by line count after this.

        This is the 'world class' engineering that Jarred claims he can't hire Americans to do, by the way https://x.com/jarredsumner/status/1969751721737077247. This company is parasitic to its literal (javascript) core.

      • iainmerrick 5 hours ago
        That’s what they said - “JavaScript interpreter wrapper”.
        • mort96 5 hours ago
          You're right, sorry! I completely missed the word "wrapper" somehow.
    • sunrunner 7 hours ago
      I'm not sure if it's just the leading '+' or if there are other factors for phone number detection on iOS, but on mobile the line count changes are underlined and I can tap it to start a call, which, if it is because of the diff size, is something I find pretty amusing.
      • GeekyBear 6 hours ago
        Apple has had a feature called Apple Data Detectors since the 90's that looks for different patterns in text and allows you to perform actions on them.

        So if the text includes a phone number, email address, flight number, package tracking number, street address or other pattern in the data it is underlined and allows you to perform one or more actions.

        The patterns it looks for and actions it takes are extensible by developers.

        If you don't care for it, you can turn it off.

      • solid_fuel 6 hours ago
        > +1009257 -4024

            +1 (009) 257-4024
        
        
        I think it just lines up with the typical size of a phone number and the '-' is interpreted as a separator. Just a simple regex probably.
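        A minimal sketch of that kind of heuristic (purely a guess at the behavior, not Apple's actual detector):

```rust
// Illustrative guess at the kind of heuristic a data detector might
// use (not Apple's actual implementation): treat any run of 7+ digits,
// allowing plausible phone separators, as a candidate phone number.
fn looks_like_phone_number(s: &str) -> bool {
    let digits = s.chars().filter(|c| c.is_ascii_digit()).count();
    // "+1009257 -4024" has 11 digits total, separated only by
    // characters a phone number could plausibly contain.
    digits >= 7
        && s.chars().all(|c| {
            c.is_ascii_digit() || matches!(c, '+' | '-' | ' ' | '(' | ')')
        })
}

fn main() {
    assert!(looks_like_phone_number("+1009257 -4024")); // the diff stat
    assert!(looks_like_phone_number("1234567"));
    assert!(!looks_like_phone_number("123456")); // too few digits
    assert!(!looks_like_phone_number("abc1234567")); // letters present
}
```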
      • nesarkvechnep 7 hours ago
        Maybe it's the phone number of the vibe coding police?
      • layer8 6 hours ago
        The leading “+” is not needed. Numbers with seven digits are automatically hyperlinked (possibly depends on locale).

        123456

        1234567

        12345678

        • sunrunner 6 hours ago
          Interestingly, the entire line gets formatted once it reaches seven digits, +lines and -lines both, so I guess the -lines part is just interpreted as a dash. But your eight-digit string doesn't get linked. Perhaps it's not interesting, though I've never really given it a second thought before.
          • layer8 5 hours ago
            There’s certainly some regex or similar involved that tries to recognize phone numbers, and then hyperlinks the whole thing. My point was that it’s not solely the plus sign that is triggering it.
    • Aurornis 7 hours ago
      The Bun codebase had a similar number of lines of code before the rewrite.

      There's nothing unusual about a rewrite coming in with a similar LOC number.

      • johnnypangs 6 hours ago
        I think the unusual thing is that it was written in a week. I highly doubt that they read and understood all 1M lines. But if it works and people use it, what does that mean for software? Should we still care about the code that’s written? Should we even look? I’ve always thought so, but maybe I’m just biased.
        • keithnz 1 hour ago
          I think we should care way more about what the validation story for the code is. The obvious question is: does it all work? I'm happy to not look at any code if we have good ways to validate what is there. The other thing I care about is the architectural structure of the code. Given it's a port, I don't think that would have changed.
      • onlyrealcuzzo 6 hours ago
        I was going to comment this same thing.

        I don't know enough about what Bun does... But Rust is so insanely complicated, it's hard for me to wrap my head around how Bun is equally complicated.

        • allthetime 1 hour ago
          They are complicated in different ways. The rust compiler doesn’t include redis, Postgres, and S3 clients for instance.
        • gpm 3 hours ago
          Complicated things can often be expressed very succinctly - the hard part is in understanding why the short program does what it is supposed to.

          Simple things often take a lot of space, simply because there's a lot of similar but different simple things that each need to be written down.

          Lines of code just isn't a good measure of "complicated".

      • asdfasgasdgasdg 7 hours ago
        If anything, it's a little surprising that the Rust code isn't significantly larger because I tend to think of Rust as requiring somewhat more boilerplate than JS.
        • SkiFire13 7 hours ago
          The code was using Zig before, not JS.
          • asdfasgasdgasdg 6 hours ago
            Ah fair point. I don't have a sense of which of those are more verbose.
            • turkeyboi 1 hour ago
              Zig is, typically. And yet here, the rust rewrite is around 60% more lines of code.
        • vinnymac 6 hours ago
          Not to mention how trigger happy LLMs can be when it comes to being overly verbose and adding unnecessary bits even with explicit direction not to do so.
    • ivanjermakov 5 hours ago
      1MLOC for a JavaScriptCore wrapper is a great example of what agents are capable of.
      • nicce 4 hours ago
        Code is cheap. Only the quality and maintenance are interesting. Those will be seen later on.
    • giancarlostoro 5 hours ago
      I would not be surprised if the next major step for them is to audit the code and trim the fat.
    • embedding-shape 7 hours ago
      > I think BunJS is becoming the canary for software complexity management in the LLM era.

      Yeah, Cursor did the same thing, bragging about how many lines of code they managed to produce for a semi-working browser, completely missing the idea where less code is better, not the other way around.

      • anuramat 7 hours ago
        I think their point was that the project is complex, with the implicit assumption that the complexity is to a large degree inherent.

        Even if it's mostly accidental, and the code is overengineered slop (which it is), the system being able to decompose a problem and deliver something is impressive in terms of stability: it wasn't sucked into rewriting everything from scratch every time it would run into issues, it didn't have infinite subagent recursion with a one-agent-per-line type workflow, etc.

    • claudiug 4 hours ago
      you can easily fix this by adding MAKE NO MISTAKES, DO NOT HALLUCINATE to your zig2rust.md skill agent flow /s
  • gm678 8 hours ago

        $ rg 'unsafe [{]' src/ | wc -l
        10428
        $ rg 'unsafe [{]' src/ -l | wc -l
        736
        
        Language        Files     Lines      Code  Comments    Blanks
        ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
        Rust             1443    929213    732281    116293     80639
        Zig              1298    711112    574563     59118     77431
        TypeScript       2604    654684    510464     82254     61966
        JavaScript       4370    364928    293211     36108     35609
        C                 111    305123    205875     79077     20171
        C++               586    262475    217111     19004     26360
        C Header          779    100979     57715     29459     13805
    • petcat 7 hours ago
      Cool, you can just search specifically for potentially unsafe code in Rust. How do you search for unsafe code in Zig? Or do you just have to assume it's everywhere?
      • sobellian 6 hours ago
        If half of your code is unsafe then unless you exercise tremendous discipline (Claude basically doesn't) you will just end up with a big ball of unsafe, peppered with hallucinations in whatever random documentary comments Claude decided to make. I doubt they enforced the confinement of unsafe to a specific architectural layer or anything like that.
        • brandly 1 hour ago
          Aren't the Rust unsafes a reflection of the Zig it was ported from? However now that you're working with Rust, you're in a position to continue improving and eliminating the unsafes.
      • dnautics 6 hours ago
        In principle static analysis is possible. (Note: WIP)

        https://github.com/ityonemo/clr

      • VWWHFSfQ 7 hours ago
        [dead]
      • Barrin92 6 hours ago
        if half of your files in a million line codebase are unsafe that doesn't tell you much any more. Presumably the point of a Rust rewrite is that you actually make use of Rust's safety features in a coherent way.

        But given the whole "let AI rewrite this for me" stunt nature of this project, that was not going to happen, because it would require, well, actual thinking and a re-design. So now you have Zig disguised as Rust, a line-by-line port, because the semantics of idiomatic Rust don't map onto the semantics of Zig.

        • famouswaffles 6 hours ago
          >if half of your files in a million line codebase are unsafe that doesn't tell you much any more.

          If half of your files in the first pass of a million line rewrite are unsafe then that's completely fine. Do you understand what the tag actually is? It doesn't even mean that the code is actually unsafe, just that the compiler can't guarantee its safety, which can happen for a number of reasons, some benign.

          Who rewrites a 700K-line codebase trying to be idiomatic from the get-go? That's setting yourself up for failure, whether you're a human or a machine.

        • Daishiman 6 hours ago
          And? This is absolutely the correct and standardized way to do mechanical rewrites: you do a rewrite that maps directly to the original source so you can rely on the original correctness guarantees and bug-for-bug compatibility and log issues, and then you go into the next phase where you begin to use idiomatic constructs.

          This is the same in COBOL-to-Java ports that have been done in banking and insurance for the past 20 years.

          • Barrin92 4 hours ago
            >This is the same in COBOL-to-Java ports

            It isn't, because those guys didn't think a naive 1-to-1 machine translation would give them the benefits of Java, which somehow the people involved in this Rust rewrite seem to think they've already gained despite the virtually identical code.

            If the whole point genuinely would have been to do a purely mechanical translation they could and should have written a transpiler, which would have had significantly higher correctness guarantees than this given that it'd be deterministic, but of course that would have defeated the PR purpose of this whole thing, which just looks like a marketing for Anthropic frankly

            • Daishiman 3 hours ago
              > If the whole point genuinely would have been to do a purely mechanical translation they could and should have written a transpiler, which would have had significantly higher correctness guarantees than this given that it'd be deterministic, but of course that would have defeated the PR purpose of this whole thing, which just looks like a marketing for Anthropic frankly

              If it were just a marketing stunt you wouldn't have all but a fraction of a percent of the test suite passing, with the remaining bugs being realistically very fixable, and everything written in a language whose type system gives far more guarantees than anything possible in COBOL.

              You're being extremely negative about this whole endeavour without looking at the evidence that this effort is going far more smoothly than expected, and maps with many people's experience with using LLMs for tasks like these.

              • Barrin92 1 hour ago
                >You're being extremely negative about this whole endeavour without looking at the evidence that this effort is going far more smoothly than expected

                no I'm being negative because as I just said, if you want to do a purely syntactic translation you don't even need an LLM, that's called transpilation and we've been doing it programmatically for decades.

                This is the kind of thing that looks great to people who can't program, think this is some new superpower unlocked by the mystery magic of LLMs and that is exactly the kind of impression Claude wants to sell.

    • ordu 7 hours ago
      Half of the files contain the 'unsafe' keyword? That doesn't seem like a good rewrite. What is the point of a rewrite into Rust, if ~half of your code is still unsafe?
      • fbernier 7 hours ago
        Bun is fundamentally a boundary-heavy system, and it also rolls its own version of a lot of things that people typically use via libraries, where the unsafe is hidden (no async, memory arenas, etc.). It also uses FFI heavily, which requires unsafe.

        It also looks like the top 2 maintainers are currently actively working on getting the amount of unsafe down and it's going down quickly.

      • _aavaa_ 7 hours ago
        1. Rewrite from Zig to Rust, staying as close to the Zig as you can.

        2. Turn it into idiomatic Rust.
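        A toy illustration of the two phases (hypothetical code, not from the Bun PR):

```rust
// Hypothetical illustration of the two phases (not actual Bun code).

// Phase 1: a direct transliteration of Zig-style pointer+length code,
// which needs `unsafe` because the compiler can't check the bounds.
unsafe fn sum_phase1(ptr: *const u32, len: usize) -> u32 {
    let mut total = 0;
    for i in 0..len {
        // SAFETY: caller guarantees ptr points to at least `len` u32s.
        total += unsafe { *ptr.add(i) };
    }
    total
}

// Phase 2: the idiomatic rewrite. Borrowing a slice lets the compiler
// prove the bounds, so the `unsafe` disappears entirely.
fn sum_phase2(values: &[u32]) -> u32 {
    values.iter().sum()
}

fn main() {
    let values = [1u32, 2, 3, 4];
    let a = unsafe { sum_phase1(values.as_ptr(), values.len()) };
    let b = sum_phase2(&values);
    assert_eq!(a, b);
    assert_eq!(b, 10);
}
```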

        • shimman 6 hours ago
          1. Get hired into a company where you have a solid bet on making multi-century lasting generational wealth (>$50,000,000).

          2. Every waking moment do everything in your power to boost the company that might give you the ability to define the direction of technology for the rest of your life.

          3. Use the only thing you have (bun) to help push you in this direction and do things to help boost LLM marketing (a technology that already deeply struggles to find customers and has to rely on welfare (lucrative government contracts) to make sales).

          ---

          Honestly think this generation of tech workers in SF are more evil than those that worked at Google + Facebook in the early 10s.

          • oskarkk 2 hours ago
            > a technology that already deeply struggles to find customers

            As far as I know it's the opposite, Anthropic struggles to satisfy demand, they have tons of paying customers and their customer base is growing fast.

          • _aavaa_ 3 hours ago
            What does that have to do with rewriting from zig to rust??? This thread is what's pushing LLM marketing, not the rewrite itself.

            If the rewrite is just a stunt and it will crash and burn it will do that whether we spend our free (or work) time writing comments. If there is any hype around this particular topic, it's happening here not in the GitHub repo.

          • ukblewis 5 hours ago
            I’m honestly confused. What is it that you think makes these workers “more evil” than Google and Facebook workers from the early 2010s?
            • wiseowise 4 hours ago
              Google and Facebook workers just made a lot of cash and mostly made everyone's life harder with Leetcode and bad interview processes; they didn't threaten and actively work to put millions of SEs on the street.
              • tredre3 4 hours ago
                > they didn't threaten and actively work to put millions of SE on the street

                Programmers in the 90s weren't less evil or had a stronger moral compass. They simply didn't have the opportunity to reduce the need for their fellow developers on a massive scale. They (we) would have, had we had the chance.

                They (we) did it to tons of other industries. And we collectively patted ourselves on the back, saying that automation is a good thing and we're the good guys for doing it and people who lost their jobs will adapt and maybe they should just learn to code.

                Now it's happening to (some of) us and suddenly it's evil?

                No. The point is: programmers are whores. We like to act all righteous on forums, but very very few of us care enough about the consequences of our code to do something about it.

                We either don't think about it ("what could go wrong?"), don't care about it (eh), justify it ("I need to eat!!!", "I'm just following orders"), or actively embrace it ("It's the future!").

                • pessimizer 4 hours ago
                  > Programmers in the 90s weren't less evil or had a stronger moral compass. They simply didn't have the opportunity to reduce the need for their fellow developers on a massive scale. They (we) would have, had we had the chance.

                  Nah. The fact that such opportunity wasn't available attracted a different sort of person.

                • wiseowise 3 hours ago
                  What is it with tech bros and ridiculous asocial agenda? You have some guilt complex or whatever shit?

                  > No. The point is: programmers are whores. We like to act all righteous on forums, but very very few of us care enough about the consequences of our code to do something about it.

                  What the fuck are you even saying?

            • zzzoom 4 hours ago
              And definitely not more evil than the workers at current Meta.
      • Aurornis 7 hours ago
        unsafe just means that you take responsibility for the safety of the code contained within. Calling into non-Rust libraries has to be wrapped in unsafe. Making syscalls has to be wrapped in unsafe.

        Bun needs to interact with FFI code. This gets wrapped in unsafe blocks.

        There are many places where a JavaScript interpreter and library would need to make unsafe calls and operations.

        It doesn't literally mean the code is unsafe. It means the code contained within is not something that can be checked by the compiler, so the writer takes responsibility for it.

        There are many low-level data munging and other benign operations that a human can demonstrate are safe, but which need to be wrapped in unsafe because they do things outside of what the compiler can check.
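        A trivial sketch of that kind of benign unsafe, with the SAFETY comment idiom (illustrative, not Bun code):

```rust
// A minimal example of "unsafe but demonstrably sound" code: the
// compiler can't see why the index is in bounds, so the author takes
// responsibility and documents the reasoning in a SAFETY comment.
fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        return None;
    }
    // SAFETY: we just checked that `bytes` is non-empty, so index 0
    // is in bounds. `get_unchecked` skips the redundant bounds check.
    Some(unsafe { *bytes.get_unchecked(0) })
}

fn main() {
    assert_eq!(first_byte(b"bun"), Some(b'b'));
    assert_eq!(first_byte(b""), None);
}
```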

        • 12_throw_away 6 hours ago
          There's actually a good example of this in the rewrite [1], in `PathString::slice`. They are doing an unsafe operation to return a slice that could be a use-after-free, if the caller had not already guaranteed that an invariant will remain true. Following proper rust idiomatic practices, claude has added a SAFETY comment to the unsafe block to explain why it's safe: "caller guarantees the borrowed memory outlives this".

          Now, normally, you'd communicate this contract to your API users by marking the type's constructor (PathString::init) as "unsafe", and including the contract in its documentation. Unfortunately in this case, this invariant does not exist - it appears to have been fabricated out of thin air by the LLM [2]. So, not only does this particular codebase have UB problems caused by unsafe code, the SAFETY blocks for the unsafe code are also, well, lies.

          [1] https://github.com/oven-sh/bun/blob/63035b3e37/src/bun_core/...

          [2] https://github.com/oven-sh/bun/blob/63035b3e37/src/bun_core/...
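          A hypothetical sketch of the idiom described above, putting the contract on an unsafe constructor. The BorrowedStr type here is made up, standing in for PathString:

```rust
// Hypothetical sketch of the idiom being described (simplified; the
// real PathString in the Bun codebase is more involved). The lifetime
// contract lives on the *constructor*, which is marked `unsafe`, so
// every caller is forced to acknowledge it.
struct BorrowedStr {
    ptr: *const u8,
    len: usize,
}

impl BorrowedStr {
    /// # Safety
    /// `s` must outlive this `BorrowedStr` and must not be mutated
    /// while it exists. This is the contract the SAFETY comment in
    /// `slice` relies on.
    unsafe fn init(s: &str) -> Self {
        BorrowedStr { ptr: s.as_ptr(), len: s.len() }
    }

    fn slice(&self) -> &str {
        // SAFETY: `init`'s contract guarantees the borrowed memory
        // is still live and valid UTF-8.
        unsafe {
            let bytes = std::slice::from_raw_parts(self.ptr, self.len);
            std::str::from_utf8_unchecked(bytes)
        }
    }
}

fn main() {
    let owned = String::from("src/main.rs");
    // SAFETY: `owned` outlives `p` and is never mutated below.
    let p = unsafe { BorrowedStr::init(&owned) };
    assert_eq!(p.slice(), "src/main.rs");
}
```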

          • Jarred 5 hours ago
            `PathString` worked the exact same way in our Zig code, with less visibility from the compiler & type system. And yes, it will be refactored heavily (or deleted overall) in the next week or so.
          • zozbot234 6 hours ago
            One potential way to solve this in a principled manner is to turn at least some "unsafe" annotations into ghost capability tokens that are explicitly threaded through the code and consistently checked by the compiler. Manufacturing the capability could itself be left as an unsafe operation, or require a runtime check of some kind.

            You already see this in some cases, for example the NonZero<T> generic type can be viewed as a T endowed with a capability or token that just says "this particular value of type T is nonzero, so the zero value is available for niche purposes". But this could be expanded a lot, especially with some AI assistance.
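            The NonZero<T> case can be made concrete with a minimal sketch:

```rust
use std::num::NonZeroU32;

// NonZeroU32 as a tiny "capability": once a caller has proven the
// value is nonzero (via the checked constructor), that proof travels
// with the type, and functions taking NonZeroU32 never re-check it.
fn divide(total: u32, count: NonZeroU32) -> u32 {
    // No division-by-zero check needed: the type rules it out.
    total / count.get()
}

fn main() {
    // The one place the invariant is checked:
    let count = NonZeroU32::new(4).expect("count must be nonzero");
    assert_eq!(divide(100, count), 25);
    // NonZeroU32::new(0) returns None, so the "capability" can't be
    // manufactured for zero without going through `new_unchecked`
    // (which is, fittingly, an unsafe fn).
    assert!(NonZeroU32::new(0).is_none());
}
```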

            • mswphd 5 hours ago
              This already happens all the time in Rust, including in the standard library. The typical pattern is to define your CheckedType as

                  pub struct CheckedType(UncheckedType);

              i.e. its inner field is private. Then you only present safe constructors that check your invariant, and only provide methods that maintain the invariant.

              For a concrete example, String in Rust is a Vec<u8> with the guarantee that the underlying bytes are valid UTF-8. Concretely, it is defined as

                  #[derive(PartialEq, PartialOrd, Eq, Ord)]
                  #[stable(feature = "rust1", since = "1.0.0")]
                  #[lang = "String"]
                  pub struct String {
                      vec: Vec<u8>,
                  }

              You can construct a String from a vec of bytes via

                  fn from_utf8(vec: Vec<u8>) -> Result<String, _>;

              as well as the unsafe method

                  unsafe fn from_utf8_unchecked(vec: Vec<u8>) -> String;

              Note that there isn't a separate capability/token here, though. Separate tokens are typically viewed as bad practice in Rust, since you can always neglect to check one. See for example Rust's Mutex<T>, which carries the data (T) you want access to: to get at the data, you must call .lock(). There is a similar philosophy behind Rust's Result type: to get the data underlying it, you must handle the possibility of an error somehow (which can include panicking upon detecting the error, of course).
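              A minimal runnable version of the newtype pattern (the Ascii type here is illustrative, not from std or Bun):

```rust
// A minimal runnable version of the newtype pattern: the invariant
// ("contents are ASCII") is checked once in the safe constructor, and
// field privacy prevents anyone outside the module from bypassing it.
mod checked {
    pub struct Ascii(String); // private field: can't be built directly

    impl Ascii {
        /// Safe constructor: checks the invariant once, up front.
        pub fn new(s: String) -> Result<Ascii, String> {
            if s.is_ascii() { Ok(Ascii(s)) } else { Err(s) }
        }

        /// Every method may now rely on the invariant.
        pub fn as_str(&self) -> &str {
            &self.0
        }
    }
}

fn main() {
    let ok = checked::Ascii::new("hello".to_string());
    assert!(ok.is_ok());
    let bad = checked::Ascii::new("héllo".to_string());
    assert!(bad.is_err());
}
```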

            • 12_throw_away 6 hours ago
              Yes, or you could review the code.
              • baq 5 hours ago
                It’d only take an hour if you reviewed a million lines per hour
                • cyanydeez 5 hours ago
                  [Sorry guys, I couldn't review this code because I generated it all]
              • frde_me 5 hours ago
                Even before AI, deterministic checks by compilers are almost always better than "review the code"

                "review the code" as a solution will eventually fail and cause a problem, even pre-AI.

                • 12_throw_away 4 hours ago
                  The entire point of unsafe blocks and SAFETY comments is that they are easy for humans to find and audit, but not compiler checkable. If it can be compiler-checked by some clever token system, then ... it's just plain safe rust, and you don't need to document any special safety invariants in the first place
              • mswphd 5 hours ago
                even when you can review the code, it's good to have the compiler check for you. This is for similar reasons why it's better to have CI check correctness on each code change, vs testing the code thoroughly one time, and then being careful going forward.
        • boothby 6 hours ago
          > unsafe just means that you take responsibility for the safety of the code contained within.

          In this case it means you delegated the responsibility to a notably flaky heuristic.

        • SkiFire13 6 hours ago
          > a JavaScript interpreter

          Bun is not a Javascript interpreter. But I do see the point.

      • swatcoder 6 hours ago
        > What is the point of rewrite

        To win a news cycle.

        For the forseeable future, the AI market competition is not about which product can provide the most valuable utility to users. It's about which product can be holding the protective aura of social media and investment zeitgeist while competitors buckle under the strain from unfulfilled hype and over-leveraging.

        Utility, engineering, efficiency... these are all menial details for the winners to reluctantly iron out in 2035.

      • embedding-shape 7 hours ago
        Someone correct me if I'm wrong, but it's unlikely they wrote this initial Rust version and will leave it unchanged as-is. What's there now is a step in a long process, not the final destination.
      • bakugo 6 hours ago
        The point is to serve as marketing for Claude. Absolutely nothing else.
      • tayo42 7 hours ago
        Rust has a ton of other features besides safe. Like exhaustive checking of enum variants and the ability to avoid using null with option and result.
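        A small sketch of both features (types here are illustrative):

```rust
// Sketch of the two features mentioned: exhaustive matching and
// Option instead of null (JobState is an illustrative type).
enum JobState {
    Queued,
    Running,
    Done,
}

fn describe(state: &JobState) -> &'static str {
    // If a new variant is added to JobState, this match stops
    // compiling until it's handled: exhaustiveness checking.
    match state {
        JobState::Queued => "waiting",
        JobState::Running => "in progress",
        JobState::Done => "finished",
    }
}

fn find_even(values: &[i32]) -> Option<i32> {
    // No null: absence is encoded in the type, and callers must
    // handle the None case before they can touch the value.
    values.iter().copied().find(|v| v % 2 == 0)
}

fn main() {
    assert_eq!(describe(&JobState::Running), "in progress");
    assert_eq!(find_even(&[1, 3, 4]), Some(4));
    assert_eq!(find_even(&[1, 3, 5]), None);
}
```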
        • rudedogg 7 hours ago
          Zig has these modern language features too fwiw.

          I think the goal was to do a massive rewrite for Anthropic (they acquired bun) and show that rewriting projects from lang -> lang with Claude can reduce security vulnerabilities to help with the hype for an IPO.

          I don’t use/know Rust so I can’t comment on the quality, but there was a public security review that found issues with the new Rust code: https://x.com/SwivalAgent/status/2054468328119279923

          This is an interesting experiment but I’m skeptical of any claims of success by Jarred/Anthropic due to the incentive to hype agents. There’s probably a trillion dollars at stake with the IPO. And Anthropic seems to be developing this part of their business with Mythos and the super review features.

          But I’d like to see the same experiment done on a project without so much relying on the story being success.

          • nsagent 6 hours ago
            There's a reasonable request to run the same analysis for the Zig version of the code as a comparison.

            In lieu of that, it seems the Swivel devs ran an analysis on Tigerbeetle, one of the other major Zig projects, and found only 7 medium/low priority issues:

            https://xcancel.com/SwivalAgent/status/2054063291266113994

            • matklad 1 hour ago
              To clarify, those are things an LLM considers to be issues, and LLMs can make mistakes.

              Some of those are clear false positives, others I need to revisit tomorrow to say one way or another.

    • Robdel12 8 hours ago
      Sure hope Mythos is as world beating as they claim, they’re gonna need it now.
    • vitaminCPP 8 hours ago
      We got memory safety at home!

      At home:

      > 10428

  • Jarred 15 hours ago
    Still writing the blog post about this. Will share more details.

    For where this is coming from, skim the bugfixes in the Bun v1.3.14 and earlier release notes. Rust won’t catch all of these - leaks from holding references too long and anything that re-enters across the JS boundary are still on us. But a large % of that list is use-after-free, double-free, and forgot-to-free-on-error-path, which become compile errors or automatic cleanup.
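    A minimal, self-contained sketch of the "forgot-to-free-on-error-path" class mentioned above (all names invented): in Rust, a Drop impl runs on every exit path, early error returns included.

    ```rust
    use std::cell::RefCell;

    // Records which buffers were "freed", so the assertions below can observe cleanup.
    thread_local! {
        static FREED: RefCell<Vec<&'static str>> = RefCell::new(Vec::new());
    }

    // Stand-in for a resource that must be released on every path.
    struct Buffer(&'static str);

    impl Drop for Buffer {
        fn drop(&mut self) {
            FREED.with(|f| f.borrow_mut().push(self.0));
        }
    }

    fn parse(input: &str) -> Result<usize, &'static str> {
        let _buf = Buffer("scratch"); // released automatically on both paths below
        if input.is_empty() {
            return Err("empty input"); // early return: cleanup still runs
        }
        Ok(input.len())
    }

    fn main() {
        assert_eq!(parse(""), Err("empty input"));
        assert_eq!(parse("abc"), Ok(3));
        // Drop ran once per call, error path included.
        FREED.with(|f| assert_eq!(f.borrow().len(), 2));
        println!("cleanup ran on every path");
    }
    ```

    This is the "automatic cleanup" category; use-after-free and double-free are instead rejected at compile time by the borrow checker.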

    • tasuki 13 hours ago
      You, nine days ago[0]:

      > I work on Bun and this is my branch

      > This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.

      Maybe... it wasn't such an overreaction?

      [0]: https://news.ycombinator.com/item?id=48019226

      • Tesl 3 hours ago
        I'm really out the loop here so maybe you can help answer me a question - why is HN unhappy about this rewrite? why are people writing here almost as if they feel betrayed by Bun being rewritten from Zig into Rust?

        I genuinely don't get it. I've been following this Bun stuff a bit but I don't understand where the HN sentiment is coming from.

        • noelsusman 50 minutes ago
          Vibe coding a Rust rewrite of a widely used tool is basically catnip for the HN crowd.
        • vickychijwani 31 minutes ago
          The unhappiness is primarily stemming from Bun’s ownership by Anthropic - HN sees this as Anthropic using an OSS project for reckless marketing stunts.

          For the record, I don't believe it's a stunt; the idea is ridiculous to me. Everyone's just seeing what they want to see out of sheer hate for anything Anthropic does.

        • itishappy 1 hour ago
          My read is it's less the rewrite and more the messaging around the rewrite. Nine days between "you're over-reacting" and merge is surprising, to say the least. Sure will be interesting to see that blog post!
        • taejavu 2 hours ago
          My read is that it just seems a bit reckless doing a full rewrite so quickly.
        • sigmar 48 minutes ago
          posting my read (since it differs so much from the others')- there's a 'holy war' being waged by people that think LLMs shouldn't do full rewrites of software. There are various reasons people think this (think LLMs are parrots that make slop and are incapable of writing good code, have environmental concerns, or are angry that software licenses can be circumvented). I call it a 'holy war' because I think most see our current trajectory as a bit inevitable and have a strong urge to proselytize their views and chide maintainers that use LLMs in ways they don't like.

          Very similar angry comments happened with the discussions of the Chardet rewrite, next.js/vinext, and JSONata/gnata if you want to look at this in context.

      • embedding-shape 12 hours ago
        You're not alone in voicing this, another (now dead) comment did it earlier too with a bit more of an emotional response (https://news.ycombinator.com/item?id=48134229).

        Still, do you folks never do something to see how you feel about it, then choose to go one way or another? I'm not sure why it's so hard to see that it was an overreaction at the time, because it was an experiment; then at one point it stopped being an experiment, and now they've chosen to actually run with it.

        Is this not a common occurrence for other people? Personally I change my mind all the time, especially based on new evidence, which usually experiments like this surface, I'm not sure I understand the whole "You said X some days ago" outrage that seems to cause people's reaction here.

        • tasuki 11 hours ago
          Yes, sure, it's OK to change your mind. But don't you think that, in retrospect, the people Jarred accused of "overreacting" weren't actually overreacting?
          • hombre_fatal 6 hours ago
            The top comment at that link points out how many of the sibling comments are delirious and emotional, kneejerk responding to the news rather than giving any sort of sober analysis.

            That people were overreacting with emotional meltdowns (common in AI-related threads) is perfectly compatible with the branch making enough progress to get merged.

            • wiseowise 4 hours ago
              Anyone who disagrees with me is having an emotional meltdown and obviously they're delirious AI-haters.
              • Muromec 2 hours ago
                I'm not in a cult, you are in a cult and delusional!
            • BoorishBears 5 hours ago
              This seems dishonest.

              I'm reading through the top comments next to his and don't see that. You can always find delirious and emotional takes, but those didn't dominate the discussion

              https://news.ycombinator.com/item?id=48017005

              > [...] Time will tell how this will turn out. Would be nice if the Bun maintainers could give some clarification about what they’re doing here, and why they’re doing this.

              https://news.ycombinator.com/item?id=48017358

              Compares this to Go runtime's C to Go migration

              https://news.ycombinator.com/item?id=48017309

              Link to Github diff view

              https://news.ycombinator.com/item?id=48017505

              > I wonder if a successful, albeit slower, approach would be to walk the git commit history in lockstep, applying the behavioral intent behind each commit. If they did this, I would be interested in knowing if they were able to skip certain bug fix commits because the Rust implementation sidestepped the problem.

          • embedding-shape 10 hours ago
            No, what we knew then is still what was known then. Today is different, and seemingly they've committed to the rewrite, so now it makes sense that people have strong feelings about it, as it's no longer just an experiment.
            • grayhatter 5 hours ago
              > so now it makes sense that people have strong feelings about it, as it's no longer just an experiment.

              It also makes sense to have strong feelings when you're able to pattern match well enough to predict something will happen despite others trying to convince you that your predictions are incorrect.

              It's not overreacting to correctly predict the future just because others couldn't. In the same vein, believing "everyone's out to get you" isn't called paranoia when people actually are out to get you. That's better called being observant.

              Some of those who predicted correctly might also have overreacted, but I believe that the majority understood that to be a blanket statement about prediction as a whole vs any specific individual reaction.

            • serial_dev 6 hours ago
              “Nobody could have seen this coming…”?

              Well apparently a lot of people did. Maybe Jarred didn’t, maybe you didn’t, but most people correctly predicted what was coming.

              • embedding-shape 6 hours ago
                See what coming?! I really don't understand what's going on here. Correctly predicted what, that Bun was being rewritten into Rust? I'm not sure anyone doubted that, all the work they did was public???

                What on earth is going on here?

                • wiseowise 4 hours ago
                  > I'm not sure anyone doubted that, all the work they did was public???

                  https://news.ycombinator.com/item?id=48019226

                  > This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.

                • grayhatter 5 hours ago
                  > What on earth is going on here?

                  With the nearly complete PR porting Bun to Rust, a number of people predicted that it was going to happen. They were assured it was unlikely to happen, and then they were accused of overreacting over effectively nothing. When those same people, who were already upset about the rewrite, learned that their predictions, the same ones that were rudely dismissed, were in fact correct, they became upset again; this time about being lied to.

                  Correct or not, it's reasonable to conclude they were lied to. Especially given they correctly predicted the future.

                  • famouswaffles 5 hours ago
                    >Correct or not, it's reasonable to conclude they were lied to.

                    No it's not. If we were 9 days away from a human written version of this experiment then yeah it would be reasonable to conclude they were lied to, because a human written version would progress so much slower and steadier that it's very unlikely you hadn't made up most of your mind a week before merge time.

                    But it's not human written. It's months, perhaps years of work compressed into a week, where the machine can go from 'nothing is working' to 'everything is working' in a few days. There is nothing reasonable about concluding you must have been lied to when such a delta in such a short time is possible. And if people fail to see that, then perhaps the initial assertions about an emotional meltdown were not so far off after all.

                    • wiseowise 4 hours ago
                      It might surprise you, but tech projects have a social component. Decisions like this are discussed with the community. It is completely fine to not give a single shit about the community, but then don't act surprised when the community doesn't give a shit about you.
                      • famouswaffles 4 hours ago
                        Decisions like this are discussed however the maintainers of the project wish to discuss them. And a majority of the time, these decisions are made and discussed solely by the maintainers, so I really have no idea what you're talking about.
                • BoorishBears 5 hours ago
                  It's really simple.

                  9 days ago this is how the migration was described:

                  > I work on Bun and this is my branch

                  > This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.

                  > I’m curious to see what a working version of this looks, what it feels like, how it performs and if/how hard it’d be to get it to pass Bun’s test suite and be maintainable. I’d like to be able to compare a viable Rust version and a Zig version side by side.

                  9 days after that comment, the rewrite has been merged to master.

                  9 days after "this is my branch" "the code doesn't work" "I'm just curious" "high chance it's thrown out"... it's merged to master.

                  -

                  Some people saw the original as an attempt to downplay the importance of the branch in response to negative feedback, rather than accurately describing what the branch represented.

                  Those people essentially predicted that Bun's actions would shortly reflect much more conviction than was being let on.

                  Experiments graduate to production all the time, but given the timelines involved, their predictions were correct.

                  • famouswaffles 4 hours ago
                    Stop thinking about '9 days' like it means the same thing in an era where machines can generate thousands of lines of code in a few hours.

                    There is no way a human rewrite like this would be roughly at the same stage after a 9-day delta. If it were, some of these accusations would be reasonable to make. But that is not the case here.

                    • geodel 2 hours ago
                      That's fine if some Claude Code agent had made the PR and committed it. No human involved, no human drama ensues.

                      People here are pointing out the problem because the Anthropic dude claimed it was an experiment, tests were still failing, it might go nowhere... blah, blah.

                      • famouswaffles 1 hour ago
                        Yes because it was an experiment and tests were indeed failing at that point in time, but guess what ? When an experiment succeeds you probably don't throw away the results.
                    • BoorishBears 3 hours ago
                      You know, we used to look down on engineers who didn't realize there's more to software than the raw lines of code.
                      • famouswaffles 3 hours ago
                        You're free to look down on whoever you want. I'm free to tell you I couldn't care less, and that both replies so far just confirm how much of an emotional meltdown the reactions here really are. Your comment has managed to have nothing to do with the point I was making.
                    • fragmede 3 hours ago
                      Just because the machines can generate code that quickly doesn't mean that human thought has changed to moving faster. Everyone's had a problem they were working on, and the solution doesn't come sitting at the desk staring at the code, but three days later in the shower, eureka! hits. Just because machines are writing code hasn't changed the underlying human thought speed substrate. That's why people see nine days as too fast, even in this sped up AI era.
                      • famouswaffles 3 hours ago
                        Human-speed thought doesn't matter here because it's not human-reviewed. The code was generated. It exists and it (now) works, to the extent they're satisfied with going through with a canary release. Going on about '9 days' is working with a mental model that simply does not apply here. That is my point.

                        If you think there should be human review or that there should have been a lot more human collaboration, that's one thing but accusing Jarred of lying about his intentions is another thing entirely, and one where '9 days' is not remotely the proof people think it is in this situation.

                        • fragmede 2 hours ago
                          I'm not sure where I accused Jarred of lying. All I'm saying is that 9 days is not very long.
                          • famouswaffles 1 hour ago
                            The chain we're on and the comments I originally responded to have such concerns. And I mean, if it's not going to be reviewed by humans, then really, what makes 9 days too soon? Should the code just sit there collecting dust until everyone agrees an arbitrary amount of time has passed?
                    • wiseowise 4 hours ago
                      > Stop thinking about '9 days' like it means the same thing in an era where machines can generate thousands of lines of code in a few hours.

                      You need to lay off the kool-aid.

                      • famouswaffles 4 hours ago
                        Making a factual statement is drinking Kool-Aid? Okay.
            • tasuki 9 hours ago
              Maybe the people who "were overreacting" just happened to have more foresight than you and me? Perhaps they saw where this was heading, and that led to their "overreaction"?
              • embedding-shape 8 hours ago
                In what way? Foresight about what? It was an experiment before, regardless of people's reaction at the time doesn't make it less of an experiment back then. I feel like I'm misunderstanding this entire conversation right now.
                • tasuki 7 hours ago
                  > It was an experiment before, regardless of people's reaction at the time doesn't make it less of an experiment back then. I feel like I'm misunderstanding this entire conversation right now.

                  Yes - I think I didn't explain my feelings well. But, now I understood them finally! So:

                  It was an experiment back then. Now, nine days and a million lines later, it suddenly isn't an experiment anymore? I understand there's a comprehensive test suite (yay!) but still... a million-line diff in nine days still sounds like an experiment to me.

                • rcxdude 7 hours ago
                  The difference is an assumption of good faith, for the most part, and that is to some extent modulated by whether people believe a large-scale LLM and/or Rust rewrite is a reasonable idea.
                • wiseowise 4 hours ago
                  Why are you defending them so much, lol. It's no longer an underdog open source project fighting for survival, it's a freaking Anthropic subsidiary that has been bought for hundreds of millions of dollars.
          • weird-eye-issue 1 hour ago
            Who cares? Go see a therapist
        • buu700 3 hours ago
          This actually happened to me a couple months ago. Started a Rust rewrite of a project as an experiment, then a few weeks later it was presented to the team and promoted to mainline.

          Although in that case the language change was almost incidental — the rewrite was very much not a straight 1:1 port, but more of a substantive architectural overhaul and longstanding tech debt cleanup; Rust was just one of many tools and design decisions that helped get the best possible end result. There were also various reasons it made sense to attempt a rewrite within that particular window of time.

          The upshot is we've ended up with a substantially stronger QA posture, a much higher-quality and more maintainable codebase, and an extremely positive audit report by a group that was brought in to review the project. There were some early kinks to work out, but the longer we've lived in this version of code the more it's proven itself to be a stronger foundation than its predecessor.

          Of course, Bun is its own thing and all circumstances are unique. I have no idea how that rewrite was approached, whether it was the right decision, or how it will ultimately prove itself. Just saying the shift from "experiment" to "official new direction" is normal and credible, and that I'd give it some time to see how it handles contact with reality before passing judgement. If it's truly a disaster, nothing's stopping them from reversing course and backporting any new changes to the old Zig codebase.

        • wiseowise 4 hours ago
          It's a high profile open source project. While Bun/Jarred don't owe anything to anyone, nobody should be surprised when decisions like these result in strong backlash.

          Imagine if Guido or Linus said a couple of days ago that they're just experimenting and then submitted and merged complete machine-assisted rewrite of CPython or Linux in Rust.

      • imenani 13 hours ago
        The author discussed this here four days ago

        https://news.ycombinator.com/item?id=48077663

      • dannersy 10 hours ago
        I was downvoted pretty hard for calling this comment out. I would say I'm surprised, but honestly? Completely predictable.
      • camdenreslink 13 hours ago
        Yea, what the heck.
    • pulsartwin 15 hours ago
      Looking forward to the blog post. Do you plan to run both the Zig and Rust binaries side-by-side across a wide range of real applications (potentially shadowing in production) to weed out bugs?
      • arm32 10 hours ago
        That's way too smart, safe and sensible.
      • aapoalas 14 hours ago
        They have a PR (~~closed by GitHub bot as AI slop, ironically~~ this was wrong info, it was apparently closed by Jarred himself as it missed a conversion or some 20 Zig files to Rust) to remove the Zig code.

        I guess the answer is "no".

    • dlopes7 45 minutes ago
      I bet the blog post will make no mention of pressure from anthropic to do this and instead will celebrate the fact that “it passes all tests”, of course omitting how many tests were modified to forcibly pass
      • awwaiid 24 minutes ago
        Was there pressure to do this, or freedom to do this? If I had an unlimited token budget I'd probably try all sorts of crazy things. Also you (one) can read the tests and see that they weren't modified to forcibly pass.
    • janice1999 15 hours ago
      I'm curious how much this would cost a paying customer. Can you please give us an estimate?
      • nzoschke 6 hours ago
        Great question and I'd love the answer.

        I bet the answer is industry changing even if the token cost is high.

        This work was impossibly expensive in terms of people hours and time before. Architectural planning, engineering alignment and politics, phased engineering that gets interrupted by changing priorities.

        That it's possible to do the R&D, the port, and get 99.X% of tests passing in less than 2 weeks is so much more efficient for the humans.

    • randypewick 14 hours ago
      Did you (or will you) implement some kind of e2e (fuzzy?) testing comparing the two binaries? Do you have particular plans regarding the release of this (for ex to not break users workflows or things like that)?
    • calmoo 14 hours ago
      Will this likely fix stability issues in the Bun Workers API? https://bun.com/docs/runtime/workers
    • halifaxbeard 12 hours ago
      Any plans to issue a CVE for this HTTP request smuggling attack vector fixed in the latest bun release?

      https://github.com/oven-sh/bun/issues/29732

      • classicposter 11 hours ago
        https://github.com/oven-sh/bun/security

        Surprisingly, they appear to have not disclosed any vulnerabilities whatsoever. It's likely there have been numerous vulnerabilities in the past, but they are all being ignored.

        https://x.com/DavidSherret/status/2031432509301428644

        • halifaxbeard 10 hours ago
          This is really poor form given that Anthropic is going around getting all kinds of public goodwill for finding CVEs in other people’s products.
          • shimman 4 hours ago
            Yeah! Why would the company that stands to make itself look better in front of an IPO do such a thing?! Next thing you're going to tell me is that this whole rewrite was another marketing ploy to help potentially turn themselves into multi-millionaires!
      • grayhatter 5 hours ago
        maybe you should ask on the issue directly?
    • andrepd 4 hours ago
      > The codebase is otherwise largely the same. The same architecture, the same data structures.

      How can you possibly verify this, if a 1M line patch was written over 7 days? It's at best a hunch (vibes?), and at worst a lie.

      • cwyers 1 hour ago
        Because it passes the existing test suite? And he knows what's in the test suite?
        • thearrow 54 minutes ago
          The test suite explicitly verifies the architecture and the data structures used? Depends on the suite, I suppose.
    • underdeserver 5 hours ago
      Is writing the blog post taking longer than the rewrite
    • eddiewithzato 15 hours ago
      I can only hope this will lead to few or no memory issues when using Bun as a web server.
      • embedding-shape 15 hours ago
        I'd be surprised if they could eliminate memory issues completely, especially considering the amount of `unsafe` the codebase seems to contain.

            git rev-parse HEAD && ag "unsafe" src | wc -l
            19d8ade2c6c1f0eeae50bd9d7f2a4bf4a2551557
            14865
        • e12e 6 hours ago
          On the other hand - now it should be possible to tackle some of those one by one?
          • embedding-shape 5 hours ago
            Oh yes, I don't doubt they'd eventually be able to seriously reduce that number, probably to a handful of places. I don't doubt the strategy employed here, rewriting it keeping it similar, then slowly change it. I do still doubt they'd be able to completely eliminate memory issues in the end regardless.
        • Narishma 10 hours ago
          Doesn't that count anything that has 'unsafe' in it, not just the keyword?
          • embedding-shape 10 hours ago
            It does, see the sibling comment made about an hour before yours; fixing that issue makes a marginal difference.
        • davidghowell 12 hours ago
          That's picking up all the "bunsafety" references in there :P
          • embedding-shape 12 hours ago
            When I read what you wrote, I was like "of course, duh, I'm stupid" but running `ag "unsafe" src | grep -i "bunsafety"` it doesn't seem to be the case actually, I see zero bunsafety mentions from it.

            However, `ag unsafe` does over-count anyways, just in a different way, matching stuff like SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION and _unsafe_ptr_do_not_use and others.

            Better command with same previous commit, `ag -w unsafe src | wc -l`, reports 13914 "unsafe" usages now, slightly better but pretty awful still.
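For anyone reproducing the counts, the over-counting difference comes down to word-boundary matching. A toy demonstration with grep (ag's smart-case matching is approximated here with `-i`; the sample lines are invented, not from Bun):

```shell
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
unsafe { ptr::read(p) }
let n = copy_unsafe_ptr_do_not_use(p);
SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION
unsafe fn raw() {}
EOF

sub=$(grep -ci 'unsafe' "$tmp")     # substring match, like bare `ag unsafe`
word=$(grep -ciw 'unsafe' "$tmp")   # whole-word match, like `ag -w unsafe`
echo "substring: $sub, word: $word" # substring: 4, word: 2
rm -f "$tmp"
```

Underscores count as word characters for grep's `-w`, which is why identifiers like `copy_unsafe_ptr_do_not_use` drop out of the word-match count while actual `unsafe` keywords remain.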

            • logicprog 10 hours ago
              My understanding is that that's because they were trying to do a structurally homologous port from Zig to Rust, precisely to keep their mental model and not change "too much" at once, and then they plan to refactor to make it safe Rust later.
              • rpearl 5 hours ago
                it's clear that as of the time of this merge, no human has read any appreciable fraction of current mainline bun, so it's not particularly clear how much of a "mental model" exists anymore.
    • dolmen 14 hours ago
      Does that mean that from now your coding agents working on the Bun codebase are themselves running on that rust-Bun runtime?
    • cyanydeez 5 hours ago
      So a question you should answer: Couldn't you just train the super SOTA model on fixing those issues instead of porting it?
    • fatata123 13 hours ago
      [dead]
    • LucidLynx 14 hours ago
      [flagged]
      • embedding-shape 14 hours ago
        Coming on a bit strong no? Isn't it possible one could do an experiment almost two weeks ago, then by today the experiment concluded and now you've made a choice?

        Did you think "experiment" meant 100% this will be thrown away? Wouldn't make much sense to experiment with something you know you'll throw away, unless you have some specific reason for it.

      • Klonoar 14 hours ago
        You don’t speak for most of us.
  • therepanic 8 hours ago
    About 9 days ago, Jarred wrote that it was far from certain this would merge and that the thread was an overreaction. Ironic.
    • wiseowise 4 hours ago
      Model open source leadership. Imagine the meltdown if Linus said the Linux kernel was not going to be rewritten and then one day woke up and merged a full machine-assisted rewrite in Rust.
      • pessimizer 4 hours ago
        As long as it was still GPL and it wasn't just license washing, I'd be elated.
    • noosphr 8 hours ago
      When you don't own your company any more, anything you say can be safely ignored. It was obvious that the token spend would need to be justified.
      • simlevesque 8 hours ago
        They've been shady since day one, claiming wild performance improvement compared to their competitors and never proving any of them.
        • shimman 6 hours ago
          You don't find installing NPM packages 2 seconds faster, something most working devs do once a month, amazing?
    • rcxdude 7 hours ago
      I mean, that doesn't exclude the outcome that it gets merged.
    • simlevesque 8 hours ago
      [flagged]
      • dang 7 hours ago
        Edit: my mistake. Sorry for misreading.

        You've crossed into personal attack with this, and that's not allowed here. Please don't.

        https://news.ycombinator.com/newsguidelines.html

        • gib444 7 hours ago
          Which persons were attacked by their comment? The "them" is confusing me – I interpreted it as Bun the organisation / Anthropic?
          • simlevesque 6 hours ago
            I'm confused too as to how my comment can be interpreted as a personal attack on anyone.

            I was indeed talking about Bun as a whole and not any particular person. I'd even include the Bun community in my "them".

            But I'll take dang's word for it and will watch what I say.

            • dang 5 hours ago
              Ah, I thought you referring to a person. I'm sorry for misreading you.

              It's still a bad HN comment, I'm afraid (denunciatory rather than curious, for one thing), but it wasn't a personal attack and not a post that would normally clear the bar for a mod reply.

      • Yeroc 8 hours ago
        I think Jarred's response at the time was intended to cool the ridiculous hype when the branch first appeared!
        • michaelmrose 8 hours ago
          [flagged]
          • hxtk 7 hours ago
            I don't know if the intent was to deceive, but the comments certainly had the effect of deceiving me. I came away from that first thread thinking, "Ah, so the 'story' here is that someone on the project tried an experiment on a branch that they probably should have put in a branch on their personal fork." I was no longer thinking it was a serious possibility that an AI rewrite would get merged.
  • losvedir 8 hours ago
    Wow. This is going to be interesting to follow. There's absolutely no way any of this code was reviewed, but maybe we're in a post-human world now where you can trust the models to write and review the code. This is like Gastown but on a higher profile project. Will be fascinating to see how this project is able to add new features going forward (or even _if_ it will be able to).

    Does anyone know how exactly Bun is used by Anthropic? Is it a part of Claude Code? I'm more than slightly worried about using Bun going forward myself, but I'm not sure to what extent that applies to using Claude as well.

    • rafram 7 hours ago
      > you can trust the models to write and review the code

      You definitely cannot!

      • operatingthetan 7 hours ago
        Reminds me of going on linkedin and seeing all these sales and product people who are talking big game about engineering now. Well yeah they are definitely producing something but not sure I'd call it "engineering."
      • gmueckl 7 hours ago
        You can trust them to flag some things during review that may or may not be relevant. But just like with human review and unit testing, you cannot guarantee the absence of bugs after an LLM code review. It's just another set of (virtual) eyeballs.
        • rafram 4 hours ago
          I trust them somewhat to flag bugs. I don't trust them to produce clean, maintainable code - even code maintainable by the LLM itself. Any sufficiently complex LLM changeset can be assumed to contain duplicated logic, method scope creep, and code changes without accompanying documentation changes that the model often will not catch no matter how many rounds of review you run. If those issues make it into a commit, the next time you ask the LLM to update some of the functionality that it introduced earlier, bugs will creep in.
          • solidasparagus 2 hours ago
I find that documentation upkeep is wildly better in AI-coded environments than human ones. You can deterministically force a documentation sync process on every PR, and documentation rot has gotten way better.
    • bhaak 7 hours ago
      It passed all the tests.

      If you can't trust your test suite to catch an automatic language translation you shouldn't trust it at all. :)

      • user142 6 hours ago
Tests can only prove the presence of bugs, not their absence. If the AI can access the tests, it can easily make them pass just by adding additional if statements. That doesn't mean the code is actually correct.
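A toy illustration (hypothetical, not from the Bun codebase) of how a green test suite proves nothing: an "implementation" that special-cases exactly the inputs the suite happens to check.

```javascript
// Hypothetical sketch: the function is hard-coded to satisfy the one
// input the test suite exercises, while the general logic is broken.
function add(a, b) {
  if (a === 2 && b === 2) return 4; // special case covering the known test
  return a - b;                     // actual logic is wrong
}

// The only test in the suite passes:
console.assert(add(2, 2) === 4);
// Unchecked behavior is silently incorrect:
console.log(add(3, 1)); // 2, not 4
```

The suite is green, yet the function fails for nearly every input it was never tested on.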
      • andrewflnr 5 hours ago
        What if we only trusted the test suite a reasonable amount, instead of pretending trust must either be blindly total or nonexistent?
      • debugnik 6 hours ago
        It also modified many of the tests to make them pass in mischievous ways. You can't trust a test suite to catch regressions if the new version doesn't use the same test suite.
      • data-ottawa 4 hours ago
        A wise teacher once told me a good programmer looks both ways when crossing a one way street.
    • torben-friis 7 hours ago
> Does anyone know how exactly Bun is used by Anthropic? Is it a part of Claude Code?

It seems to be used by Anthropic as a way to shift the Overton window toward it being acceptable to yolo-merge millions of lines.

      • darknoon 7 hours ago
        the `claude` binary is essentially a packed copy of bun + the js code, so this will replace the native runtime part of claude code.
    • SwellJoe 6 hours ago
      How's the test suite?
  • xiphias2 17 hours ago
I'm actually excited to see somebody experimenting with automated translation, but I'm afraid there will be lots of backwards compatibility issues.

I started looking at the commits, and it's basically solving the "tests not passing" problem by changing the tests themselves. The real work of making it work on programs that are already deployed is just starting now.

    The only silver lining I see is that the server side JS community for some reason is already used to breakages all the time.

    • rohitpaulk 14 hours ago
      The whole idea that my RUNTIME contains code that a single human hasn't looked at does make me uncomfortable, but if this actually works without a ton of issues it's pretty remarkable.
      • tempaccount420 10 hours ago
        Don't worry, no one reviewed open source code before AI either. Basically nothing changed about the trust model.
        • mort96 6 hours ago
          The person who wrote the code reviewed it as a part of writing it and going through the PR process.
        • runarberg 9 hours ago
          The speed of the change did. This is the “climate has always been changing” argument climate deniers make. It is a true statement which is still a lie by omission. Climate deniers purposely ignore that the climate has never changed at the current rate, and AI-stans neglect to mention that before AI nobody was merging a 1M+ lines of code in one go.
    • tarruda 16 hours ago
> I started looking at the commits, and it's basically solving the "tests not passing" problem by changing the tests themselves

      Not sure if these decisions were made by the LLM, but I've always felt that Claude is more prone to doing "shady stuff" like modifying tests than finding correct solutions to problems.

      GPT/Codex is more honest in this regard.

      • InsideOutSanta 16 hours ago
        Yeah, Claude is very creative in finding ways of "solving" problems that go against what the user probably intended.

        Having said that, after looking at some of the test changes, they seem to be minor things, like changing timeouts, not changing the actual intended semantics of the tests. But it's too much code to review everything, so I might be completely wrong about that, and in real-world usage, even minor changes like these will cause issues.

    • rzmmm 16 hours ago
I doubt it will end up as a stable release very soon, but I'm happy to be proven wrong. I have some skepticism about this whole rewrite; Jarred Sumner has an enormous internet following, and it feels like an ad.
      • fragmede 15 hours ago
How do you want to define "ad", and why does it matter? If I tell you I had lunch, okay, great. If I tell you I had a delicious Coca-Cola with my lunch, sure. If I happen to work at Coca-Cola, does that now become an ad? At what level does it become an issue? And what is the issue?
        • roxolotl 14 hours ago
          If you work for Coca-Cola then yea there’s reason to question your intent even if simply because you aren’t objective due to your proximity to Coca-Cola.
    • q3k 16 hours ago
> solving the "tests not passing" problem by changing the tests themselves

      https://github.com/oven-sh/bun/pull/30412/changes/68a34bf8ed...

      This is great! Just add a random sleep(1) to a test, don't worry about it, it's going to be fine!

      • onli 15 hours ago
On the other hand, the sleep fits the test description better: "should allow reading stdout after a few milliseconds". Even if 1 != 'a few'. It's possible the part of the commit reverted here, https://github.com/oven-sh/bun/commit/a42bf70139980c4d13cc55..., defeated the purpose of the test by removing the sleep. I don't think adding the sleep back is an example of AI cheating.

        Strange test though either way.

      • robryan 15 hours ago
        To be fair the commit message `revert proc.exited change in spawn.test.ts` suggests the sleep was there originally.
    • solid_fuel 7 hours ago
      I wish I could take a look through the tests to see if anything substantial actually changed, but I can't even get github to load the diffs for me.
    • Imustaskforhelp 16 hours ago
> I started looking at the commits, and it's basically solving the "tests not passing" problem by changing the tests themselves. The real work of making it work on programs that are already deployed is just starting now.

Wow, this is definitely quite something.

Can Jarred comment on whether he has read the commits, or respond to your comment? If this turns out to be correct, it has basically made me lose the small faith I had in what Bun is doing.

      • xiphias2 16 hours ago
It's OK, we'll see how it goes. He and Anthropic are giving it to us for free, and nowadays forking the old version is easy if a project needs that. Even maintenance is much easier using LLMs.

        I'm happy it's not a project I'm depending on, but a large enough project had to try this at some point so that we all can learn from how it goes.

I think this is why Anthropic bought Bun: so they can sell big code translation as a feature to all the banks that have wanted to get rid of their COBOL code for a long time.

        Still, those banks / enterprises won't appreciate the number of unit test changes.

And I agree with another comment that Codex xhigh is much better for these kinds of tasks, but it's still hard at this kind of scale.

      • Tadpole9181 1 hour ago
Jarred has commented on this elsewhere in the thread, basically claiming the parent you replied to is outright lying: it removed no tests and has not meaningfully changed annotations to reduce coverage or effectiveness. It added additional tests and made a few changes to hard-coded values due to differences in, as an example, how LLVM and Zig handle stack frames.

        The MR is right there, linked at the top of this page. You can check who is telling the truth.

        That said, I don't know how anyone is actually claiming to have done that. All day, the size of the MR makes the diff take too long to load and GitHub dies. I'll have to pull it later to check myself.

    • mohsen1 11 hours ago
In tsz[0], 100% of tests pass, yet I have a ton of bugs. I don't think any software out there is fully tested, really. I'm experimenting with this idea as well. So far I've learned a ton.

      I'm convinced the future of writing code is heavily LLM assisted

      [0] https://tsz.dev

    • Jarred 15 hours ago
> it's basically solving the "tests not passing" problem by changing the tests themselves.

      False.

      0 test files were deleted. 0 pre-existing tests were skipped, todo’d, or had assertions removed. 5 new tests were added in test.skip/test.todo state to track known not-yet-fixed bugs in the port that lacked test coverage before.

      The merge changed 28 test files in total.

      +1,312 lines

      −141 lines

      Most of that +1,312 is new tests.

      The depth-of-recursion tests for TOML/JSONC parsers went from 25_000 -> 200_000 because Rust’s smaller stack frames (LLVM lifetime annotations let the optimizer reuse stack slots) mean 25k levels no longer reaches the 18 MB stack on Windows.
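The arithmetic behind that bump is observable from outside: a recursive-descent parser spends roughly one stack frame per nesting level, so the maximum parseable depth is about stack size divided by frame size, and smaller frames raise the ceiling for the same stack. A hypothetical sketch (not Bun's actual test), using `JSON.parse`, whose recursive parser overflows the stack on deeply nested input in most engines:

```javascript
// Hypothetical sketch: probe roughly how deeply an input can nest before
// the parser's recursion exhausts the stack. Smaller per-level frames
// (as claimed for the Rust port) push this limit higher for the same
// total stack size.
function maxParseDepth(upperBound) {
  let deepest = 0;
  for (let n = 1; n <= upperBound; n *= 2) {
    const nested = "[".repeat(n) + "]".repeat(n); // n levels of arrays
    try {
      JSON.parse(nested);
      deepest = n;
    } catch {
      break; // typically RangeError: maximum call stack size exceeded
    }
  }
  return deepest;
}

console.log(maxParseDepth(1 << 22)); // engine- and stack-size dependent
```

The printed depth varies by engine, stack limit, and build flags, which is exactly why a hard-coded recursion-depth constant in a test had to change across the port.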

    • oleggromov 6 hours ago
      [dead]
  • sltr 13 hours ago
    I will move the handful of my projects that use Bun to something else. I don't trust governance that permits this kind of reckless change.
    • steve_adams_86 2 hours ago
      Deno is amazing and doesn't get the love it deserves, in my opinion.

      It doesn't need to be rewritten because it was written well in the first place.

    • 4b11b4 13 hours ago
      Same, just gonna stick with node. On the other hand, the trial by fire will be interesting to see... long term I can only imagine the kinks will surely work themselves out
      • szmarczak 12 hours ago
        • xydone 11 hours ago
          This is a PR that has been getting reviewed since the end of January. The Bun port branch was created 9 days ago.
          • tasuki 6 hours ago
            Yes, reviewed since January, has almost 400 comments, and 7 (seven!) approvals from core nodejs contributors.
        • Philpax 6 hours ago
          I don't understand the point you're trying to make here.
  • weraK10 8 hours ago
    As an educational thread, see this one from a week ago where Jarred again deflects from a merge decision and legions of foot soldiers attack anyone who predicted the impending merge:

    https://news.ycombinator.com/item?id=48073680

    Didn't age well, did it?

    • progforlyfe 1 hour ago
      From "This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely." and what seems to amount to some experimental curiosity -- to merging the whole thing in 10 days!? This seems really crazy.
    • wiseowise 4 hours ago
It'll never cease to amaze me how many bootlickers are out there who don't really care which boot to lick.
  • eqvinox 16 hours ago
    If this goes wrong even in the slightest, the ridicule about a drug dealer getting high on their own supply will be neverending and grim.
    • teterphiel 15 hours ago
      not enough people are emotionally prepared for if it’s not going wrong even in the slightest
      • janice1999 15 hours ago
        It's going to work for the most part. Most people know that. It's a file by file, mostly function by function, conversion from one low level language to another with a very large test suite (with lots of Rust unsafe to work around differences). I've done that for C tools and it's fine, with some obscure edge cases here and there. The challenges are going to be making the new, very ugly, alien codebase idiomatic Rust in future and adding features or debugging the complex issues. I wish the developers luck. They're in for a slog.
        • sesm 9 hours ago
          Just to clarify, you did this for C tools using LLMs or using deterministic conversion tools?
      • noobermin 2 hours ago
        I think given the novelty of this, a lot of eyes will be on it, so a lot of issues will be dealt with out of the gate. The problem will be when smaller projects that aren't in the spotlight think it's safe too and then do stuff like this after being encouraged by bun, and for those projects then lots of bugs will just remain unfixed. Basically a nation state adversary's wildest dreams came true today.
        • deadbabe 1 hour ago
          If that scenario happens it just means the collapse will be slower but still inevitable as anecdotes pile up and reach critical mass of common knowledge.
          • noobermin 1 hour ago
Yeah. I'm just suggesting Bun won't blow up as spectacularly as anti-AI people are expecting it to.
      • debugnik 15 hours ago
        Having seen some of the diffs, it's already going wrong in my view.
        • surajrmal 12 hours ago
          If most of the glaring problems are addressed (massive unsafe usage), and metrics show improvement (less crashes), then did it really go wrong? The fact the code is not idiomatic is less interesting, because that can be addressed incrementally. Let's wait 3 months and reflect.
          • debugnik 11 hours ago
            I'm thinking regressions and broken tests. Bun is already known to segfault a lot and their existing tests were rather lackluster, the Rust port being just as unsafe would be the least of their problems.
            • ajyoon 8 hours ago
              This assumes that the memory safety bugs in the unsafe Rust port are the same as the Zig codebase. A total rewrite with so little review is virtually guaranteed to introduce many new bugs which very well may be more severe than the old bugs.
        • adityashankar 15 hours ago
          Curious can you elaborate on this?
      • kgwxd 5 hours ago
        I expect it will be just fine. It's like bragging about getting the words right on a mental health exam. AI was given the answer, it just repeated it back in a slightly different format. Even a stupid human could have done that.
      • happytoexplain 12 hours ago
        However, you can never prove that it hasn't gone wrong, because there are so many long-form problems with software (quiet bugs, maintainability issues, etc). This creates FUD.
    • sesm 9 hours ago
      Wasn't looking at leaked Claude Code source already enough for the ridicule?
    • whateveracct 6 hours ago
      they are already high on their own supply

      did you read their Mythos paper? they're anthropomorphizing it like crazy. Maybe it's just cheap heat, but if they really believe the LLM is conscious..wew

  • sensanaty 15 hours ago
    Love seeing the tests themselves getting modified, with random `sleep(1)` thrown around in a few of them. This bodes well, I pray some idiot at some large AI co actually ends up using this garbage in prod
    • dolmen 13 hours ago
      Claude Code uses Bun as its runtime.

      If this has been merged, I expect that Bun-rust is good enough to power Anthropic's internal agents to do live testing.

      • merlindru 10 hours ago
        Jarred had tweeted that they're using the rust version internally with Claude Code
        • noobermin 2 hours ago
          This era is hilarious. I just wish I didn't have to rely on code written by these idiots.
  • vermilingua 4 hours ago
Having just migrated all my team's repos to Bun, I feel… stupid. I was already feeling a little nervous by the time of the acquisition, but this is pretty rough.
  • wesselbindt 4 hours ago
    This kind of frivolous nonsense disqualifies bun from ever being a serious option to me. I'm not building any kind of software used in a professional setting on 1M lines of unreviewed code.
    • nicce 4 hours ago
Odd take. Bun was not an option for me because of Zig: there was no memory safety, and the issue tracker has 3000 issues about segfaults. Now I might actually reconsider.
      • steve_adams_86 1 hour ago
        > There was no security

        >1M lines of un-reviewed code are secure?

      • wesselbindt 4 hours ago
        I don't believe you actually think it's odd to not want to run unreviewed code in prod. I accept that you might disagree, but I don't believe this is a take you haven't heard a million times before.
  • asciimoo 12 hours ago
Regardless of the outcome, this is such a disrespectful move towards the huge number of contributors who invested time and effort to learn the project and make it better. I hope the Zig/dev community forks the project and continues development. I'd rather use the fork than a project that has sacrificed its contributors for marketing purposes.
    • embedding-shape 12 hours ago
      > this is such a disrespectful move towards the huge amount of contributors who invested time and effort to learn the project and make it better.

      What? How?

      You contribute to projects run by others with the understanding that others run the project, is this not the default assumption others have too when contributing to FOSS?

      Is it disrespectful if my proposed feature was merged, but then later was removed because the maintainer just didn't want the feature anymore? In my mind, pretty clear it wouldn't, I'm only a contributor after all, not the maintainer or the person running the project.

      • asciimoo 12 hours ago
        > Is it disrespectful if my proposed feature was merged, but then later was removed because the maintainer just didn't want the feature anymore?

        No, the big difference is that the described scenario does not require getting familiar with a new 1M LoC codebase written in a different language to be able to continue contributing to the project.

        • embedding-shape 12 hours ago
For whom? What you say is true for everyone who doesn't know Rust (previously, Zig), and not true for everyone else, same as it always has been, for every single FOSS project out there.

          So it's disrespectful because before you could contribute, but because of the direction of the project, you no longer can?

Does that also mean it'd be disrespectful to make projects more complicated and complex, because someone who contributed initially might not know the new concepts, so introducing them would require that individual to learn them?

          All of this still sounds like entitlement to me. Open source literally isn't about you, let people run their projects as they so wish, them making choices they think are better isn't disrespectful to anyone else, you're not forced to having to contribute to any FOSS projects.

          • asciimoo 11 hours ago
            > For who? What you say is true for everyone who doesn't know Rust (before Zig), and not true for everyone else, same as it always is been, for every single FOSS project out there.

            Even if you are fluent in rust, it is going to require significant efforts to contribute to a new 1M LoC codebase.

            > Open source literally isn't about you, let people run their projects as they so wish, them making choices they think are better

This is so far from reality. The power of open source comes from the contributors. Contributors are the most valuable asset of an open source project; without them, most of the free tools you use, including Bun, would be significantly worse. The reason my open source projects got somewhat successful is the community that formed around them. And it is hard to create a community when you give contributors no chance to participate in the project's direction, especially in such a critical decision with enormous consequences.

            • embedding-shape 11 hours ago
              > Even if you are fluent in rust, it is going to require significant efforts to contribute to a new 1M LoC codebase.

Of course, but this is true for any project or any language. It can hardly be disrespectful of me to choose Clojure just because you don't happen to know it? That sounds crazy to me.

              > Contributors are the most valuable assets of an open source project

You're talking about something else. Open source is literally about "this code has a specific license that allows you to do X", where X differs by license. Whether or not there are contributors has no bearing on whether some open source project is valuable.

              Don't mix concerns here, you're talking about "open development" or something else, not specifically open source.

Sure, it's hard to create a community and get contributors and whatnot. But a maintainer chooses a different language, and people call that "disrespectful" rather than just "stupid" or "dumb"? No, give me a break. You run your projects your way and let others run theirs their way; they're not made for you, they just happen to be available to you because someone was nice enough to make it so. Don't spoil that by acting so entitled about how they should maintain and develop their project.

              • asciimoo 11 hours ago
                > Of course, but this is true for any project or any language, can hardly be disrespectful of me to chose Clojure just because you don't happen to know it?

Nobody said the problem is not knowing Rust. The problem is changing the whole stack of a project overnight. That requires significant effort to get familiar with, even if a contributor has all the experience in the world with the new stack.

                > Don't mix concerns here, you're talking about "open development"

Call it whatever you want; Bun could not be the tool it is without its >800 contributors.

                • hombre_fatal 5 hours ago
                  I think most maintainers would rather you not contribute to their project if your contribution comes with the idea in your head that you're now a stakeholder who has some share in the project's technical direction.
                  • pessimizer 3 hours ago
                    Of course they're a stakeholder. They've made an investment of time and effort, and they're hoping that it will pay off. The question is whether a maintainer will respect that.

If you want to maintain sole ownership of something that >800 people contributed to, that reflects on you. People will judge you. Most maintainers would feel obligated to concede some control. But LLM vendors have intentionally aimed to devalue programming, so this transition is totally consistent with the new ownership. And it may be wildly successful, because they've got an unlimited supply of tokens for the foreseeable future.

                    But I'd say the opposite: Most maintainers would feel blessed to have a lot of contributors so invested that they felt a need to have a say in the direction of the project.

              • wiseowise 4 hours ago
                > No, give me a break, you run your projects your way, and let others run theirs that way, they're not made for you, they just happen to be available to you because someone was nice enough to make it so. Don't spoil that by acting so entitled about how they should maintain and develop their project.

                Well in this case Jarred and Bun can run their project their way, and since they're not made for me, they can just happen to be available to someone else like Claude code and they can stay in their happy read-only land.

                > Don't spoil that by acting so entitled about how they should maintain and develop their project.

                Are you sure you even understand what entitled means?

          • wiseowise 4 hours ago
            > Open source literally isn't about you, let people run their projects as they so wish, them making choices they think are better isn't disrespectful to anyone else, you're not forced to having to contribute to any FOSS projects.

            Tell me you've never worked on any meaningful OSS project.

            Good luck to Bun, if I was in any of its contributors list, and not on Anthropic's payroll, I'd say goodbye and never touch the project with a ten foot pole. And I say this as an honest feedback, save your "don't let the door hit you on the way out".

    • tasn 12 hours ago
How is that different (in this sense) from any "slower" rewrites or other significant changes?
      • asciimoo 12 hours ago
        The difference is exactly the speed. Slowly transitioning from one thing to another gives the opportunity to contributors to get involved in the process.
        • adampunk 3 hours ago
          So? Keep up.

          Just because some set of hypothetical contributors want a slow-moving target and the maintainers want to be on Rust now, I'm supposed to be mad at the maintainers? Why?

  • perching_aix 16 hours ago
    PR so thick, the page failed to load the first time I opened it, and the comments still continue to fail to load. Absolutely hilarious. Though that may be just GitHub having a normal one, hard to tell these days.

1,009,257 lines added

4,024 lines removed

6,755 commits

2,188 files touched

    I haven't the slightest clue how anyone would even remotely hope to review this. I guess by just using even more AI? Or maybe by throwing some über hardcore lint pass onto it? It really seems like more an exercise in risk assessment than code review.

    • 12_throw_away 7 hours ago
      The maddening thing is that there's a right way to do this if you have the patience and professionalism to do so. It requires building a bit of scaffolding (feature flags, cross-language calling support, harnesses for shadow testing, etc.), then you ship-of-theseus the codebase incrementally. This is not even incompatible with LLM-assistance, plus it breaks the thing up into smaller, reviewable changes that don't break your diff tool!

      However, doing it the right way takes a bit more time, involves community feedback, and doesn't produce headlines about huge codebases being rewritten by LLMs in just a few days, so ...

      • hombre_fatal 6 hours ago
        There is never a right way, only trade offs.

        The thing about being a Monday morning quarterback is that you can always claim you would have used even more caution and process.

        • 12_throw_away 5 hours ago
          > you can always claim you would have used even more caution and process.

          Well, specifically, my claim is that any serious professional in this industry would have done so. But we're essentially in agreement, in the sense that yes, I am allowed to make this claim, and in fact already did, in the comment you are replying to.

          EDIT: Actually I've been thinking about this a bit more. The thing about commenting on something that someone did is that you must always comment on it after they did it, otherwise it wasn't "something they did." However, being a "Monday morning quarterback", as I understand it in this context, means "criticism of someone's actions afterwards", so it would appear that I am doing that. I also understand this phrase to have a negative connotation, and I would hate to connote negatively in this otherwise very positive community. Quite a dilemma! Glad I have my life coach LLM to help me sort all this out.

        • wiseowise 4 hours ago
          > There is never a right way, only trade offs.

          There is a right way, especially when you have a community.

        • ajyoon 2 hours ago
          Can you cite a single software project with so many users which did a language migration in a more cavalier way?
      • kccqzy 1 hour ago
        Ah yes, you are actually describing fish shell's Rust rewrite. They specifically called it The Fish Of Theseus which is of course a reference to the ship of Theseus.

        https://fishshell.com/blog/rustport/

      • chamomeal 5 hours ago
        I mean it's definitely at least partially a PR stunt
    • fg137 1 hour ago
      Bun is owned by Anthropic.

      Hopefully that answers all your questions.

    • chrysoprace 15 hours ago
      Not sure there is much of a point in reviewing a port of this size. It has >1000 instances of `unsafe` and uses the same patterns as the zig code according to Jarred. It feels like a vibe-ported version of what the TypeScript team are doing porting from TypeScript > Go with codemods.
    • 12345hn6789 11 hours ago
      Humans are no longer maintaining bun. There is no good faith argument that can claim a human understands this rewrite
  • matt3210 6 hours ago
    Anthropic buys bun, makes them spend tokens to convert to rust, nobody understands it anymore, locked into ai now
  • alfanick 8 hours ago
    I'm confused. Never heard of Bun until a few days ago here on HN. It's some nodejs wrapper thingy, written in Zig, and someone decided to use LLM to rewrite it in Rust. Is this a big deal? Who is even using this software? Why is this big?
    • tshaddox 8 hours ago
      Bun isn't a node.js wrapper. It's an alternative to node.js that sits at roughly the same spot in the stack.

      Node.js is a distribution of the V8 JavaScript engine (the thing that executes JavaScript in the Chrome browser), along with a bunch of standard library code written mostly in C++.

      Bun is a distribution of the JavaScriptCore engine (the thing that executes JavaScript in the Safari browser), along with a bunch of standard library code written mostly in Zig (and now Rust). Bun's standard library is in many cases compatible with or inspired by the Node.js standard library, but with some changes for convenience and performance.

      • serial_dev 6 hours ago
        Answering “who is even using this software” is unfortunately missing in your answer. I am honestly curious. I’ve never seen it “in the wild” (in job descriptions, hearing from past colleagues, meetups etc). Only place I heard about it is HN and Twitter.
        • cleaning 5 hours ago
          It's primarily used by people who tend to sit on the cutting edge e.g. startups and developers who follow the latest tools. It's not well worn enough to be adopted by slower enterprise environments. Bun is well known within web development but if you don't work in the space and don't keep up to date with modern tooling it's unlikely you would have awareness of it.
        • lightamulet 6 hours ago
          I'd say the most prominent user (and the reason why Anthropic acquired Bun) is Claude Code
        • fg137 1 hour ago
          To my limited knowledge, "serious" production systems most likely use Node.js instead of any alternatives, and I don't see any movement towards adopting Bun.
        • 2c2c2c 4 hours ago
          notably anthropic on a multibillion revenue product
    • konart 8 hours ago
      Rust vs Zig "wars" etc.

      Also at some point Bun was acquired by Anthropic. And some people feared that this will greatly influence Bun's development.

      • carllerche 6 hours ago
        I don't think Rust vs. Zig has anything to do with why people are talking about this. It is a large piece of "real software" that underwent a full language transition in ~1 week using LLMs. That is a big deal regardless of the language and will be a case study regardless of how it turns out.
        • ryanschaefer 6 hours ago
          It’s a watershed moment. Basically one of the most controlled applications of an LLM into a robust codebase without regard for the implications of doing so.

          Anthropic needed something like this and it must proceed flawlessly. My guess is that nothing will explicitly break. But that’s the difficulty of LLM generated code: nothing breaks. You sit with a codebase that swallows all errors and appears to be working. Silently failing makes debugging performance and behavior much harder.

      • binary132 8 hours ago
        which was obviously a reasonable reaction.
    • applfanboysbgon 8 hours ago
Bun is not a Node.js wrapper; it is a Node.js alternative. It had non-trivial adoption: tens of thousands of stars on GitHub, for whatever that's worth (before AI spam took over stars). It was then purchased by Anthropic, and now we're witnessing open source software that people used being sacrificed on the altar of LLM marketing hype.
    • hxtk 8 hours ago
      I think relatively few people are probably running Bun in production, but as a dependency management system and bundler for the JavaScript ecosystem, it's similar to `uv` from the Python ecosystem in how much faster it is compared to the most popular alternatives so it's fairly popular in that space.
      • mgrandl 7 hours ago
        PNPM is just as fast and much more reliable.
        • hzmi 6 hours ago
          Agree with this. Been a long time pnpm user that also uses bun nowadays. Not much faster other than initial startup because pnpm uses Node.js

          pnpm has also tried a Rust rewrite before, called pacquet. It is currently being revisited.

    • jesse_dot_id 8 hours ago
      Not mature enough for everyone to be using it yet, but it may dominate the space down the line. They compete with Deno.
    • nonameiguess 5 hours ago
      I've never done any JavaScript development of any kind and had never heard of this either. I thought it was a package manager at first, but apparently it's an entire runtime.

      My question is, if it's this trivial to rewrite Zig to Rust, and trivial in general to write Rust at all, why not just use Rust for your server side code in the first place? What's the value of continuing to use JavaScript and putting so much effort into the runtime?

    • yoyohello13 8 hours ago
      Bun has a lot of buzz as 'the next big thing' in the JS ecosystem, and was recently purchased by Anthropic. So it's kind of in the zeitgeist.
    • reducesuffering 4 hours ago
      >Is this a big deal? Who is even using this software? Why is this big?

      Let's see. $10T in market cap, a significant chunk of everyone's assets and retirement funds, are currently dedicated to AI build out because of the potential for AI like Claude Code, which is recently doing $3b in revenue, and built completely on Bun.

      If Bun is able to successfully vibe code a complete language shift in this short of time, it much more concretely validates the potential of vibe coding / AI for the entire industry.

  • simonklee 15 hours ago
    Say what you want, but for people building products on Bun, this is bad news for the foreseeable future.
    • arealaccount 15 hours ago
      I guess it’s time to have Claude rewrite my Bun app in Deno
      • sesm 9 hours ago
        I'm sorry, Dave. I'm afraid I can't do that.
      • KronisLV 6 hours ago
        Hear me out, what if we rewrote Deno in Zig?
      • avithedev 14 hours ago
        This made me laugh loud
      • bflesch 8 hours ago
        That's a great idea! Once you've granted access to your private repository I can do that.
    • dolmen 13 hours ago
      I guess that the next release of Claude Code will use that runtime.

      No later than next week.

    • preommr 14 hours ago
      I am genuinely speechless.

      I don't understand the rationale behind how any project, especially of this magnitude, can seriously build something stable this way.

      My consolation - and it could be pure cope - is that at least I am in the same boat as a huge company like Anthropic, and they surely wouldn't be stupid enough to also build their cli tools around something that they saw as risky.

      feelsbadman.

    • slig 13 hours ago
      This is bad for anyone building on Zig.
      • dormento 11 hours ago
        Cue the clueless CEOs of zig shops (I don't know many, but still):

        "Rust is faster and safer! Port it! If you don't do it, I'll do it myself, because AI can do everything a programmer can, including the stuff you don't want to do. Ship it!"

        • allthetime 1 hour ago
          What serious zig shops exist are generally run by actual engineers. Check out tigerbeetle if you want a good example.
      • phplovesong 13 hours ago
        Why would it be? There are projects like Roc that did the opposite: they went from Rust to Zig, as they (had to) use lots of unsafe Rust. And before you ask, no, it was not an AI-generated rewrite.
        • npn 12 hours ago
          that is the point. a rewrite is fine when:

          - you take your sweet time doing it

          - you still know the codebase full well

          that will ensure the new codebase can still be well understood and can continue to grow in the foreseeable future.

          or you can just vibe the whole thing if it is a legacy project with all the specs and edge cases known.

          since the bun rewrite is neither of those cases, it will be a crapfest soon enough.

  • frangonf 16 hours ago
    So the geniuses in the datacenter prefer to rewrite the full codebase in another language instead of maintaining and improving their own fork or contributing to make the current language better.

    Impressive to rewrite 1MLOC in a week, yes, but this is more the job of a million monkey programmers crammed in a datacenter than of a bunch of geniuses. And I would know, since I'm a monkey programmer who is in danger now... Or maybe the Zig team is in greater danger, since their brains hold the genius juice the clankers are missing and they should have it by 2027...

    • sesm 9 hours ago
      > Or maybe the Zig team is in a greater danger, since their brains hold the genius juice the clankers are missing and they should have it by 2027

      Imagine you want to monopolize programming by pushing LLMs as an obligatory middleman. Then people who can program without LLMs are a direct threat to your business plan. It's time for us to start hiding. I'm considering adding `co-authored by Claude Code` to my hand-written commits and running Claude in useless loops to mock API usage.

    • wiseowise 4 hours ago
      You seriously think any of them gives a shit about any of this? They're part of Anthropic now, making money is the only goal.
    • q3k 15 hours ago
      No matter how I look at this, it's churn for the sake of churn.

      Even if the translation was free and into ideal idiomatic Rust (and it's obviously not - it's now Zig with Rust syntax) then this would be churn for the sake of churn.

      At some project scale the language really stops being any limiting factor, and you're instead mostly dealing with working past architectural decisions, integration of large changes, deep optimization, steering the codebase into alignment with project roadmaps and long-term goals, regression testing as features get introduced, maintenance of multiple release trains... Experienced software engineers mostly stop caring about simple things like the programming language choice at that point, because whatever issues come from that choice have already been resolved. What matters is stability, careful orchestration of large changes and a stable and comprehensive test suite.

      • kllrnohj 7 hours ago
        > At some project scale the language really stops being any limiting factor

        That's not entirely true. At a certain scale, some languages start becoming increasingly more of a factor. Memory issues in C/C++ codebases, for example. This is pretty well established at this point, which is why there's a push to move away from memory-unsafe languages. Which likely would include Zig, for better or worse.

        • q3k 6 hours ago
          I agree that new software should avoid memory unsafe languages, but I would disagree that rewriting existing projects in a memory safe language at all cost is a universally good idea.
          • hombre_fatal 5 hours ago
            But you just shifted the claim to "at all cost".

            What if there isn't much cost? What if the benefits outweigh the cost?

            • zx0r23 5 hours ago
              I mean... the token cost alone on this thing...
      • cmrdporcupine 58 minutes ago
        I think it's not churn for the sake of churn. It's likely encouraged by the fact that Zig itself will not accept AI-written code contributions.

        So now imagine your company and project -- written in Zig -- has just been acquired by the world's biggest/second-biggest AI company.

        That company's most successful and popular tool is running on your platform, which is written in Zig.

        And Zig maintainers want nothing to do with you.

        What kind of pressures, real or imagined, do you think that puts on the developers of Bun?

        Honestly, from what I've seen from a distance, actual rigorous software engineering doesn't happen at Anthropic. From what we saw of the Claude Code source, the reliability issues over the last few months, and now this. It's just a bunch of people getting high on their own supply falling all over each other. Quality issues galore and a delirious frenzy.

        FWIW I don't think it's intrinsic to AI. Codex is very well written (in Rust, BTW), fast, and consistent.

      • nefasti 14 hours ago
        The "idiomatic Rust" thing rubs me the wrong way. If someone writes Rust that compiles and works, that's Rust, full stop. Telling people it doesn't count until it's "idiomatic" is just gatekeeping. It quietly says you're not a real Rust dev until you've put in years and absorbed all the unwritten rules, which shuts out exactly the people who are still learning. Everyone writes "non-idiomatic" code when they start. That's not a failure, that's how learning works. Even if it's written by LLMs, the devs will still need to improve their knowledge to maintain the codebase.
        • cogman10 10 hours ago
          I get the feeling, and shooting for idiomatic on a rewrite is definitely wrong.

          That being said, "idiomatic" is more just saying "clean and familiar". It's using the right language features in the right places.

          For example, you could write something like this

              fn add_double(a: f64, b: f64) -> f64 {
                return a + b;
              }
          
              fn add_float(a: f32, b: f32) -> f32 {
                return a + b;
              }
          
          But that's not idiomatic. Idiomatic would look something like this

              fn add<T: std::ops::Add<Output = T>>(a: T, b: T) -> T {
                a + b
              }
          
          The benefit of the idiomatic approach is now you have a function which handles a bunch of types from u32, to f64 and it also handles custom types and traits which implement the add ops.
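          To make that benefit concrete, here's a minimal sketch (the `Meters` newtype is hypothetical, purely for illustration, not from any real codebase): the same generic `add` covers integers, floats, and any custom type implementing `Add`:

```rust
use std::ops::Add;

// The generic version from above.
fn add<T: Add<Output = T>>(a: T, b: T) -> T {
    a + b
}

// A hypothetical newtype, only to show that custom types work too.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Meters(f64);

impl Add for Meters {
    type Output = Meters;
    fn add(self, rhs: Meters) -> Meters {
        Meters(self.0 + rhs.0)
    }
}

fn main() {
    assert_eq!(add(1u32, 2u32), 3);                          // integers
    assert_eq!(add(1.5f64, 2.25f64), 3.75);                  // floats
    assert_eq!(add(Meters(1.0), Meters(2.5)), Meters(3.5));  // custom type
}
```

          One function, one place to maintain, and downstream types opt in just by implementing the trait.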

          The first method is what you might write if you were, for example, translating from C to Rust. It isn't idiomatic but it's easy to do.

          The other thing to realize is that compiler authors optimize for idiomatic. The more you do things in a strange fashion, the more likely you are to stumble over a way of writing code which isn't being looked at when the language team is looking at performance and compile time optimizations.

          There's nothing wrong with non-idiomatic code per se. However, part of learning a language is learning the idioms. It makes you better at that language.

        • eesmith 14 hours ago
          I believe q3k's comment should be read as "[even if it's acceptable to the most stringent of gatekeepers] then this would be churn for the sake of churn."

          Not that only idiomatic Rust is appropriate.

        • IshKebab 13 hours ago
          Not really. Rust is designed to be written in a certain way. If you machine translate C into Rust you end up with a load of `unsafe` code that follows the C style but consequently doesn't get any of the benefits of being written in Rust.

          Imagine if you translated assembly to C++, but you just did it by putting everything in `asm("...")` calls. That's not idiomatic C++ and you wouldn't get any of the benefits of using C++.

          That said, the Rust code I skimmed actually did look surprisingly idiomatic. It wasn't full of `unsafe` like I would have expected.
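        A toy contrast (a sketch of the general pattern, not code from the Bun port): a mechanical C-style translation keeps raw pointers and needs `unsafe`, while the idiomatic version uses slices, which carry their length and are checked by the compiler:

```rust
// Mechanical C-style translation: raw pointers, so the borrow checker
// and bounds checks can't help; the caller must uphold all invariants.
unsafe fn copy_c_style(src: *const u8, dst: *mut u8, len: usize) {
    for i in 0..len {
        *dst.add(i) = *src.add(i);
    }
}

// Idiomatic Rust: slices know their length, aliasing is checked at
// compile time, and no `unsafe` is required.
fn copy_idiomatic(src: &[u8], dst: &mut [u8]) {
    dst.copy_from_slice(src); // panics if the lengths differ
}

fn main() {
    let src = [1u8, 2, 3];

    let mut dst1 = [0u8; 3];
    unsafe { copy_c_style(src.as_ptr(), dst1.as_mut_ptr(), src.len()) };
    assert_eq!(dst1, [1, 2, 3]);

    let mut dst2 = [0u8; 3];
    copy_idiomatic(&src, &mut dst2);
    assert_eq!(dst2, [1, 2, 3]);
}
```

        Both compile and both work, but only the second gets the guarantees people switch to Rust for.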

    • TiredOfLife 11 hours ago
      > or contributing to make the current language better

      The people making Zig have said they don't want that.

      • frangonf 9 hours ago
        They also said that:

        > Code origin was not even a factor [0]

        > AI is entirely besides the point here. The changes in this Zig fork are not desirable to upstream for several reasons. [1]

        So my view here is that, besides AI policies to filter low-value contributions and "contributor poker" [2] to attract contributors rather than just contributions, a well-thought-out implementation aligned with the Zig roadmap, instead of the "hacky implementation for a flashy headline" [1], would have made the cut.

        But then again this entertaining drama will sadly get deprecated by mid 2027 as the datacenters will be churning out their own opusrust and clankzig.

        [0] https://news.ycombinator.com/item?id=48017255

        [1] https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...

        [2] https://kristoff.it/blog/contributor-poker-and-ai/

  • pl-nerd-9000 2 hours ago
    I just skimmed through the porting guide and, based on the number of unsafe blocks, this looks like a fairly straightforward mechanical translation.

    If that is the case, why didn't they just "vibe-code" a Zig->Rust translator and a small Rust/TS/JS/whatever script to orchestrate things. You don't even need pretty printing support because rustfmt exists.

    You'll save on a bunch of tokens, probably a lot of time/energy, the process becomes auditable and (hopefully) deterministic, and if there's a mass bug in the translation, you only have to fix it in one spot.

    • mplanchard 1 hour ago
      Bun is owned by anthropic. They get infinite tokens, and anthropic gets a fluffy PR piece slash advertisement.
    • fg137 1 hour ago
      But... But... It's going to be harder for them to claim "AI did the rewrite"!
  • fold_left 9 hours ago
    Honest question, how many of the leaks and crashes can be attributed to zig the language vs possibly (maybe, we don't know) a loosey-goosey, slot machine approach to development heavily reliant on AI? Will the inherent leaks and crashes be fixed, purely by dint of porting to Rust?
  • deathanatos 13 minutes ago
    I feel like there's an iron triangle here, that involves "is vibe-coded", "is secure" and "accepts bugfixes".

    Like, you didn't review that 1M LoC. There's no way to have done so. If we're accepting slop-fest PRs, then nothing stops an attacker from burying a security bug in a slop-fest PR that then gets reviewed. And if I'm the attacker, I'm crafting that security hole to have subtle clues to the security AIs reading it as to why it's "correct" so that your AI review bot goes "oh, yeah, this logic works".

  • defen 11 hours ago
    If LLMs can achieve this level of task in 9 days, why do we even need Bun in the first place? Shouldn't we just write our apps in Rust and not even deal with JS?
    • dheatov 11 hours ago
      Why even Rust in the first place? I don't see why we can't go straight from natural language -> Claude -> HTML/JS/CSS bundle. Instead of writing a webpage, one can just write a prompt for each page and serve it with claude.cgi
      • hstaab 10 hours ago
        Can a webpage run my factories?
        • shantnutiwari 10 hours ago
          Yes, it can. Just vibe-code Claude to connect to your lithography machine and voila! Claude will run your factories. Claude can even apply oil to your rusty machines if you choose the $1000/month package.
      • morkalork 10 hours ago
        And if you inject information about the user into the context, everyone can have their own personalized version and we'll turn the internet into the tower of babel where no two people see or experience the same thing.
      • shantnutiwari 10 hours ago
        >> I dont see why we can't go straight from natural language -> Claude -> HTML/JS/CSS bundle.

        Or we could just rewrite everything in assembly, because that's fast. Well, Claude can do that. (/s ??)

  • aarjaneiro 11 hours ago
    So many of the code comments in the new port concern only how it was ported, usually referring the reader to the original Zig implementation.

    So now I'd basically be reading 2x the amount of comments and code to understand _why_ anything is happening.

  • dgellow 7 hours ago
    If the Bun team is around, I would be interested in their opinion on this: in the old days, migrating a 1M-line codebase from one language to another meant you would pretty much become an expert in the target language. The output of the work is team experience/knowledge + the actual rewrite. With this Bun rewrite, do you feel that the team learned something other than "Claude can rewrite a very large codebase in no time", which is impressive in itself? Is the output only the rewrite, or did you learn something along the way? And how do you feel about your answer? Not a snark question; like a lot of others, I'm myself trying to understand how I feel about how our profession is/has been changing.
    • randusername 6 hours ago
      I used to think software was inherently valuable.

      Then I decided that software is of limited value without a team to maintain it. Not necessarily because they fix it, but because they represent a bunch of humans who collectively understand it and therefore give it more possibilities.

      And now this. I'm not sure what to make of it.

  • ryanschaefer 10 hours ago
    I think one of the things I had forgotten about but sheds some more light in my mind about how this was done is that anthropic bought bun.

    The change of tone from the author about the capabilities of Claude. The strategy of merging everything at once instead of a slower, careful cutover. The "single author" story that every company loves to put forth.

  • 9999gold 15 hours ago
    Wondering what they will do when rust rejects a pr from them.
    • sesm 9 hours ago
      I guess they vibe-rewrite to C, relying on CCC compiler. Agent loop will be modifying both the project and the compiler until the ends meet.
  • LucidLynx 8 hours ago
    Is this really the state of "software engineering" today? :/
    • wiseowise 4 hours ago
      You'll pay for tokens and you'll be happy.
    • gmueckl 7 hours ago
      That's what the new AI overlords want the world to believe, at least.
    • registeredcorn 7 hours ago
      mushware*
  • RobLach 56 minutes ago
    Rust needs to remove the unsafe keyword to finally fulfill its destiny as a practical LLM generation target.
  • christophilus 8 hours ago
    Well, that escalated quickly. I think I first heard rumors of this a week or two ago. That's a very fast turnaround for such massive code churn. I don't know how to feel about this.
  • ibejoeb 8 hours ago
    Github is failing to load the 800 comments, naturally. I'll bet they're fun.
    • xigoi 7 hours ago
      Too bad modern computers are not capable of processing 800 paragraphs of text. That’s several hundred kilobytes! Maybe the technology will advance thanks to AI…
    • Imustaskforhelp 8 hours ago
      Github actually made my computer lag when there were no comments at all because of the 1 million lines of code added iirc. I could've responded something first but well I wanted to say something meaningful and didn't have anything so I just closed it.

      I had to literally force quit my browser because of how much it lagged iirc.

    • pixelesque 7 hours ago
      6,755 commits for the PR as well...
  • zapnuk 14 hours ago
    We should be grateful for this. This is the one public case study on how large-scale LLM-driven code generation actually works out.

    With Node and Deno, there are reasonable alternatives for everyone who doesn't want to use Bun anymore.

    • wiseowise 4 hours ago
      > This is the one public case study on how large-scale llm-driven code generation actually works out.

      Is it, really? I can't imagine how much money in tokens was spent to get something like this + Jarred's and the teams salaries to review/manage this.

    • davemp 12 hours ago
      It’s not a public study though. We’re not going to get trustworthy numbers about labor or token cost.
    • happytoexplain 12 hours ago
      The problem is that many negative effects of this kind of thing won't be clear or immediate, so it's not an easy test to make useful. At minimum, this increases the opacity of the box, reducing perceived trustworthiness.
  • TheMiddleMan 7 hours ago
    This may be the largest AI-generated codebase right now, by a lot. It'll be interesting to see how this plays out.

    Frontier AI software development still falls short in the design/architecture department, in my recent experience. Though it's pretty impressive at making "working" code.

    This being a fairly direct conversion from one language to another, even keeping the same interfaces across files, means the architecture is already in place.

    The detailed test coverage is also very helpful for Claude. But even detailed testing can't cover every edge case.

    So my questions are: How well did Claude do on the edge cases? And how maintainable will this codebase be going forward?

    • KronisLV 6 hours ago
      > This may be the largest AI-generated codebase right now, by a lot.

      I'm sure there's lots of other large scale applications of AI, just not many/any projects that are open source and so high profile - with the changes being done so far.

      Personally, in the past 3 months I've shipped about 2.3M lines of a legacy project migration, though the new codebase is Java + Oracle ADF because of reasons™ and instead of being an interesting codebase, it's more forms heavy and essentially acts as a front end for a large Oracle instance, think more CRUD than application runtime (with an upsetting amount of XML).

      The difference also is that it wasn't migrated by running AI on every file; rather, I dumped the DB schema into JSON and converted the old form contents to a YAML intermediate format that describes what's in the forms, and have been iterating ever since on creating code that generates code - basically AI-assisted development of a codegen solution + AI-assisted sidecars that get merged with the generated code based on markers when something can't be automated that way, and often also AI-controlled browser-based testing (since Playwright is in the cards for everything, but not yet).

      Seems to be going pretty okay so far; it will probably take months more of iteration and fixes. Currently the automated testing is taking a while because, let me tell you, not only is Oracle ADF shit, but so is WebLogic. Like, fuck, I'd be so much closer to being done if I was allowed to pick Python + HTMX or even Java + Thymeleaf. That's still better than a team spending a year on the migration and getting like 10% of the way there.

      Obviously there are no more details to share publicly, but the overall vibe is clear: as long as you can test any changes, you can iterate faster than without AI, and the code ends up more readable than what colleagues would often write. The problem is that people would previously squint at the suggestion of 100% test coverage, so most code is even written in a way that is straight up not testable (and often nothing is decoupled from the framework properly, and tests take way too long, in both time and resources).

  • KronisLV 6 hours ago
    That's pretty... brave? Not releasing it in parallel and spending a few months testing it against the old mainline version to surface issues BEFORE a potential merge?
    • ukblewis 5 hours ago
      Who knows what their release strategy will be. This is still only a canary release. Don’t put your cart before your horse.
  • TeriyakiBomb 16 hours ago
    I hope the Deno lot take the opportunity to capitalise on this
    • tuananh 13 hours ago
      This is their chance for sure but it seems they are scaling down, at least their main product Deno Deploy.

      Previously they had a presence in 31 regions, but now it's down to just 6.

      https://docs.deno.com/deploy/classic/regions/

    • veidr 13 hours ago
      By having Codex port Deno to Zig, you mean?
  • makotech221 8 hours ago
    first major company to really nuke their main product via AI psychosis?
    • ezekg 5 hours ago
    • supern0va 3 hours ago
      I for one think it's a fascinating experiment to see how well it goes. Though if it actually works and leads to bun getting better over the coming months, I suspect the arguments against it will just take on a different flavor.
      • Tesl 2 hours ago
        Of course they will, the goalposts will keep shifting because people don't want to admit that agents are now this capable.
  • WhyNotHugo 3 hours ago
    One of Bun's longstanding issues was that bootstrapping Bun required Bun, so distributions were unusable to ship it or anything that depended on it: https://github.com/oven-sh/bun/issues/6887

    Any ideas if this is now changing and Bun can be bootstrapped with "just" Rust?

  • linkregister 8 hours ago
  • lackoftactics 2 hours ago
    With a weird sadness I have to say: we are being targeted with a new kind of marketing. It doesn’t look like this was just a technical decision. If anyone was following what was going on on X, it was crazy, with the amount of content about it.

    I couldn’t believe before that all the fearmongering was marketing, but I am coming to the conclusion that it is. It’s hard to get any signal over noise in the attention economy. They know what they are doing, and it’s déjà vu of crypto, but now we are the targets, with rage baits, guerrilla marketing, buzz.

  • ivanjermakov 5 hours ago
    For those looking for an alternative no-compilation TypeScript runner, I'm quite satisfied with TSX: https://github.com/privatenumber/tsx

    Node.js itself is getting quite close to running TypeScript natively, but they don't support using ES imports of CJS packages and importing with no-extension qualifier.

  • sudb 8 hours ago
    If this means that segfaults become rarer with Bun I might consider using it in production again. As it stands, Bun has been great as an all-in-one TS/JS package manager, build system and test runner but unstable enough that I still want Node running in production backends.
    • sisve 7 hours ago
      Yes. That is the plan.

      See Jarred's comment [0].

      If this helps Bun, and Rust is a better language for developing Bun going forward with the help of Claude, then I think that is just fine.

      I thought Rust was making the codebase complex, so Zig won on speed and DX.

      But with LLMs and a large codebase, it seems like Rust gives fewer bugs and you can develop it faster and safer.

      https://news.ycombinator.com/reply?id=48133519&goto=threads%...

    • xigoi 7 hours ago
      Surely there are no bugs in the 1000000 lines of code that no one has reviewed…
  • pronik 8 hours ago
    By reading this thread I've learned that, apparently, you are not allowed to rewrite a large piece of software backed by a large test suite in another language within two weeks otherwise you are a witch and need to be burned on a stake. You are also not allowed to move from the PoC phase to lets-do-it phase within a couple of days without being called names. Why are we concerned with speed all of a sudden? Are we in the "people will literally die if a car moved faster than 25 mph" era of software engineering? Let them do whatever they want, they've shown the will to move on from wrong decisions, they will do it again if the Rust port fails to deliver and the whole industry gets to learn from it, whatever "it" might become.
    • solid_fuel 4 hours ago
      I can't ignore how much this sounds like Stockton Rush.

      > "Apparently if you build a submersible with carbon fiber you are a witch and need to be burned on a stake. But look we're making reliable trips down to the Titanic with no problems."

      Realistically, this is a forum of experienced engineers watching a company make some extremely questionable but very flashy engineering decisions. There's going to be a lot of people standing around here going "gee I dunno, that seems questionable".

      Personally, I think the rewrite will largely work - logically, direct translations from one language to another are pretty well within the realm of the few things LLMs should perform extremely well at. But I also think more information will come out showing this was much more bespoke than just prompting an agent to do the translation. This just feels too much like an ad for Anthropic, I think it's likely there was a lot more human involvement and planning than we are being told.

    • ethanrutherford 8 hours ago
      That you're only just "learning" that these things are true is a damning admission. And to fix your bad analogy, it's more like "hey maybe we shouldn't be allowing f1 street races through school zones".
      • shaewest 5 hours ago
        That analogy might work if this situation were 'reckless behaviour risking children's safety', but in this case it's much closer to 'we made a large, potentially risky change that you can choose to avoid until it's more mature'.
        • wiseowise 4 hours ago
          The analogy is just bad to begin with.

          It's more like "we've switched ingredients while actively denying that they'll be switched".

          • shaewest 1 hour ago
            They never denied they'd switch, just that they'd need solid improvements confirmed before they switched. Clearly internally they've decided they've seen the gains necessary to carry on with the switch
      • ukblewis 5 hours ago
        This is silly IMHO. They haven’t released a new official Bun version with this code yet; it is a canary release. Give them a chance to figure it out, try it out, and see how the limited number of production users of Bun as a runtime experience the move. If it succeeds, this will massively accelerate development and they will have much to teach us all about how to safely code 1M lines with AI and merge them in days. If it fails, we will know that AI isn’t ready for that yet.
    • wiseowise 4 hours ago
      > By reading this thread I've learned that, apparently, you are not allowed to rewrite a large piece of software backed by a large test suite in another language within two weeks otherwise you are a witch and need to be burned on a stake.

      You've just learned that you can't do random shit and not get called out? Were you born yesterday?

    • happytoexplain 8 hours ago
      The AI polarization is making me sick. Please don't let this style of comment become normalized on HN (and that includes equivalently tribalistic anti-AI comments).
    • zx0r23 5 hours ago
      Anyone running bun in production right now has to be sweating lol, this is a ridiculous change for a part of your software stack that really ought to be reliable.
    • tokioyoyo 4 hours ago
      Heavy implications on how the future will be formed if things go well with this port. It would prove a lot of people wrong if things go well 3 months down the road.
      • happytoexplain 4 hours ago
        Not really - three months is nowhere near long enough to demonstrate if a large piece of software has issues or not.
        • tokioyoyo 4 hours ago
          With the amount of applications running on Bun? I’d say enough.
          • ncruces 4 hours ago
            You think they'll ~all merrily move to the new version?
            • tokioyoyo 3 hours ago
              Doesn’t have to. It’s a big bet, with a huge payout for Anthropic.
    • andrepd 8 hours ago
      The top comment in the thread explains it pretty well, so please don't pretend it's anything else. The point is they went from "chillax, it's just an experiment" to "we'll switch languages via a 1M-line vibecoded patch" in two days. People that rely on this software are understandably fearful, since there is no way this change has been properly reviewed and tested. Although perhaps the mistake was relying on such software in the first place... And so are contributors, who have seen essentially the entire codebase replaced in a week.
      • shaewest 5 hours ago
        People relying on this software can absolutely choose to stay on current/recent versions until this becomes more mature. My assumption is that the current state allows for public testing, but anyone needing a stable version wouldn't be affected and can choose to not be affected by it.
      • 0x457 7 hours ago
        Why "no way"? You're also forgetting the extensive test suite?

        Merging it so quickly is only odd if you're planning on retaining the current community.

        It's not like it was merged and shipped to every single stable distro overnight. That's how things get tested.

  • dom96 2 hours ago
    I'm curious where this leaves Zig. Bun was the most prominent and biggest project using it. What's left?
    • peesem 2 hours ago
      TigerBeetle (https://github.com/tigerbeetle/tigerbeetle) and Ghostty (https://github.com/ghostty-org/ghostty) come to mind as decently popular projects.
    • allthetime 1 hour ago
      Zig is still a moving target with big fundamental changes being made to the language from version to version - nowhere near v1. When Rust was at this stage of its development you wouldn't have been able to name many projects either.
    • chillfox 1 hour ago
      I thought TigerBeetle was the biggest Zig project. Anyway, I am sure there are plenty of Zig projects out there.
    • sergiotapia 1 hour ago
      It leaves it in the same vibe realm as Nim. A terrific language but probably never hitting mainstream. You're familiar with Nim. ;)
    • mattrighetti 2 hours ago
      Ghostty
  • tkel 17 hours ago
    Turns out "it's just an experiment, you all are overreacting" was just a lie to dampen criticism.

    https://news.ycombinator.com/item?id=48019226

    • worble 16 hours ago
      Merging a complete rewrite in another language in 9 days seems insane to me. Maybe I'm just too cautious but with something like this I'd split off as a separate binary and get some heavy use customers involved as testers first to see if it causes any unforeseen problems before slowly expanding it out.

      I'd want to be pretty damn confident it won't cause any regressions before sunsetting the original codebase in favor of this one.

      • goyozi 16 hours ago
        I don’t think you’re too cautious. Big upgrades and rewrites are somewhat of a „work hobby” of mine and this seems waaay too fast. I don’t know how the Bun canary process works and I guess their test suite is better than typical projects, but still… I can’t imagine this working out well without testing it on a variety of big projects for a significant amount of time.

        There’s probably loads(?) of observable behaviors that people rely on, consciously or not. Even _if_ the new thing is 100% spec compliant, it might still be breaking or otherwise problematic for heavy users.

        That said, I’d love to be proven wrong. I use Bun from time to time on small stuff and I enjoy it, so I wish them well (:

      • progbits 16 hours ago
        > too cautious

        No, you are perfectly normal.

        The people who in one week decided to replace the whole codebase for a widely used tool with code no human has seen are the crazy ones.

      • borngraced 11 hours ago
        Testing in production xD
      • progx 9 hours ago
        9 days is the official story. Nobody knows how long they really worked on it.
    • preommr 14 hours ago
      Well I've got egg on my face.

      I am in that post, defending bun.

      I thought for sure the peanut gallery was overreacting. Especially when the concern was absurd - because who would do such an insane thing? Like, at the time I legitimately thought 'no way a project switches over in a few months'. Even as an absurd hypothetical, I couldn't imagine the prospect of it being done in a matter of days.

      Feeling really confused right now.

      • rglover 31 minutes ago
        > Well I've got egg on my face.

        Not at all. Supporting a methodical conversion to Rust seems reasonable. How could you have predicted they'd shotgun it?

      • ulbu 13 hours ago
        that’s the advertisement part of this ordeal you’re experiencing.
    • franciscop 16 hours ago
      It seems it was an experiment at that moment, and that it went well? I do hope they release it under 2.x though; I can imagine a 1M LoC change breaking in so many ways, especially if what xiphias says is true:

      https://news.ycombinator.com/item?id=48132902

      • camel-cdr 14 hours ago
        If I got magically handed the perfect rust rewrite for a project of this magnitude, it would take way longer than 9 days to merge, because I would need to make sure it's actually good.
        • overfeed 6 hours ago
          > it would take way longer than 9 days to merge, because I would need to make sure it's actually good

          What if another (unstated) goal of your rewrite was to provide marketing material for how advanced your acquirer's AI tools are? The faster the turnaround, the better they (and therefore you) look.

      • latexr 16 hours ago
        > It seems it was an experiment at that moment, and that it went well?

        There’s no way they can know that for sure. A change of this magnitude cannot go from experiment to success in such a short time frame. Even if all the code were 100% correct, you can’t call it a success until it’s battle tested in real world scenarios for a while, and that is impossible without time. Same way you can’t cook properly by throwing food into a volcano. It’s not just about the temperature.

        Either the “experiment” claim was a lie or they are being irresponsible.

    • pier25 12 hours ago
      Maybe Anthropic decided to push this because of all the attention the experiment got.

      If it works out it’ll be a good case study for marketing.

    • keyle 16 hours ago
      I'm no believer... 9 days later... Lessssssgoooooooo wooooooooo <sunglasses and rave>
    • randypewick 16 hours ago
      The experiment might have turned out well, or the author might have spent enough time to bring it to a place they were comfortable with.

      Frustration moves mountains, I don't think this rewrite was done lightly.

      • ajyoon 9 hours ago
        The rewrite was obviously done lightly.
    • veidr 11 hours ago
      You have no idea if it was a lie or not. I routinely have my clanker fleet spend a couple days toiling on some crap that I assume I will throw away, but it turns out pretty awesome, so I keep it.

      It's entirely plausible that when that comment was posted, he doubted it would work well enough to keep.

      (Sensible default for LLM code, btw. But sometimes it works great.)

    • impulser_ 17 hours ago
      "We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely."
      • jen20 16 hours ago
        People conflate “high chance of X” with “X will happen” all the time. See elections, for example.
        • ajyoon 9 hours ago
          The phrasing strongly implies that they are taking the migration seriously and carefully. Merging straight to canary after 9 days is insane.
        • 0x457 4 hours ago
          I have a friend who gets super mad when he fails ">80% chance of success" throws.

          That isn't the case here, though. Even if he said there was a high chance of RIIR, 9 days is still an insanely short time for such a rewrite if you're planning to have some sort of community around the project.

        • wiseowise 3 hours ago
          We all have eyes, it doesn't take a genius to spot a lie.
    • tclancy 15 hours ago
      > was just a lie to damp criticism.

      Citation needed. Couldn't it just as easily have been one person being as suspicious of the task as everyone else seemed to be?

    • potsandpans 10 hours ago
      Surely the mods will be here to remind you that it's against the rules to direct personal attacks towards other community members, to fulminate and brigade.

      Or do those protections only cover whiny open source developers upset about a chat bot writing blogs?

    • mapcars 16 hours ago
      Well it was 9 days ago, at the time they were not confident, but maybe the results were insanely good.
      • rk06 13 hours ago
        no matter how good the results are, this kind of rewrite deserves an experimental build to be battle tested by bleeding edge users.

        It takes a lot of rigorous testing, automated, manual, and by the community, before such changes are considered permanent.

        One does not simply YOLO a full language rewrite without user feedback. It is insane.

        • mapcars 5 hours ago
          >One does not simply YOLO a full langugae rewrite without user feedback. it is insane.

          The whole ai thing today is pretty insane, I would say. Why not ride with it, especially if your company is one of the biggest leaders?

        • Tadpole9181 11 hours ago
          You should really read TFA because... that's exactly what they're doing?

          The Zig version has not been removed and this only exists for canary builds. No Rust binaries are being distributed as stable.

          • 12_throw_away 7 hours ago
            But the official canary/bleeding edge/nightly/whatever version is now the LLM rewrite, yes?
          • rk06 11 hours ago
            The page is not loading for me.
    • skeledrew 14 hours ago
      Does anything in that comment say there was a 0% chance the experiment would be merged into main? I see "very high chance all this code gets thrown out completely", which just means the low chance of it being kept has occurred.
      • mapmeld 13 hours ago
        It doesn't say what will happen, but isn't their comment responding to people who don't like the look of this rewrite, and telling them basically that they don't have to think/worry about it? I definitely read it as 'not yet' and not 'another week or so'.
  • ChocolateGod 3 hours ago
    Hopefully this means Bun can now support things that were limitations of the Zig libraries, like being able to upgrade standard TCP sockets to TLS without closing them.
  • nDRDY 16 hours ago
    Why didn't they ask Claude to remove all of the `unsafe` at the same time??
    • dolmen 13 hours ago
      "at the same time" is a recipe for failure with coding agents.
      • veidr 12 hours ago
        It's also a recipe for failure for ports in general. Same goes for the "not idiomatic Rust" comments above — that would be nonsense.

        You want to port it as faithfully as possible to the original, porting it bug-for-bug, quirk-for-quirk. Then, over time, after the port has been proven to be as identical to the original as possible, you can gradually fix those kinds of internals.

        That's why TypeScript's tsgo native port is so good.

        • nDRDY 12 hours ago
          tsgo will inherit many benefits from go, even if it is never fully "idiomatic".

          This is in direct contrast to this port, which requires significant re-architecting (or made "idiomatic", if you wish) in rust to achieve any of the benefits of the language. You can't re-architect one step at a time.

          • veidr 12 hours ago
            I don't think you want to achieve any benefits of Rust in the initial port. Because at this scale you will definitely introduce new, and probably subtle, bugs that are not present in the Zig version.

            You just want it to be the same, to the maximum extent the language allows. E.g. 1000+ unsafe is the right move, for now.

            Reaping the benefits of Rust is for _future_ development.

            • nDRDY 11 hours ago
              That's my point - I don't see any hope of removing the 10,000+ unsafe calls, especially not one step at a time.

              As such, this is a publicity stunt.

              • veidr 10 hours ago
                You could, but maybe they never will. I have no idea.

                But the point is, in 2027, 2028... your new code doesn't have to suffer from these frankly 1970s issues.

                You could also gradually fix the internals — if you wanted to

                • fdsajfkldsfklds 8 hours ago
                  The irony being that machine translation between programming languages also dates from the 1970s.
      • nDRDY 13 hours ago
        Right, so what we have here is a very expensive regex.
        • surajrmal 12 hours ago
          It sounds like some bugs were fixed in order to make it compile.
  • Havoc 14 hours ago
    How does the no async work? Would have thought Bun would need that
    • valzam 6 hours ago
      Async presumably happens in the JS runtime that bun calls into. Just need 1 thread to host that
    • lukaslalinsky 8 hours ago
      People were doing async I/O before coroutines existed. They are using callbacks and their own networking.
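
      In Rust, that callback style needs no `async` at all. A toy dispatcher sketch (hypothetical; not Bun's actual event loop, and the fd numbers are made up):

```rust
use std::cell::RefCell;
use std::collections::HashMap;
use std::rc::Rc;

// Toy callback-driven dispatcher: handlers are registered per fd and
// invoked when the loop reports readiness -- no async/await involved.
struct EventLoop {
    handlers: HashMap<u64, Box<dyn FnMut(&[u8])>>,
}

impl EventLoop {
    fn new() -> Self {
        EventLoop { handlers: HashMap::new() }
    }

    // Register a callback for a (hypothetical) descriptor id.
    fn on_readable(&mut self, fd: u64, cb: impl FnMut(&[u8]) + 'static) {
        self.handlers.insert(fd, Box::new(cb));
    }

    // A real loop would call this after poll/epoll/kqueue marks `fd` ready.
    fn dispatch(&mut self, fd: u64, data: &[u8]) {
        if let Some(cb) = self.handlers.get_mut(&fd) {
            cb(data);
        }
    }
}

fn main() {
    let mut el = EventLoop::new();
    let received = Rc::new(RefCell::new(Vec::new()));
    let sink = Rc::clone(&received);
    el.on_readable(3, move |data| sink.borrow_mut().extend_from_slice(data));
    el.dispatch(3, b"hello");
    assert_eq!(*received.borrow(), b"hello".to_vec());
}
```
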
  • ninjahawk1 16 hours ago
    “+1,000,000” changes in a single commit is insane.
    • ahepp 16 hours ago
      The really interesting thing to do would be to ask the agents to submit the diff as a coherent patchset...
    • dluan 16 hours ago
      > "The codebase is otherwise largely the same. The same architecture, the same data structures."
    • paulddraper 16 hours ago
      And 6700 commits.
      • eqvinox 16 hours ago
        No wonder GitHub is down

        /s

        • yashau 13 hours ago
          That OpenClaw guy seems to make 6000 commits every day or something.
        • paulddraper 12 hours ago
          No /s needed
  • sesm 7 hours ago
    I wonder, did they consider an approach of vibe-coding a deterministic converter and then running it? This should be much more token efficient.
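
    A purely mechanical pass could look like this toy sketch (string-level only, for illustration; a real converter would rewrite the AST, and these three rules are hypothetical):

```rust
// Hypothetical rule table: mechanically map a few Zig surface forms to
// Rust equivalents. Rule order matters: the slice type is rewritten
// first, before the bare `const ` rule would mangle it.
fn convert_line(zig: &str) -> String {
    zig.replace("[]const u8", "&[u8]")
        .replace("const ", "let ")
        .replace("var ", "let mut ")
}

fn main() {
    assert_eq!(convert_line("const n = 42;"), "let n = 42;");
    assert_eq!(
        convert_line("var buf: []const u8 = data;"),
        "let mut buf: &[u8] = data;"
    );
}
```

    The appeal is that a pass like this costs zero tokens per line once written, leaving the LLM only the cases the rules can't express.
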
  • janice1999 15 hours ago
    Has he estimated the token cost for this (if he had to pay that is)? I'm curious how much this would cost a paying customer.
    • dolmen 13 hours ago
      Bun is owned by Anthropic.

      This is just marketing budget.

      • tuananh 13 hours ago
        The acquisition money is coming from marketing budget :D
    • phplovesong 13 hours ago
      Probably in the six figures.
      • zipy124 4 hours ago
        Depending on the model I could easily see it approaching 7 figures since Mythos security scans have been 6 figures already and don't require nearly as much output.
  • qustrolabe 5 hours ago
    It's cool how you can just do this now in 2026. I hope it gets cheaper and easier to do with other big projects written in outdated or just not good enough languages
  • atonse 8 hours ago
    I have full faith: it's the same really smart people who built Bun (Jarred and team) who spearheaded this and are running it. So I have no reason to believe that this was done carelessly.

    That said, I'm still shocked and amazed that something this big is possible these days. But as we've seen multiple times now, one of the most important things your codebase can have is a solid test suite.

    I will continue to use bun, because at the end of the day, it isn't just the technology, but the talent/people behind the technology that ensures that it will be solid.

    And since that hasn't changed, I will still trust bun and its direction.

    Also, Bun is mostly glue code and sort of "user space" libraries (my words), as Jarred has said on X; most of the underlying runtimes like JavaScriptCore, etc. weren't rewritten.

    So this isn't like 100% of what we think of as Bun was rewritten. It's more like the scaffolding and harness.

    • ifwinterco 8 hours ago
      Just because it's possible, doesn't mean that it's sensible
    • bigstrat2003 7 hours ago
      > So I have no reason to believe that this was done carelessly.

      Writing software with an LLM is doing it carelessly.

    • zx0r23 5 hours ago
      Doesn't doing this in the matter of a week or so, by definition mean it was done carelessly?

      How could it be possible to test such a complicated piece of software, and review such a large amount of code in such a small timeframe? Spoiler, it's not. They're merging slop.

    • moomoo11 8 hours ago
      Yeah, but it also made some tests pass by changing the tests. I'm not super familiar so I'll dig more on the weekend, but it seems sus pending more review. I've had AI do similar things that I caught in manual review. Cheating the test is bad.
      • gmueckl 7 hours ago
        It is well known that agents can cheat or go off on tangents and not recover. Just recently one deleted a bunch of code files I didn't ask it to touch. The code wasn't even used anywhere.
      • atonse 7 hours ago
        That's why they've merged it into canary so they can continue working on it.
      • tuo-lei 7 hours ago
        [dead]
  • ivanjermakov 5 hours ago
    I hope it's obvious why I'm removing Bun dependency in all my projects. Would be great to have a non-affiliated zig-bun fork that focuses on, well, runtime.
  • simpsond 6 hours ago
    This is a wild experiment! I do think the incentives are heavily weighted to Anthropic for this to go well. I have mixed feelings about how it will go, but it will result in an important outcome…
  • electronsoup 6 hours ago
    So how many of their employees are now familiar with the codebase? zero?
  • tvidas 12 hours ago
    I don't really understand the point of this. Is it Anthropic showing off how well their LLMs work? Was it too difficult to find Zig devs, so Bun swapped to Rust? Did Jarred read one too many memes about "rewriting in Rust" and take them at face value?

    I would imagine that there will be bugs migrating all at once, performance will probably be close to the same, and the maintainers will need to context shift from Zig to Rust. A very confusing decision for sure.

    • allthetime 57 minutes ago
      Claude is significantly better at Rust than Zig. Zig is changing all the time. If you check my profile comments I did a quick experiment recently to demonstrate. Essentially, Claude could generate a basic working TCP echo server in a few seconds. For Zig, whether asking it to do it with just Zig, or with specific versions (0.15 and 0.16, because some fundamental language changes necessitate different implementations), it failed to produce working code in all three cases and also took orders of magnitude longer to generate the code.

      Aside from the big marketing play, Claude not being able to easily generate Zig code was probably a big motivator - it doesn't make Anthropic look good and it doesn't fit into how they're doing things.

      Also, you're assuming that actual traditional maintainers even exist now. Likely it's a smaller team of people running Mythos agents with an unlimited budget and no real need to fully understand the code.

    • nDRDY 11 hours ago
      I suspect one part of the puzzle is that Bun used its own fork of Zig, which had diverged significantly in design and direction from mainline Zig.
    • J_Shelby_J 8 hours ago
      The point of it is to hype anthropics IPO.
    • dcchambers 12 hours ago
      Probably some combination of: Anthropic is heavily invested in the Rust ecosystem and they want their core tools to be built on Rust. More Rust developers. More Rust training data so LLMs write better Rust code than Zig code. Advertisement for Claude Code doing major work on a high profile open source project.
  • elwesties 7 hours ago
    This is so awesome! What a time to be alive that something like this is possible.
  • feverzsj 16 hours ago
    How are they going to do refactoring, bug fixes, or other maintenance on generated code? Ask the LLM?
  • j-pb 12 hours ago
    On one hand I kinda feel validated for having jumped ship on Zig 3+ years ago[1] and moving everything to Rust[2], with the language simply being too unstable and unsafe in my eyes, despite my love for comptime and people arguing that Bun and TigerBeetle were proof that it wasn't the language's fault.

    But I also feel bad for the Zig project losing one of its flagship projects, because while I find the project ultimately anachronistic, I know what it's like to pour your sweat, heart and soul into something, and having it replaced within a week is a sobering experience even from afar.

    A couple years ago this would have been unthinkable because of how slow legacy codebases and rewrites are.

    I wonder if Tigerbeetle will also have problems arguing for their solution now that the other project they can point to for customer assurance is gone. And I wonder if they will follow suit eventually simply due to marketing pressure (after having been bitten by the Zig compiler I was surprised that they were putting their super duper high reliability database on top of it at all, but with another big player using it there was at least some peace of mind for their enterprise customers).

    1: https://github.com/triblespace/tribles-zig

    2: https://github.com/triblespace/triblespace-rs

    • jorangreef 5 hours ago
      > I wonder if Tigerbeetle will also have problems arguing for their solution now that the other project they can point to for customer assurance is gone.

      In general, we never like to appeal to popularity (a logical fallacy), but why would you assume here that we would point to Bun specifically (or any project for that matter) [1] as an example of Zig’s quality?

      We prefer to judge Zig’s quality on its own intrinsic merit:

      For example, we subject the language through TigerBeetle to inordinate amounts of fuzzing, perhaps more than any other language (you could say Zig is lucky to have TB’s test suite aimed against it!).

      Literally 1,024 dedicated CPU cores, 24/7.

      Zig holds up remarkably well.

      We also recently pledged $512K to the ZSF, together with Synadia.

      These are the kinds of things we prefer to point to. Not hype, but real end-to-end systems engineering, and long term financial support, regardless of the language we choose to use.

      [1] I picked Zig back in July 2020. At the time, the largest project was River, but already Zig was a phenomenal choice, and the years have only shown that Zig was probably one of the best design decisions in the development of TigerBeetle. It turned out better than I imagined.

      • j-pb 4 hours ago
        Correct me if I'm wrong, but the three largest Zig projects (by far, with a huge gap between them and the rest of the pack) are Bun, Ghostty, and TigerBeetle.

        A language so niche that it only has 3 major projects is a liability. Now it has 2 major projects, one of which is yours. Even I as a weird language connoisseur would raise an eyebrow at that.

        After switching from Zig to Rust, I felt like the language was helping me improve the correctness of my project, to argue that the fuzzing of your project helps improve the correctness of the language feels backwards and adds to my suspicions.

        We both know that fuzzing is great, but whether you fuzz with 1,000 cores or 1,000,000 cores, at an exponentially growing state space it doesn't make (that much of a) difference (I know that you guys are not doing naive fuzzing, which is extremely cool, but the shape of the problem is still O-of-evil-shaped). Most things you can find with fuzzing are shallow-ish, and if you want to go deeper you need formal verification (for which a strong type system is a good first approximation, and I'm not aware of something like Kani for Zig).

        I like TigerBeetle and I still wish you guys all the success in the world, but I can't help and wonder where you could be by now if your language was lifting you up, instead of you having to lift up your language.

    • jrpelkonen 7 hours ago
      While I don’t have personal experience with either project, I feel it is safe to say that Bun and TigerBeetle are not comparable projects: TigerBeetle has a strong focus on testing and correctness, and Bun maybe not so much. IIRC, TB did well in the Jepsen test and had one segfault in a client library. Bun has had quite a few memory safety issues, in fact, the stated motivation for the Rust move is to eliminate those going forward. We shall see how that pans out.
    • nDRDY 11 hours ago
      I doubt the Zig maintainers will miss the giant PRs from Bun!
      • j-pb 11 hours ago
        I'm pretty sure they'll miss the full developer salary that Oven used to sponsor them, which they no longer do. I'd wager one doesn't do a rewrite like that, if you are in great personal standing with the language foundation.

        That same "just don't use it" attitude was what drove me away from Zig btw. I would have been fine in restricting myself to a somewhat stable subset, e.g. if, loop + function calls, but they didn't want to provide any tiered stability guarantees for the language.

        Opinionated is great, no local minima is great, but you have to accept that if you don't want to engage with the needs of your (professional) community then what you do is a hobby project. A very cool hobby project beloved by thousands, but a hobby project.

        • AlienRobot 8 hours ago
          I think if you use a programming language that is clearly version zero you can't complain that it's not stable...
          • j-pb 8 hours ago
            I'm not expecting the whole language to be stable, but I expect certain parts of it to be more stable than others. E.g. control flow vs. async.

            I'm not saying that they can't work that way; more power to them. But then having the expectation of anybody using it in a professional setting is also unrealistic. You can't have your cake and eat it too: either it's your personal project and you are fine with nobody using it but you, or you evangelise for people to use it, but then you also need to make at least some effort to not break their stuff on a whim, or to accept their change requests when they put in the work, as was apparently the case for Bun.

            Tbh I don't see Zig hitting 1.0 with a meaningful user base; it's probably going to mostly get eaten by Rust or some other language and will continue to exist as a niche thing, kinda like D.

            Having one of the flagship/showcase codebases rewritten to Rust in a week feels like a death knell. Either the community or the language is too unworkable if someone that heavily invested into it jumps ship, and I'm afraid it's kinda both.

            • AlienRobot 11 minutes ago
              Having tried both, I think Zig is a replacement for C, while Rust is a replacement for C++.

              One thing Zig has that lots of "niche" languages don't is that you can include C headers directly. This means if you want to make a game in SDL, for example, you don't need to wait until someone ports SDL to your new language. You can just include SDL.h directly and start using it. D also has this feature, by the way, but Rust requires you to generate the bindings.

              Even if people move from Zig to Rust for some things or vice-versa, the strengths of Zig remain there.

  • lucasloisp 16 hours ago
    The follow-up PR removing the zig source files being auto-tagged by bun's own CI as "ai slop" is so funny

    https://github.com/oven-sh/bun/pull/30680

  • HEX4AGON 15 hours ago
    I'm curious how many dollars in LLM usage this rewrite cost
  • padjo 11 hours ago
    Will be interesting to see how this pans out. Some people will see minor issues as proof that AI is terrible, but honestly if this gets released and is relatively uneventful it just highlights how the art of building software has changed completely in the last few years.
  • q3k 16 hours ago

      $ grep --exclude-dir=.git -r 'unsafe {' | wc -l
      10465
    
    Nice.
    • K0nserv 15 hours ago
      It's not that weird to end up with this when translating C/Zig/C++ to Rust. A first pass can use unsafe and then when the code is in Rust you can work on reducing the unsafe.

      Trying to eliminate all unsafe as part of the rewrite, whether done by human or LLM, would be making too big of a change in the process of rewriting.
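
      For example, a first pass might keep the original pointer arithmetic verbatim behind `unsafe`, and a later pass then confines it to one safe wrapper (a hypothetical sketch, not code from the Bun port):

```rust
// First pass: literal port of the original pointer arithmetic.
// SAFETY contract: `ptr` must be valid for reads of `len` bytes.
unsafe fn sum_raw(ptr: *const u8, len: usize) -> u32 {
    let mut total = 0u32;
    for i in 0..len {
        total += unsafe { *ptr.add(i) } as u32;
    }
    total
}

// Second pass: one safe wrapper confines the unsafe surface, so call
// sites can migrate to slices one at a time.
fn sum(bytes: &[u8]) -> u32 {
    // SAFETY: a slice is always valid for reads of its own length.
    unsafe { sum_raw(bytes.as_ptr(), bytes.len()) }
}

fn main() {
    assert_eq!(sum(&[1, 2, 3, 4]), 10);
}
```
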

      • q3k 15 hours ago
        > would be making too big of a change in the process of rewriting

        God forbid the already unreviewable -710kloc/+1mloc change get any bigger!

        • K0nserv 15 hours ago
          Sure, but that's kind of orthogonal. Imagine doing this by hand: I still think going like-for-like with the Zig, even if that means a lot of unsafe, is a good approach.

          But I suppose if you are already using LLMs it's more reasonable to try and go from Zig straight to Rust with no/minimal unsafe.

    • janice1999 15 hours ago
      The benefit of using Rust is that you know exactly where the unsafe code is so you can handle it explicitly and deliberately to avoid issues by imposing carefully crafted constraints... oh.
  • mghackerlady 4 hours ago
    This is kinda sad, I liked having bun as a good example of software in Zig
  • baq 5 hours ago
    I low key hope a codex shop, perhaps OpenAI themselves, do this too, so we can compare results.
  • gib444 7 hours ago
    Maybe a good advert for Claude; but a terrible, terrible advert for the stewardship and governance of the Bun project.
    • zx0r23 5 hours ago
      This is the most accurate take lol. Claude's done impressive work, but I would absolutely never trust this project in production now.
  • nesarkvechnep 7 hours ago
    The average quality of the Zig projects went up.
  • phplovesong 13 hours ago
    RIP Bun.

    I'm feeling like I won the lottery that I picked Deno over Bun a few years ago for a bigger project.

  • serial_dev 7 hours ago
    I wonder if the whole acquisition was done so that they have guinea pigs that can’t say no…

    or if I want to be cynical… so that they have a big enough project where they can force gigantic rewrites without considering the outcome from the project’s point of view, all so that they can fuel their marketing strategy.

    To be honest, kind of obvious looking back.

  • 3asjUas 2 hours ago
    The result is so horrible that Anthropic will quietly move to Node in 6 months. Now they got their headlines and in 6 months everyone will have forgotten about it.
  • mapcars 16 hours ago
    >No async rust.

    I wonder why that deserves an explicit statement. Is there anything wrong with async Rust?

  • PudgePacket 17 hours ago
    +1,009,257 -4,024

    wild

    • andrepd 16 hours ago
      Least unstable js project
  • bharxhav 16 hours ago
    This canary will never leave the mine. (unless Anthropic opens their wallet again)
  • deadbabe 1 hour ago
    To me the interesting thing to watch about this project is that if it fails and Bun becomes a piece of shit even with all the resources at their disposal, it means LLMs are probably not going to be the revolutionary tech everyone has been hyping them up to be. They're useful, sure, but software engineers aren't going away. How could anyone interpret this any other way?
  • tuananh 13 hours ago
    For those daring to put this in production: you're crazy!
  • poops 11 hours ago
    1 million additions. 4k deletions. 0 approvals.
  • Herbstluft 1 hour ago
    I mean aside from the somewhat...dishonest statements from the people involved, giving false explanations is one thing, but calling people who smelled this "overreacting" gives this a weird taste.

    I am neutral on such a rewrite itself, there are pros and cons to the whole "rewrite in Rust" topic. People are making decent arguments. But the way the initiator here reacted makes it seem like the Bun team itself thinks they are doing something weird here...

    Guess reviewing any code isn't exactly their thing either anymore? And I guess adjusting the tests themselves is certainly one way to make things pass.

    Ultimately this just seems like it was done specifically to make Bun more "ai friendly". Whether it turns out good or not that appears to be the motivation behind it.

  • sutib 16 hours ago
    "And Icarus laughed as he fell, for he knew to fall means to once have soared"
  • classicposter 15 hours ago
    It's interesting that the developer who spearheaded the hype of Zig abandoned the engineering without addressing the segfault. They could have also taken the approach of gradually porting from Zig to Rust via FFI. Yes, this is a slop show by the AI lab.
  • youio 9 hours ago
    9 days to review +1 million LOC of Rust is enough? Wow..
  • rglover 5 hours ago
    Well this is uncomfy. What... a week ago this was just framed as an experiment, and now it's being rammed through?

    Even if it works/is correct/etc, this is shockingly careless.

    If I'm going to be using your thing to build on top of, I sure as hell don't want to see you 180'ing a week after you just said you weren't going to do exactly what you just did.

    Hard pass, purely on principle.

  • tabs_or_spaces 7 hours ago
    Why would you replace an existing codebase like this instead of forking the repo and making the changes there?
    • tredre3 3 hours ago
      They did fork it initially to experiment, then decided this experiment would go forward and thus naturally belong in the main repo.

      Git has this branch concept. It's being used correctly here, IMHO.

  • ptrl600 15 hours ago
    Hey, it forgot to change the README!
  • hacker_88 5 hours ago
    Time to fork it for zig
  • 4b11b4 12 hours ago
    I can't imagine doing this to my own code base lol. I suppose only after Anthropic gave me a lot of money would I say, hey, fuck it, let's find out
  • sionisrecur 9 hours ago
    I wonder if projects like Ladybird will try this approach now. They've been trying to move to Rust (after trying Swift first) for a while.
  • keeganpoppen 6 hours ago
    i find it hilarious how desperate people are to cope that this can’t possibly work, must be horrible, etc. for all i know, it is. but let’s just see how well it works, rather than “no true scotsman” grouse about it. it is so sad. it reeks of “doth protest too much” energy. if it were so obvious that ai was insufficient to do the work, then i don’t think you’d have to circle the wagons about it. you could just confidently watch the market turn on the product and know the reason why. and all that would prove is just how special you all are that ai cannot replicate your genius. the reality is that foundation model makers have been dogfooding their own vibes for multiple years now, and it clearly is good enough for _them_. but yeah, i’m sure that’s just a total fluke and they are all idiots. /eyeroll
  • jauntywundrkind 8 hours ago
    What does this mean for bun add-ons like opencode's opentui? Did FFI also somehow get ported or will that have to be updated? https://github.com/anomalyco/opentui
    • 0x457 4 hours ago
      First, why are you calling it an "add-on"? Second, it's done via the same C ABI.
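      A hedged sketch of why that holds: the FFI boundary is an exported C symbol plus the C calling convention, so a host-runtime rewrite from Zig to Rust doesn't change the contract an add-on compiles against. All names here (`render_cell`, the packing scheme) are made up for illustration, not taken from opentui.

      ```rust
      // Hypothetical add-on side: a function exported with the C ABI.
      // A runtime written in Zig or in Rust loads this same symbol
      // (e.g. via dlopen) and calls it through the same convention,
      // so the host language rewrite is invisible at this boundary.
      #[no_mangle]
      pub extern "C" fn render_cell(codepoint: u32, bold: bool) -> u32 {
          // Toy "render": pack the codepoint with a style bit.
          codepoint | if bold { 0x8000_0000 } else { 0 }
      }

      fn main() {
          // Calling directly here stands in for the runtime's FFI call.
          let packed = render_cell(0x41, true);
          assert_eq!(packed & 0x7FFF_FFFF, 0x41);
      }
      ```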
      • jauntywundrkind 2 hours ago
        Node's been calling native code distributed in a npm package "add-ons" for a decade and a half.

        Fair call on the same C ABI. Adapting to node 26.1.0's new FFI is happening in https://github.com/anomalyco/opentui/pull/104 . There are also some new FFI adapters opentui is adding there, and they're adding a worker.

        So there is some adaptation. That was sort of the interesting, useful, actual look I thought might be informative, whereas I feel like you were mostly just trying to be curt & maintain a status quo of keeping us all uninformed/unknowing. Let's try actually providing useful steps forward when we post, ok?

  • wateralien 16 hours ago
    Deno's approach from the beginning seems to have proven out.
  • minikomi 16 hours ago
    Now translate it into zig!
  • suck-my-spez 5 hours ago
    What a disaster
  • pier25 12 hours ago
    This will burn the little reputation and trust Bun has been able to achieve in the past couple of years.

    I guess this is what happens when you only have to respond to your corporate overlords.

    I will migrate my Bun projects in production to something else.

  • jwpapi 16 hours ago
    This will go down in history as the biggest mistake of software engineering of all time.

    Bun is the runtime of Claude Code, which is the core product of a trillion dollar company, which now sits on a vibe-coded app of which not a single person in the world has a proper mental model.

    • ageitgey 16 hours ago
      I don't know, there's been some pretty bad software mistakes, possibly bigger than a PR to convert an app to Rust:

      https://en.wikipedia.org/wiki/Therac-25

    • applfanboysbgon 16 hours ago
      Claude Code itself is purely vibecoded, both CC and Bun leads are saying that humans are not writing code at Anthropic anymore. It is amazing how much money they intend to squander, because it's all funny money to them, investors just give it to them hand over fist for them to burn. Developing wrappers around the model isn't even the hard part and yet they're going to burn themselves to the ground getting high on their own supply.
      • NitpickLawyer 16 hours ago
        > Claude Code itself is purely vibecoded [...] money they intend to squander [...] going to burn themselves to the ground getting high on their own supply.

        This really really really isn't the burn you think it is. Going from 0 to $2B+ in revenue from a "purely vibecoded" thing is what they've said they're doing, and what they've actually done. As in, already done. It's not going back, no matter how much "nuh uh" people write. They've already shown this can be done.

        People will continue to think that this is some sort of a gotcha. But it's actually precisely what they've done: they showed that dogfooding works. If this works, why not x y z?

        • applfanboysbgon 15 hours ago
          2B+ in revenue on hundreds of billions in investments and future commitments is completely worthless. Anybody can turn $100b into $2b, that's not a fucking accomplishment. And to the extent that something is driving any revenue, it is the model, not the TUI. Any success Claude is having is despite the godawful TUI, not because of it.
          • NitpickLawyer 15 hours ago
            claude.ai (their chatgpt equivalent) was nowhere before CC came about. CC was coded in a few weeks by people, then a few months by people + CC, then mostly CC taking the wheel. It is without a doubt the main reason why they're successful. It is also the main reason why their coding models are as good as they are. They've incorporated the early data into their training recipes, and evolved model + harness together.
          • robryan 15 hours ago
            They appear to be lining up a funding round at a $900 billion dollar valuation. Or to be more conservative they already raised at $380 billion. A long way from worthless.
    • nicce 16 hours ago
      Maybe this is the best marketing trick for Claude Code ever. Maybe there was pressure from Anthropic to do this and prove the value. Even partial success is enough to prove the value, justify the usage, and deepen AI dependency even further.
      • tarruda 16 hours ago
        And as long as Bun doesn't break Claude Code, which only uses a subset of its APIs, this might just pay off.
        • bflesch 8 hours ago
          Running the Rust version in their prod for two weeks should be long enough to catch the biggest crashes and fix them. It'll be up to bug bounty hunters to find the big one that crashes all their app servers at once.
      • ares623 15 hours ago
        It only needs to survive long enough for the IPO
    • mapcars 16 hours ago
      On the other hand they might be super confident in the results, and if it goes well they might use it as an example of how good Claude is
    • keyle 16 hours ago
      Won't touch it with a ten foot pole.
    • pixel_popping 16 hours ago
      Well, realistically, humans gave us software that is full of security holes (and bugs); which one have you seen that a human perfected the first time around? To be fair, give AI some time as well.
    • IshKebab 16 hours ago
      My initial reaction was that this is pure insanity but in fairness this is a fairly 1:1 port of existing code, so the developer's mental model of it should still match fairly well.

      For instance look at this Zig function: https://github.com/oven-sh/bun/blob/ed1a70f81708d7d137de8de0...

      Versus this Rust version: https://github.com/oven-sh/bun/blob/ed1a70f81708d7d137de8de0...

      I did pick that at random but it does look like the best case. I skimmed through a lot of the Rust code and there's a surprisingly small amount of `unsafe`.

      Still pretty insane to merge this in such a short time with so little testing, but I can easily think of bigger software engineering mistakes. Hell it's not like Bun even needs to be commercially successful any more.

      • jwpapi 16 hours ago
        It’s still 400k more lines
        • IshKebab 13 hours ago
          Dunno where you got that number from but it's half that. Tokei says:

            ===============================================================================
             Language            Files        Lines         Code     Comments       Blanks
            ===============================================================================
             Zig                  1298       711112       577946        57772        75394
             Rust                 1443       931232       737485       114373        79374
          
          So it's 28% more lines of code (not comments/blanks).
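          Checking the arithmetic against the tokei table above (the quoted figure uses the "Code" column only):

          ```rust
          // Recomputing the growth percentages from the tokei counts quoted above.
          fn pct_increase(before: f64, after: f64) -> f64 {
              (after / before - 1.0) * 100.0
          }

          fn main() {
              // "Code" column: 577,946 (Zig) -> 737,485 (Rust)
              println!("code lines: +{:.1}%", pct_increase(577_946.0, 737_485.0)); // ≈ 27.6%
              // "Lines" column (incl. comments/blanks): 711,112 -> 931,232
              println!("total lines: +{:.1}%", pct_increase(711_112.0, 931_232.0)); // ≈ 31.0%
          }
          ```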
          • travisgriggs 2 hours ago
            Rust is mostly ~20% bigger. Except comments. Where they basically doubled... what's with that?
    • dheatov 16 hours ago
      I for one am REALLY GLAD to see it consumes itself.
      • jwpapi 16 hours ago
        How life feels not using bun
  • scuff3d 6 hours ago
    Congratulations to everyone who uses Bun. You're now working as alpha testers for Anthropic... for free.

    Anyone using Bun should consider migrating away immediately. Not because of the LLM angle, but because of how insanely irresponsible this is.

    • lioeters 4 hours ago
      I reviewed the million lines of code added in a week, and I'm horrified. Not running that thing on my machine.
  • nDRDY 12 hours ago
    Giant slop-filled PR (that will power future slop-generation) has caused slop-coded Github to stop loading properly.

    The Anti-Singularity is approaching ever quicker!

    • parliament32 12 hours ago
      It's okay, at this rate Anthropic will be the only ones left using Bun.

      This is the Extinguish phase of the process, right?

  • kracket 8 hours ago
    The bun is down the drain.
  • veidr 12 hours ago
    We have hundreds of projects that run on Bun. (Some are Bun-specific for whatever reason, but most are "runtime-agnostic TypeScript code that runs on Bun, Node 24.2+, and Deno," which means they run their test suites on Bun, in addition to the other two.)

    Out of curiosity, I installed the canary Bun and just ran a bunch of them. It didn't take me long to find one that works on stable Bun and crashes on "canary" Bun.

          schematic git:(main)  bun upgrade --canary
        [1.55s] Upgraded.
        
        Welcome to Bun's latest canary build!
        
        Report any bugs:
        
            https://github.com/oven-sh/bun/issues
        
        Changelog:
        
            https://github.com/oven-sh/bun/compare/0d9b296af...19d8ade2c
        
          schematic git:(main)  bun run main.ts serve
        Schematic Editor running at http://localhost:4200
        Bundled page in 25ms: src/web/index.html
        frontend TypeError: Cannot destructure property 'isLikelyComponentType' from null or undefined value
            at V0 (http://localhost:4200/_bun/client/index-00000000ac7e3555.js:24:2534)
            at reactRefreshAccept (http://localhost:4200/_bun/client/index-00000000ac7e3555.js:21:6090)
            at http://localhost:4200/_bun/client/index-00000000ac7e3555.js:8766:27
            at CY (http://localhost:4200/_bun/client/index-00000000ac7e3555.js:21:8973)
            at nY (http://localhost:4200/_bun/client/index-00000000ac7e3555.js:21:9285)
            (...more like this...)
            at m (http://localhost:4200/_bun/client/index-00000000ac7e3555.js:21:8773)
            at http://localhost:4200/_bun/client/index-00000000ac7e3555.js:24:6482
            at http://localhost:4200/_bun/client/index-00000000ac7e3555.js:24:6548
            from browser tab http://localhost:4200/
        ^C
          schematic git:(main)  bun upgrade --stable
        Downgrading from Bun 1.3.14-canary to Bun v1.3.14
        [2.02s] Upgraded.
        
        Welcome to Bun v1.3.14!
        
        What's new in Bun v1.3.14:
        
            https://bun.com/blog/release-notes/bun-v1.3.14
        
        Report any bugs:
        
            https://github.com/oven-sh/bun/issues
        
        Commit log:
        
            https://github.com/oven-sh/bun/compare/bun-v1.3.14...bun-v1.3.14
          schematic git:(main)  bun run main.ts serve
        Schematic Editor running at http://localhost:4200
        [browser] Version mismatch, hard-reloading
        Bundled page in 20ms: src/web/index.html
        
        # working fine as usual... ¯\_(ಠ_ಠ)_/¯
    
    I mean, "passes the test suite" is one thing. And a good thing. But... "doesn't break any (or even, say, 99.5%) of the apps deployed around the world that are built on Bun" is a pretty radically different thing.

    It's hard to feel like this is responsible behavior, but I will reserve judgement for now, and see how long they persist this "canary" phase.

    If they extend it for a lengthy period, and even like, fix bugs on the Zig version and the Rust "canary" version, then... I would be mollified to a great extent, since it is so easy to switch between the Zig stable version and the Rust canary version.

    As a pretty heavy user of Bun, I'm actually pretty psyched for it to switch to Rust... but given the abruptness and speed so far, I can't quite shake the "new AI dealer getting high on his own supply" vibe.

    But I hope they enter an intensive phase of prioritizing any and all "canary" bugs, and come out on the other side with a better product, and an even faster rate of improvement (which has honestly been pretty wild already).

    (Yes, of course, I will have my clanker file a bug report with repro... but that may take a few days.)

    • aapoalas 7 hours ago
      This bug was already reported very soon after the merge.
  • matt3210 6 hours ago
    Now pull the branch and roll your own Bun (using an AI) against their test suite, without license issues.
  • kracket 8 hours ago
    It's going to be absolute mess of total AI slop and black box that nobody understands and is going to cause more issues than it fixes.
    • vitaminCPP 8 hours ago
      Yep. How will we manage those 10x code projects when LLM costs increase by 10x?
    • philipbjorge 8 hours ago
      I've done some pretty incredible things with LLMs. If this were sqlite with its exhaustive test suite... OK, I can see it.

      It's hard for me to see this not becoming a pile of slop, but hey, maybe I'm wrong

  • the__alchemist 5 hours ago
    Bun alert!
  • SuperV1234 12 hours ago
    I'm bullish on LLM-assisted development but this is just a very stupid way of performing such a critical migration.
  • ares623 16 hours ago
    Anyone using Bun in production excited for this release? (other than Anthropic of course)
  • aiscoming 16 hours ago
    vibe coders keep saying that now you can have 100x productivity, that you can write a million lines of code in a week and do what would take a team of 10 experienced developers a year.

    where are all these million-line vibe-coded projects? I don't see them. It's all hype

    • andai 16 hours ago
      This PR appears to be over a million lines (though GitHub won't load for me).

      Of course the quality is the real question. I haven't had amazing results with LLMs with Rust, but they're less bad at it than they are at Zig, which is probably the reason for the rewrite.

      At least in this case the original code was written carefully by hand, so the design is sane, and now just the auto-translation is in question. Now it just needs to be battle tested.

    • yoyohello13 7 hours ago
      Bun is now the example. It's >1million lines of code, entirely vibe coded. All we do now is wait and see what happens.
      • FergusArgyll 5 hours ago
        Yeah, I believe op is using sarcasm (see username for one data point)
    • pixel_popping 16 hours ago
      Bun is now literally vibe-coded, that's your proof. And Bun developers will solely use LLMs at some point (pretty close to "vibe coding").
      • q3k 15 hours ago
        Show me some gold instead of a continuous stream of pickaxes.
  • dmitrijbelikov 16 hours ago
    Rust, Zig and TS went into a bun... /s
  • xiaod 6 hours ago
    [flagged]
  • npn 13 hours ago
    farewell, bun.
  • maipen 15 hours ago
    HN overreacting again.

    I trust Jarred to make the right decisions regarding bun, which seems to be his passion. Bun has always been amazing since I first tried it; it had some bugs along the way, which didn't last long.

    Anything bad that comes from this, will simply be fixed.

    I hope more software does this and gets rid of its segfault-producing code written in C++ and other unsafe languages

    I can think of a few.

    • sensanaty 13 hours ago
      It has 10k unsafe blocks, pretty sure those segfaults are still gonna be there
      • dolmen 12 hours ago
        Definitely. That's what a good translation is.

        But then, agents can work on removing each unsafe block one by one, and this will surface issues.
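        A sketch of what one such mechanical removal can look like (hypothetical function names, not from the Bun codebase): a raw-pointer read whose bounds contract lives only in the caller's head becomes a slice access whose bounds live in the type.

        ```rust
        // Direct port of pointer-based code: the caller must uphold
        // validity and non-emptiness, or this is undefined behavior.
        unsafe fn first_byte_unchecked(ptr: *const u8) -> u8 {
            *ptr
        }

        // Safe rewrite: the slice carries its length, so the empty
        // case becomes a None instead of UB.
        fn first_byte(bytes: &[u8]) -> Option<u8> {
            bytes.first().copied()
        }

        fn main() {
            let data = [0x42u8, 0x43];
            let via_raw = unsafe { first_byte_unchecked(data.as_ptr()) };
            assert_eq!(Some(via_raw), first_byte(&data));
            assert_eq!(first_byte(&[]), None); // handled safely
        }
        ```

        Each such conversion shrinks the audit surface: the remaining unsafe blocks are the only places left where a review has to check pointer invariants by hand.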

  • aizk 12 hours ago
    I might not necessarily agree with the haste / stability of this, but I commend Jarred for pushing boundaries on what AI coding is capable of, can't deny that. 4 years ago this would've seemed like science fiction.