
  • pron 15 hours ago
    I was surprised to see that Java was slower than C++, but the Java code is run with `-XX:+UseSerialGC`, which is the slowest GC, meant to be used only on very small systems and to optimise for memory footprint more than performance. Also, no heap size is set, which means it's hard to know what exactly is being measured. Java allows trading off CPU for RAM and vice versa. It would be meaningful if an appropriate GC were used (Parallel, for this batch job) and with different heap sizes; a sketch of such a command line is below. If the rules say the program should take less than 8GB of RAM, then it's best to configure the heap to 8GB (or a little lower). Also, System.gc() shouldn't be invoked.

    Don't know if that would make a difference, but that's how I'd run it, because in Java, the heap/GC configuration is an important part of the program and how it's actually executed.

    Of course, the most recent JDK version should be used (I guess the most recent compiler version for all languages).
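
    A minimal sketch of the kind of invocation meant above; the class name and input file are placeholders, not the repo's actual ones:

      # Parallel GC for a throughput-oriented batch job, heap pinned near the 8GB
      # budget, and explicit System.gc() calls turned into no-ops
      java -XX:+UseParallelGC -Xms7g -Xmx7g -XX:+DisableExplicitGC Benchmark data.json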

    • rockwotj 13 hours ago
      It's so hard to actually benchmark languages because so much depends on the dataset. I'm pretty sure that with simdjson and some tricks I could write C++ (or Rust) that would top the leaderboard (see some of the techniques from the billion row challenge!).

      Tbh, for silly benchmarks like this it will ultimately be hard to beat a language that compiles ahead of time to machine code, due to JIT warmup etc.

      It's hard to do benchmarks right. For example, are you testing IO performance? Are OS caches flushed between language runs? What kind of disk is used, etc.? Performance does not exist in a vacuum of just the language or algorithm.

      • pron 11 hours ago
        > due to jit warmup

        I think this harness actually uses JMH, which measures after warmup.
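
        For context, a minimal sketch of how a JMH benchmark declares warmup vs. measurement, assuming the harness uses the standard annotations (the class name, workload and iteration counts here are made up):

          import java.util.concurrent.TimeUnit;
          import org.openjdk.jmh.annotations.*;

          @BenchmarkMode(Mode.AverageTime)
          @OutputTimeUnit(TimeUnit.MILLISECONDS)
          @Warmup(iterations = 5)       // warmup rounds: run but not counted
          @Measurement(iterations = 10) // only these iterations are reported
          @Fork(1)
          @State(Scope.Benchmark)
          public class ParseBench {
              @Benchmark
              public long parse() {
                  long sum = 0;                       // placeholder workload
                  for (int i = 0; i < 1_000_000; i++) sum += i;
                  return sum;
              }
          }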

    • KerrAvon 12 hours ago
      Why are you surprised? Java always suffers from an abstraction penalty for running on a VM. You should be surprised (and skeptical) if Java ever beats C++ on any benchmark.
      • pron 11 hours ago
        The only "abstraction penalty" of "running on a VM" (by which I think you mean using a JIT compiler), is the warmup time of waiting for the JIT.
        • xigoi 4 hours ago
          The true penalty of Java is that product types have to be heap-allocated, as there is no mechanism for stack-allocated product types.
          • pron 3 hours ago
            You're right that Java lacks inline types (although it's getting them really soon now), but the main cost of that isn't stack allocation (heap allocations in Java don't cost much more than stack allocations) but cache misses due to objects not being inlined in arrays.
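
            A sketch of the array-layout point, with hypothetical types (not the benchmark's code): a Point[] is an array of references to separately allocated objects, so iterating it chases pointers; flattening into primitive arrays keeps the data contiguous, which is roughly what inline/value types will eventually give you for free.

              public class Layout {
                  // Each Point is a separate heap object; Point[] holds references,
                  // so a scan over it tends to miss cache.
                  record Point(double x, double y) {}

                  public static void main(String[] args) {
                      int n = 1_000_000;
                      Point[] boxed = new Point[n];
                      double[] xs = new double[n], ys = new double[n];
                      for (int i = 0; i < n; i++) {
                          boxed[i] = new Point(i, i);
                          xs[i] = i; ys[i] = i;
                      }
                      double a = 0, b = 0;
                      for (Point p : boxed) a += p.x() * p.y();       // pointer-chasing scan
                      for (int i = 0; i < n; i++) b += xs[i] * ys[i]; // contiguous scan
                      System.out.println(a + " " + b);
                  }
              }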
        • andersmurphy 8 hours ago
          It's a statement of our times that this is getting downvoted. JIT is so underrated.
        • stefs 5 hours ago
          in my opinion, this assertion suffers from the "sufficiently smart compiler" fallacy somewhat.

          https://wiki.c2.com/?SufficientlySmartCompiler

          • pron 3 hours ago
            No, Java's existing compiler is very good, and it generates code that's as good as you'd want. There is definitely still a cost due to objects not being inlined in arrays yet (this will change soon) that impacts some programs, but in practice Java performs more or less the same as C++.

            In this case, however, it appears that the Java program may have been configured in a suboptimal way. I don't know how much of an impact it has here, but it can be very big.

            • galangalalgol 1 hour ago
              Even benchmarks that allow for jit warmup consistently show java roughly half the speed of c/c++/rust. Is there something they are doing wrong? I've seen people write some really unusual java to eliminate all runtime allocations, but that was about latency, not throughput.
          • sswatson 5 hours ago
            The linked article makes a specific carveout for Java, on the grounds that its SufficientlySmartCompiler is real, not hypothetical.
          • remexre 5 hours ago
            c++ certainly also has and needs a similarly sufficiently smart compiler to be compiled at all…
      • woooooo 12 hours ago
        For the most naive code, if you're calling "new" multiple times per row, maybe Java benefits from out of band GC while C++ calls destructors and free() inline as things go out of scope?

        Of course, if you're optimizing, you'll reuse buffers and objects in either language.
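
        A sketch of what that reuse can look like in Java, with a made-up per-row workload (the file path argument and the work done per line are placeholders):

          import java.io.BufferedReader;
          import java.io.IOException;
          import java.nio.file.Files;
          import java.nio.file.Path;

          public class ReuseDemo {
              public static void main(String[] args) throws IOException {
                  // One builder reused for every row instead of a fresh allocation per row.
                  StringBuilder scratch = new StringBuilder(256);
                  long total = 0;
                  try (BufferedReader in = Files.newBufferedReader(Path.of(args[0]))) {
                      String line;
                      while ((line = in.readLine()) != null) {
                          scratch.setLength(0);      // reset, don't reallocate
                          scratch.append(line);
                          total += scratch.length(); // stand-in for real per-row work
                      }
                  }
                  System.out.println(total);
              }
          }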

      • cryptos 6 hours ago
        In the end, even Java code becomes machine code at some point (at least the hot paths).
        • stefs 5 hours ago
          yes, but that's just one part of the equation. machine code from compiler and/or language A is not necessarily the same as the machine code from compiler and/or language B. the reasons are, among others, contextual information, handling of undefined behavior and memory access issues.

          you can compile many weakly typed high level languages to machine code and their performance will still suck.

          java's language design simply prohibits some optimizations that are possible in other languages (and also enables some that aren't in others).

          • pron 3 hours ago
            > java's language design simply prohibits some optimizations that are possible in other languages (and also enables some that aren't in others).

            This isn't really true - at least not beyond some marginal things that are of little consequence - and in fact, Java's compiler has access to more context than pretty much any AOT compiler because it's a JIT and is allowed to speculate optimisations rather than having to prove them.
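
            A sketch of the kind of speculation meant, with hypothetical types: if only one implementation of an interface has been observed at a call site, the JIT can guess the receiver, inline the call, and deoptimize if the guess ever fails, while an AOT compiler generally has to prove the target or keep the virtual dispatch.

              interface Shape { double area(); }

              final class Circle implements Shape {
                  private final double r;
                  Circle(double r) { this.r = r; }
                  public double area() { return Math.PI * r * r; }
              }

              public class Speculate {
                  // If only Circle has been loaded/seen here, the JIT can compile this
                  // loop as "check it's a Circle, use the inlined Circle.area()", with a
                  // deoptimization fallback if another Shape ever appears.
                  static double total(Shape[] shapes) {
                      double sum = 0;
                      for (Shape s : shapes) sum += s.area();
                      return sum;
                  }

                  public static void main(String[] args) {
                      Shape[] shapes = new Shape[1000];
                      for (int i = 0; i < shapes.length; i++) shapes[i] = new Circle(i);
                      System.out.println(total(shapes));
                  }
              }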

            • galangalalgol 1 hour ago
              It can speculate whether an optimization is performant. Not whether it is sound. I don't know enough about java to say that it doesn't provide all the same soundness guarantees as other languages, just that it is possible for a jit language to be hampered by this. Also c# aot is faster than a warmed up c# jit in my experience, unless the warmup takes days, which wouldn't be useful for applications like games anyway.
  • debois 1 hour ago
    The study seems to be “solve this the obvious way, don’t think too hard about it”. Then the systems languages (C, Zig, C++) are pretty close, the GC languages are around an order of magnitude slower (C#, Java doing pretty good at ca. 3x), and the scripting languages around two orders of magnitude slower.

    But note the HO-variants: with better algorithms, you can shave off two orders of magnitude.

    So if you're open to thinking a bit harder about the problem, maybe the language that benchmarks badly here is just fine after all.

  • XJ6w9dTdM 11 hours ago
    I was very surprised to see the results for Common Lisp. As I scrolled down I just figured that the language was not included, until I saw it down there. I would have guessed SBCL to be much faster. I checked it out locally and got: Rust: 9ms, D: 16ms, and CL: 80ms.

    Looking at the implementation, just adding type annotations gave a ~10% improvement. Then having the tag-map use vectors as values, which is more appropriate than lists (imo), gave a 40% improvement over the initial version. By additionally cutting a few allocations, the total time is halved. I'm guessing other languages will have similar easy improvements.

  • jhack 14 hours ago
    D gets no respect. It's a solid language with a lot of great features and conveniences compared to C++ but it barely gets a passing mention (if that) when language discussions pop up. I'd argue a lot of the problems people have with C++ are addressed with D but they have no idea.
    • maleldil 7 hours ago
      Ecosystem isn't that great, and much of it relies on the GC. If you're going to move out of C++, you might as well go all in on a GC language (Java, C#, Go) or use Rust. D's value proposition isn't enough to compete with those languages.
      • hnlmorg 6 hours ago
        D has a GC and it’s optional. Which should be the best of both worlds in theory.

        Also, D is older than Go and Rust and only a few months younger than C#. So the question then becomes “why weren’t people using D when your recommended alternatives weren’t an option?” Or “why use the alternatives (when they were new) when D already existed?”

        • Defletter 5 hours ago
          > D has a GC and it’s optional.

          This is only true in the most technical sense: you can easily opt out of the GC, but you will struggle with the standard library, and probably most third-party libraries too. It's the baseline assumption after all, which is why it's opt-out, not opt-in. There was a DConf talk about the future of Phobos which indicated increased support for @nogc, but this is a ways away, and even then. If you're opting out of the GC, you are giving up a lot. And honestly, if you really don't want the GC, you may be better off with Zig.

      • chenzhekl 5 hours ago
        Garbage collection has never been a major issue for most use cases. However, the Phobos vs. Tango and D1 vs. D2 splits severely slowed D’s adoption, causing it to miss the golden window before C++11, Go, and Rust emerged.
    • rsyring 12 hours ago
      Could say the same for Nim.

      But popularity/awareness/ecosystem matter.

      • elcritch 8 hours ago
        That's the great thing about LLMs.

        Especially with Nim it's so easy to make quality libraries with Codex/Claude Code and a couple of hours as a hobby.

        Especially when they run fast. I just made Metal bindings and got 120 FPS demos with SDF bitmaps running yesterday while eating Saturday brunch.

        • freeopinion 57 minutes ago
          I don't really get the idea that LLMs lower the level of familiarity one needs to have with a language.

          A standup comedian from Australia should not assume that the audience in the Himalayas is laughing because the LLM the comedian used 20 minutes before was really good at translating the comedian's routine.

          But I suppose it is normal for developers to assume that a compiler translated their Haskell into x86_64 instructions perfectly, then turned around and did the same for three different flavors of Arm instructions. So why shouldn't an LLM turn piles of oral descriptions into perfectly architected Nim?

          For some reason I don't feel the same urgency to double-check the details of the Arm instructions as I feel about inspecting the Nim or Haskell or whatever the LLM generated.

    • Ygg2 12 hours ago
      If the difference in performance between the target language and C++ is huge, it's probably not the language that's great, but some quirk of the implementation.
  • piskov 13 hours ago
    C# is very fast (see the multicore rating). The implementation is based on SIMD (Vector), memory spans, stackalloc, source generators and what have you; modern C# allows you to go very low-level and very fast.

    Probably even faster under .NET 10.

    Though using Stopwatch for the benchmark is killing me :-) I wonder if multiple runs via BenchmarkDotNet would show better times (also due to JIT optimizations). For example, the Java code had more warm-up iterations before measuring.

  • von_lohengramm 13 hours ago
    This entire benchmark is frankly a joke. As other commenters have pointed out, the compiler flags make no sense, they use pretty egregious ways to measure performance, and ancient versions are being used across the board. Worst of all, the code quality in each sample is extremely variable and some are _really_ bad.
    • dwroberts 3 hours ago
      Some of the rules seem very arbitrary too

      > Must: Represent tags as strings

      Provided the correct result is generated I don't get the rationale for this one. As long as you obey the other rule for UTF-8 compatibility, why would it be a problem to represent as bytes (or anything else)?

      Seems like it would put e.g. GC'ed languages where strings are immutable at a big disadvantage

    • ahartmetz 3 hours ago
      About the C++ version: You have to be an absolute weirdo to (sometimes) put the opening brace of functions on the same line, but on the next line for if and for bodies.
      • galangalalgol 58 minutes ago
        I think there was a name for that brace style? It seems silly, but having left C++ development after decades for a variety of reasons, it turned out a standard formatting tool was one of my favorite features.
        • ahartmetz 14 minutes ago
          For mixing styles like that?

            void myFunc(int foo){
                if (foo > 42)
                {
                    frobnicate();
                }
            }
    • another_twist 13 hours ago
      I mean, this is only meant to be an iteration, if I understand correctly. It's not like someone is going around citing this benchmark yelling "rewrite everything in Julia / D". Imo this is a good starting point if you are doubtful or fall into the trap of thinking Java is not fast. For most workloads we can clearly see that Java trades off the control of C++ for "about the same speed" and a much, much larger and well-managed ecosystem. (Except for the other day, when someone's OpenJDK PR was left hanging for a month, and I'm not sure why.)
      • nnevatie 6 hours ago
        If you get the same speeds for C++ and Java, I'd like to point out that the C++ implementation is likely very sub-optimal.

        This can obviously be true for toy problems, but tends not to generalize.

    • inkyoto 10 hours ago
      Quality does vary wildly because the languages vary wildly in terms of language constructs and standard libraries. Proficiency in every.single.language. used in the benchmark perhaps should not be taken for granted.

      But it is a GitHub repository and the repository owner appears to accept PRs and allows people to raise an issue to provide their feedback, or… it can be forked and improved upon. Feel free to jump in and contribute to make it a better benchmark that will not be «frankly a joke» or «_really_ bad».

      • von_lohengramm 7 hours ago
        I'm completely alright with just having fun and hosting your own little sandboxes online, but what good does it do to post and share this with others in its current state? The picture it paints is certainly not representative, and this sort of thing has been done a million times over with much better consistency. Again, I think it's great to hack around in every language and document your journey all the way, but sharing this is borderline misinformation. It's certainly not my duty to right the wrongs of this benchmark.
  • jakobnissen 6 hours ago
    The fact that the “highly optimized” Julia is 30x faster than the normal Julia implementation, yet still fails to reach for some pretty obvious optimizations and uses a joke package called “SuperDataStructures”, tells me that maybe this benchmark shouldn’t be taken all that seriously.

    Benchmarks like this can still be fun and informative

  • stu2421 5 hours ago
    For comparison here's one from Dec '25

    https://niklas-heer.github.io/speed-comparison

    Certainly does "look" very interesting.

  • hgs3 9 hours ago
    Why is there no C benchmark? The C++ benchmark appears to be "modern C++" which isn't a substitute.
  • Imustaskforhelp 15 hours ago
    This is really interesting. Julia is a beast compared to python.

    Nowadays, whenever I see benchmarks of different languages, I really compare them to benjdd.com/languages or benjdd.com/languages2

    Ended up creating a visualization of this data if anybody's interested

    https://serjaimelannister.github.io/data-processing-benchmar...

    (I've given credit to both sources in the description of the repo)

    (Also, fair disclosure: it was generated just out of curiosity about how this benchmark data might look in benjdd's UI, and I used LLMs for prototyping purposes. The result looks pretty similar imo, so full credit to benjdd's awesome visualization; I just wanted to see this data in that format for myself, but ended up having it open source/on GitHub Pages.)

    I think benjdd's on Hacker News too, so hi ben! Your website's really cool!

    • gus_massa 13 hours ago
      Someone replied to me in an old comment that for fast Python you have to use numpy. In the folder there is a program in plain python, another with numpy and another with numba. I'm not sure why only one is shown in the data.

      Disclaimer: I used numpy and numba, but my level is quite low. Almost as if I just type `import numpy as np` and hope for the best.

      • SatvikBeri 9 hours ago
        For what it's worth, I've ported a lot of heavily optimized numpy code to Julia for work, and consistently gotten 10x-100x speedups, largely due to how much easier it is to control memory allocations and parallelize more effectively.
      • another_twist 13 hours ago
        > Almost as if I just type `import numpy as np` and hope for the best.

        As do we all. If you browse through deep learning code, a large majority of it is tensor juggling.

  • aatd86 13 hours ago
    Isn't that measuring the speed of json encoding instead?
  • matthewfcarlson 13 hours ago
    I see some questions around the methodology of the testing. But is this representative of Ruby? Several minutes total when most finish under a second?
  • gethly 10 hours ago
    Go being beaten by C# in multicore is quite hard to believe. Also Zig and Odin doing so "poorly" in single core is strange.
    • kdps 1 hour ago
      It's not really surprising given the implementations. The C# stdlib just exposes more low-level levers here (quick look, correct me if I'm wrong):

      For one, the C# code is explicitly using SIMD (System.Numerics.Vector) to process blocks, whereas Go is doing it scalar. It also uses a read-only FrozenDictionary which is heavily optimized for fast lookups compared to a standard map. Parallel.For effectively maps to OS threads, avoiding the Go scheduler's overhead (like preemption every few ms) which is small but still unnecessary for pure number crunching. But a bigger bottleneck is probably synchronization: The Go version writes to a channel in every iteration. Even buffered, that implies internal locking/mutex contention. C# is just writing to pre-allocated memory indices on unrelated disjoint chunks, so there's no synchronization at all.

      • freeopinion 47 minutes ago
        In other words the benchmark doesn't even use the same hardware for each run?
    • osmsucks 6 hours ago
      The quality of the benchmark code is... not great. This seems like Zig written by someone who doesn't know Zig or asked Claude to write it for them. Hell, actually Claude might do a better job here.

      In short, I wouldn't trust these results for anything concrete. If you're evaluating which language is a better fit for your problem, craft your own benchmark tailored for that problem instead.

    • piskov 3 hours ago
      Modern C# has many low-level knobs (still in a safe way, though it also supports unsafe) for zero allocation, hardware intrinsics, devirtualization of calls at runtime, etc.: SIMD (Vector), memory spans, stackalloc, source generators (which help with very efficient JSON), and so on.

      Most of all: C# has a very nice framework and tooling (Rider).

    • DeathArrow 4 hours ago
      Go is beaten constantly by C# in both Benchmark Game and Techempower benchmarks.
  • jasonjmcghee 11 hours ago
    What's up with the massive jump from 20k to 60k for nearly all languages?
    • foota 11 hours ago
      My guess would be cache related. 5k probably fits in L1-L2 cache, whereas 20k might put you into L3.
  • pyrolistical 13 hours ago
    That's odd, Zig concurrent got slower.
    • another_twist 13 hours ago
      Contention overhead, likely. Performance is more than just the language.
      • pyrolistical 11 hours ago
        Also 3 years old. Zig has been rewritten in that time
  • sergiotapia 11 hours ago
    I wrote a script (now an app basically haha) to migrate data from EMR #1 to EMR #2 and I chose Nim because it feels like Python but it's fast as hell. Claude Code did a fine job understanding and writing Nim especially when I gave it more explicit instructions in the system prompt.
  • KerrAvon 12 hours ago
    Genuine question: Are GitHub workflows stable enough to be used for benchmarking? Like, is CPU time quantum scheduling guaranteed to be the same from run to run?
    • vlovich123 10 hours ago
      No, it’s sloppy benchmarking
  • ekianjo 14 hours ago
    Data processing benchmark but somehow R is not even mentioned?
    • mcdermott 13 hours ago
      It would be the slowest language result on the list.
      • ekianjo 10 hours ago
        Slower than Python? I seriously doubt that
        • mcdermott 6 hours ago
          Port the script to R, benchmark and report your results. Python is slow, but R is generally much slower.
          • ekianjo 23 minutes ago
            I will have a look, but R has much better data structures than Python for data processing (everything is a vector in R)

            EDIT: they have one script related.R in their repo, which is 3 years old and uses the jsonlite package, which is notoriously slow. Using a package such as yyjsonr yields 10x the performance, so something tells me that whoever wrote this piece of code has never heard of R before.

  • Vaslo 15 hours ago
    So in the D vs Zig vs Rust vs C fight, learn D if speed is your thing?
    • hiccuphippo 1 hour ago
      Don't know about D, but C, Zig and Rust use LLVM, so there should be no difference.
    • Ygg2 12 hours ago
      That only applies in an apples-to-apples comparison, i.e., same data structures, same algorithm, etc. You can't compare sorting in C and Python, but use bubble sort in C and radix sort in Python.

      Here, different data structures are being used.

      > D [HO] and Julia [HO] footnote: Uses specialized datastructures meant for demonstration purposes

      • makapuf 6 hours ago
        You're right of course, but it also depends on how long you want to spend on it. If Python gives you radix sort directly, and the C implementation you can have in the same amount of time is bubble sort because you spent much of it setting up the project and finding the right libs, it kinda makes sense.