I was surprised to see that Java was slower than C++, but the Java code is run with `-XX:+UseSerialGC`, which is the slowest GC, meant to be used only on very small systems and to optimise for memory footprint more than performance. Also, no heap size is set, which makes it hard to know what exactly is being measured; Java allows trading off CPU for RAM and vice versa. The results would be more meaningful with an appropriate GC (Parallel, for a batch job like this) and with different heap sizes. If the rules say the program should take less than 8GB of RAM, then it's best to configure the heap to 8GB (or a little lower). Also, System.gc() shouldn't be invoked.
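Concretely, I'd expect the launch command to look something like this (a sketch; the jar and input names are placeholders, not the repo's actual files):

```
java -XX:+UseParallelGC -Xms8g -Xmx8g -jar processor.jar input.json
```

Setting -Xms equal to -Xmx also keeps the run from measuring the cost of growing the heap mid-run.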
Don't know if that would make a difference, but that's how I'd run it, because in Java, the heap/GC configuration is an important part of the program and how it's actually executed.
Of course, the most recent JDK version should be used (I guess the most recent compiler version for all languages).
It’s so hard to actually benchmark languages because so much depends on the dataset. I am pretty sure that with simdjson and some tricks I could write C++ (or Rust) that could top the leaderboard (see some of the techniques from the billion row challenge!).
tbh for silly benchmarks like this it will ultimately be hard to beat a language that compiles to machine code, due to jit warmup etc.
It’s hard to do benchmarks right. For example: are you testing IO performance? Are OS caches flushed between language runs? What kind of disk is used, etc.? Performance does not exist in a vacuum of just the language or algorithm.
I think this harness actually uses JMH, which measures after warmup.
Why are you surprised? Java always suffers an abstraction penalty for running on a VM. You should be surprised (and skeptical) if Java ever beats C++ on any benchmark.
You're right that Java lacks inline types (although it's getting them really soon now), but the main cost of that isn't stack allocation (heap allocations in Java don't cost much more than stack allocations); it's cache misses due to objects not being inlined in arrays.
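A minimal Java sketch of that cost (illustrative types, not code from the benchmark):

```java
// Illustrative only: contrasts an array of objects with parallel primitive arrays.
public class Layout {
    record Point(double x, double y) {}

    public static void main(String[] args) {
        int n = 1_000_000;

        // Array of objects: really an array of references, each pointing at a
        // separately allocated heap object, so a linear scan chases pointers.
        Point[] boxed = new Point[n];
        for (int i = 0; i < n; i++) boxed[i] = new Point(i, i);

        // Manual workaround until value types land: parallel primitive arrays
        // keep the data contiguous and cache-friendly.
        double[] xs = new double[n], ys = new double[n];
        for (int i = 0; i < n; i++) { xs[i] = i; ys[i] = i; }

        double a = 0, b = 0;
        for (Point p : boxed) a += p.x() * p.y();       // pointer-chasing scan
        for (int i = 0; i < n; i++) b += xs[i] * ys[i]; // sequential scan
        System.out.println(a == b);
    }
}
```

Both loops compute the same thing; the second just touches memory in a straight line.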
No, Java's existing compiler is very good, and it generates code about as good as you could want. There is definitely still a cost due to objects not being inlined in arrays yet (this will change soon) that impacts some programs, but in practice Java performs more or less the same as C++.
https://wiki.c2.com/?SufficientlySmartCompiler
In this case, however, it appears that the Java program may have been configured in a suboptimal way. I don't know how much of an impact it has here, but it can be very big.
Even benchmarks that allow for JIT warmup consistently show Java at roughly half the speed of C/C++/Rust. Is there something they are doing wrong? I've seen people write some really unusual Java to eliminate all runtime allocations, but that was about latency, not throughput.
For the most naive code, if you're calling `new` multiple times per row, maybe Java benefits from out-of-band GC while C++ calls destructors and free() inline as things go out of scope?
Of course, if you're optimizing, you'll reuse buffers and objects in either language.
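For example, a hedged Java sketch of the reuse pattern (the file name and the per-row "work" are stand-ins):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class Reuse {
    public static void main(String[] args) throws IOException {
        StringBuilder scratch = new StringBuilder(); // allocated once, reused per row
        long total = 0;
        try (BufferedReader in = Files.newBufferedReader(Path.of("rows.txt"))) {
            String line;
            while ((line = in.readLine()) != null) {
                scratch.setLength(0);           // reset instead of `new` per row
                scratch.append(line).reverse(); // stand-in for the real per-row work
                total += scratch.length();
            }
        }
        System.out.println(total);
    }
}
```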
Yes, but that's just one part of the equation. Machine code from compiler and/or language A is not necessarily the same as the machine code from compiler and/or language B. The reasons are, among others, available contextual information, handling of undefined behavior, and memory-access issues.
you can compile many weakly typed high level languages to machine code and their performance will still suck.
java's language design simply prohibits some optimizations that are possible in other languages (and also enables some that aren't in others).
> java's language design simply prohibits some optimizations that are possible in other languages (and also enables some that aren't in others).
This isn't really true - at least not beyond some marginal things that are of little consequence - and in fact, Java's compiler has access to more context than pretty much any AOT compiler because it's a JIT and is allowed to speculate optimisations rather than having to prove them.
It can speculate about whether an optimization is performant, not whether it is sound. I don't know enough about Java to say that it doesn't provide all the same soundness guarantees as other languages, just that it is possible for a JIT language to be hampered by this. Also, C# AOT is faster than a warmed-up C# JIT in my experience, unless the warmup takes days, which wouldn't be useful for applications like games anyway.
The study seems to be “solve this the obvious way, don’t think too hard about it”. Then the systems languages (C, Zig, C++) are pretty close, the GC languages are around an order of magnitude slower (C# and Java doing pretty well at ca. 3x), and the scripting languages around two orders of magnitude slower.
But note the HO-variants: with better algorithms, you can shave off two orders of magnitude.
So if you’re open to thinking a bit harder about the problem, maybe your badly benchmarking language is just fine after all.
I was very surprised to see the results for Common Lisp. As I scrolled down I just figured that the language was not included, until I saw it down there. I would have guessed SBCL to be much faster. I checked it out locally and got: Rust: 9ms, D: 16ms, CL: 80ms.
Looking at the implementation: just adding type annotations gave a ~10% improvement. Then making the tag-map use vectors as values, which is more appropriate than lists (imo), gave a 40% improvement over the initial version. By additionally cutting a few allocations, the total time is halved. I'm guessing other languages will have similar easy improvements.
D gets no respect. It's a solid language with a lot of great features and conveniences compared to C++ but it barely gets a passing mention (if that) when language discussions pop up. I'd argue a lot of the problems people have with C++ are addressed with D but they have no idea.
Ecosystem isn't that great, and much of it relies on the GC. If you're going to move out of C++, you might as well go all in on a GC language (Java, C#, Go) or use Rust. D's value proposition isn't enough to compete with those languages.
D has a GC and it’s optional, which should be the best of both worlds in theory.
Also D is older than Go and Rust and only a few months younger than C#. So the question then becomes “why weren’t people using D when your recommended alternatives weren’t an option?” Or “why use the alternatives (when they were new) when D already exists?”
This is only true in the most technical sense: you can easily opt out of the GC, but you will struggle with the standard library, and probably most third-party libraries too. It's the baseline assumption after all, hence why it's opt-out, not opt-in. There was a DConf talk about the future of Phobos which indicated increased support for @nogc, but that is a ways away, and even then it won't cover everything. If you're opting out of the GC, you are giving up a lot. And honestly, if you really don't want the GC, you may be better off with Zig.
Garbage collection has never been a major issue for most use cases. However, the Phobos vs. Tango and D1 vs. D2 splits severely slowed D’s adoption, causing it to miss the golden window before C++11, Go, and Rust emerged.
But popularity/awareness/ecosystem matter.
Especially with Nim it's so easy to make quality libraries with Codex/Claude Code and a couple of hours as a hobby.
Especially when they run fast. I just made Metal bindings and got 120 FPS demos with SDF bitmaps running yesterday while eating Saturday brunch.
I don't really get the idea that LLMs lower the level of familiarity one needs to have with a language.
A standup comedian from Australia should not assume that the audience in the Himalayas is laughing because the LLM the comedian used 20 minutes before was really good at translating the comedian's routine.
But I suppose it is normal for developers to assume that a compiler translated their Haskell into x86_64 instructions perfectly, then turned around and did the same for three different flavors of Arm instructions. So why shouldn't an LLM turn piles of oral descriptions into perfectly architected Nim?
For some reason I don't feel the same urgency to double-check the details of the Arm instructions as I feel about inspecting the Nim or Haskell or whatever the LLM generated.
If the difference in performance between the target language and C++ is huge, it's probably not the language that's great, but some quirk of implementation.
C# is very fast (see the multicore rating). An implementation based on SIMD (Vector), memory spans, stackalloc, source generators and what have you: modern C# allows you to go very low-level and very fast.
Probably even faster under .NET 10.
Though using Stopwatch for the benchmark is killing me :-) I wonder if multiple runs via BenchmarkDotNet would show better times (also due to JIT optimizations). For example, the Java code had more warm-up iterations before measuring.
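BenchmarkDotNet's counterpart on the Java side is JMH; a minimal sketch of a harnessed run with explicit warm-up (the workload and file name are placeholders, not the repo's code):

```java
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@Warmup(iterations = 5)       // JIT warm-up runs, excluded from results
@Measurement(iterations = 10) // measured runs
@Fork(1)
@State(Scope.Benchmark)
public class ParseBench {
    byte[] input;

    @Setup
    public void load() throws Exception {
        input = java.nio.file.Files.readAllBytes(java.nio.file.Path.of("data.json"));
    }

    @Benchmark
    public long parse() {
        return processRows(input); // placeholder for the actual workload
    }

    static long processRows(byte[] data) { return data.length; } // stub
}
```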
This entire benchmark is frankly a joke. As other commenters have pointed out, the compiler flags make no sense, they use pretty egregious ways to measure performance, and ancient versions are being used across the board. Worst of all, the code quality in each sample is extremely variable and some are _really_ bad.
> Must: Represent tags as strings
Provided the correct result is generated, I don't get the rationale for this one. As long as you obey the other rule for UTF-8 compatibility, why would it be a problem to represent tags as bytes (or anything else)?
Seems like it would put e.g. GC'ed languages where strings are immutable at a big disadvantage
This can obviously be true for toy problems, but tends not to generalize.
About the C++ version: You have to be an absolute weirdo to (sometimes) put the opening brace of functions on the same line, but on the next line for if and for bodies.
I think there was a name for that brace style? It seems silly, but having left C++ development after decades (for a variety of reasons), it turns out a standard formatting tool was one of my favorite features.
I mean, this is only meant to be an iteration, if I understand correctly. It's not like someone is going around citing this benchmark yelling "rewrite everything in Julia / D". Imo this is a good starting point if you are doubtful or fall into the trap of thinking Java is not fast. For most workloads we can clearly see that Java trades off the control of C++ for "about the same speed" and a much, much larger and well-managed ecosystem. (Except for the other day, when someone's OpenJDK PR was left hanging for a month, and I'm not sure why.)
Quality does vary wildly because the languages vary wildly in terms of language constructs and standard libraries. Proficiency in every.single.language. used in the benchmark perhaps should not be taken for granted.
But it is a GitHub repository, and the repository owner appears to accept PRs and allows people to raise an issue to provide their feedback, or… it can be forked and improved upon. Feel free to jump in and contribute to make it a better benchmark that will not be «frankly a joke» or «_really_ bad».
I'm completely alright with just having fun and hosting your own little sandboxes online, but what good does it do to post and share this with others in its current state? The picture it paints is certainly not representative, and this sort of thing has been done a million times over with much better consistency. Again, I think it's great to hack around in every language and document your journey all the way, but sharing this is borderline misinformation. It's certainly not my duty to right the wrongs of this benchmark.
The fact that Julia “highly optimized” is 30x faster than the normal Julia implementation, yet still fails to reach for some pretty obvious optimizations, and uses a joke package called “SuperDataStructures” tells me that maybe this benchmark shouldn’t be taken all that seriously.
Benchmarks like this can still be fun and informative
https://niklas-heer.github.io/speed-comparison
Certainly does "look" very interesting.
Nowadays whenever I see benchmarks of different languages, I really compare them to benjdd.com/languages or benjdd.com/languages2.
Ended up creating a visualization of this data if anybody's interested:
https://serjaimelannister.github.io/data-processing-benchmar...
(Given credits to both sources in the description of this repo)
(Also, fair disclosure: it was generated just out of curiosity about how this benchmark data might look on benjdd's UI, and I used LLMs for prototyping purposes. The result looks pretty similar imo for visualization, so full credit to benjdd's awesome visualization; I just wanted to see this data in that format and ended up having it open source / on GitHub Pages.)
I think benjdd's on Hacker News too, so hi Ben! Your website's really cool!
Someone replied to me in an old comment that for fast Python you have to use numpy. In the folder there is a program in plain python, another with numpy and another with numba. I'm not sure why only one is shown in the data.
Disclaimer: I used numpy and numba, but my level is quite low. Almost as if I just type `import numpy as np` and hope for the best.
As do we all. If you browse through deep learning code, a large majority is tensor juggling.
For what it's worth, I've ported a lot of heavily optimized numpy code to Julia for work, and consistently gotten 10x-100x speedups, largely due to how much easier it is to control memory allocations and parallelize more effectively.
It's not really surprising given the implementations. The C# stdlib just exposes more low-level levers here (quick look, correct me if I'm wrong):
For one, the C# code is explicitly using SIMD (System.Numerics.Vector) to process blocks, whereas Go is doing it scalar. It also uses a read-only FrozenDictionary which is heavily optimized for fast lookups compared to a standard map.
Parallel.For effectively maps to OS threads, avoiding the Go scheduler's overhead (like preemption every few ms) which is small but still unnecessary for pure number crunching. But a bigger bottleneck is probably synchronization: The Go version writes to a channel in every iteration. Even buffered, that implies internal locking/mutex contention. C# is just writing to pre-allocated memory indices on unrelated disjoint chunks, so there's no synchronization at all.
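The pattern translates to any language; a Java sketch of the disjoint-chunks idea (the workload is a stand-in): workers each own an index range of one pre-allocated array, so no channel or lock is needed per item.

```java
import java.util.stream.IntStream;

public class Chunks {
    public static void main(String[] args) {
        int n = 10_000_000, workers = 8;
        double[] out = new double[n]; // pre-allocated once, shared but never contended
        int chunk = (n + workers - 1) / workers;

        IntStream.range(0, workers).parallel().forEach(w -> {
            int from = w * chunk, to = Math.min(from + chunk, n);
            for (int i = from; i < to; i++)
                out[i] = Math.sqrt(i); // each worker writes only its own range
        });
        System.out.println(out[n - 1]);
    }
}
```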
The quality of the benchmark code is... not great. This seems like Zig written by someone who doesn't know Zig or asked Claude to write it for them. Hell, actually Claude might do a better job here.
In short, I wouldn't trust these results for anything concrete. If you're evaluating which language is a better fit for your problem, craft your own benchmark tailored for that problem instead.
Modern C# has many low-level knobs (still in a safe way, though it also supports unsafe) for zero allocation, hardware intrinsics, devirtualization of calls at runtime, etc.: SIMD (Vector), memory spans, stackalloc, source generators (which help with very efficient JSON), etc.
Although it is a very single-thread-biased test.
Most of all: C# has a very nice framework and tooling (Rider).
I wrote a script (now an app basically haha) to migrate data from EMR #1 to EMR #2 and I chose Nim because it feels like Python but it's fast as hell. Claude Code did a fine job understanding and writing Nim especially when I gave it more explicit instructions in the system prompt.
Genuine question: are GitHub workflows stable enough to be used for benchmarking? Like, is CPU time-quantum scheduling guaranteed to be the same from run to run?
I will have a look, but R has much better data structures than Python for data processing (everything is a vector in R)
EDIT: they have one script, related.R, in their repo, which is 3 years old and uses jsonlite as a package, which is notoriously slow. Using a package such as yyjsonr yields 10x performance, so something tells me that whoever wrote this piece of code has never heard of R before.
That only applies in an apples-to-apples comparison, i.e., same data structures, same algorithm, etc.
You can't compare sorting in C and Python, but use bubble sort in C and radix sort in Python.
Here, different data structures are being used.
> D [HO] and Julia [HO] footnote: Uses specialized datastructures meant for demonstration purposes
You're right of course, but it also depends on how long you want to spend on it. If Python gives you radix sort directly, and the C implementation you can have in the same time is bubble sort because you spent so much time setting up the project and finding the right libs, it kinda makes sense.