CPython 3.13 went further with an experimental copy-and-patch JIT compiler -- a lightweight JIT that stitches together pre-compiled machine code templates instead of generating code from scratch. It's not a full optimizing JIT like V8's TurboFan or a tracing JIT like PyPy's;
Good news: Python 3.15 adapts PyPy's tracing approach for its JIT, and there are real performance gains now:
While this is great, I expected the faster-CPython work to eventually culminate in something like what YJIT is for Ruby. I'm not sure the current approaches they are trying will get the ecosystem there.
I implemented most of the tracing JIT frontend in Python 3.15, with help from Mark to clean up and fix my code. I also coordinated some of the community JIT optimizer effort in Python 3.15 (note: NOT the code generator/DSL/infra, that's Mark, Diego, Brandt and Savannah). So I think I'm able to answer this.
I can't speak for everyone on the team, but I did try the lazy basic block versioning in YJIT in a fork of CPython. The main problem is that the copy-and-patch backend we currently have in CPython is not too amenable to self-modifying machine code. This makes inter-block jumps/fallthroughs very inefficient. It can be done, it's just a little strange. Also for security reasons, we tried not to have self-modifying code in the original JIT and we're hoping to stick to that. Everything has its tradeoffs---design is hard! It's not too difficult to go from tracing to lazy basic blocks. Conceptually they're somewhat similar, as the original paper points out. The main thing we lack is the compact per-block type information that something like YJIT/Higgs has.
I guess while I'm here I might as well make the distinction:
- Tracing is the JIT frontend (region selection).
- Copy and Patch is the JIT backend (code generation).
We currently use both. PyPy uses meta-tracing: it traces the runtime (the interpreter) itself, whereas in CPython's case the tracing is over the user's code. I did take a look at PyPy's code, and a lot of ideas in the improved JIT are actually imported from PyPy directly. So I have to thank them for their great ideas. I also talk to some of the PyPy devs.
Ending off: the team is extremely lean right now. Only 2 people were generously employed by ARM to work on this full time (thanks a lot to ARM too!). The rest of us are mostly volunteers, or have some bosses that like open source contributions and allow some free time. As for me, I'm unemployed at the moment and this is basically my passion project. I'm just happy the JIT is finally working now after spending 2-3 years of my life on it :). If you go to Savannah's website [1], the JIT is around 100% faster for toy programs like Richards, and even for big programs like tomli parsing, it's 28% faster on macOS AArch64. The JIT is very much a community effort right now.
[1]: https://doesjitgobrrr.com/?goals=5,10
PS: If you want to see how the work has progressed, click "all time" on that website; it's pretty cool to see (lower is faster). I have a blog post explaining how we made the JIT faster here: https://fidget-spinner.github.io/posts/faster-jit-plan.html.
Thank you for your contributions to the Python ecosystem. It definitely is inspiring to see Python, the language to which I owe my career and interest in tech, grow into a performant language year by year. This would not have been possible without individuals like you.
In practice the ladder has two rungs for me. Write it in Python with numpy/scipy doing the heavy lifting, and if that's not enough, rewrite the hot path in C. The middle steps always felt like they added complexity without fully solving the problem.
The JIT work kenjin4096 describes is really promising though. If the tracing JIT in 3.15 actually sticks, a lot of this ladder just goes away for common workloads.
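For anyone newer to that first rung, it usually just means pushing the loop into numpy. A minimal sketch (illustrative, not from the article):

```python
import numpy as np

xs = np.random.rand(1_000_000)

# pure-Python rung: the interpreter executes a million iterations
total = 0.0
for x in xs:
    total += x * x

# numpy rung: same result, but the loop runs inside compiled C
total_np = float(np.dot(xs, xs))
```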
JAX seems quite interesting even from this point of view… numpy has the same problem as BLAS basically, right? The limited interface. Eventually this leads to heresies like daxpby, and where does the madness stop once you’ve allowed that sort of thing? Better to create some sort of array language.
I've been in the pandas (and now polars) world for the past 15 years. Staying in the sandbox gets most folks good enough performance. (That's why Python is the language of data science and ML).
I generally teach my clients to reach for numba first. Potentially lots of bang for little buck.
One overlooked area in the article is running on GPUs. Some numpy and pandas (and polars) code can get a big speedup by using GPUs (same code, with just an import change).
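To make the numba suggestion concrete, here's a minimal sketch of the pattern (assuming numba is installed; the function is a made-up example, not from the article):

```python
import numpy as np
from numba import njit

@njit  # compiles this function to machine code on first call
def pairwise_dist(points):
    n = points.shape[0]
    out = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            d = 0.0
            for k in range(points.shape[1]):
                diff = points[i, k] - points[j, k]
                d += diff * diff
            out[i, j] = d ** 0.5
    return out

pts = np.random.rand(500, 3)
pairwise_dist(pts)  # first call pays the compilation cost
pairwise_dist(pts)  # subsequent calls run the compiled loop
```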
> The real story is that Python is designed to be maximally dynamic -- you can monkey-patch methods at runtime, replace builtins, change a class's inheritance chain while instances exist -- and that design makes it fundamentally hard to optimize. ...
> 4 bytes of number, 24 bytes of machinery to support dynamism. a + b means: dereference two heap pointers, look up type slots, dispatch to int.__add__, allocate a new PyObject for the result (unless it hits the small-integer cache), update reference counts.
Would Python be a lot less useful without being maximally dynamic everywhere? Are there domains/frameworks/packages that benefit from this where this is a good trade-off?
I can't think of cases in strong statically typed languages where I've wanted something like monkey patching, and when I see monkey patching elsewhere there's often some reasonable alternative or it only needs to be used very rarely.
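For what it's worth, the quoted overhead is easy to see from a REPL (CPython; exact byte counts vary by version and platform):

```python
import sys

print(sys.getsizeof(1))        # ~28 bytes on 64-bit CPython: the object header dwarfs the 4-byte payload
print(sys.getsizeof(10**100))  # arbitrary precision, so the size grows with the value

a, b = 1000, 2000
print(a + b is a + b)          # usually False: each addition allocates a fresh int object
                               # (3000 is outside the small-integer cache)
```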
The dynamism exists to support the object model. That's the actual dependency. Monkey-patching, runtime class mutation, vtable dispatch. These aren't language features people asked for. They're consequences of building everything on mutable objects with identity.
Strip the object model. Keep Python.
You get most of the speed back without touching a compiler, and your code gets easier to read as a side effect.
I built a demo: Dishonest code mutates state behind your back; Honest code takes data in and returns data out. Classes vs pure functions in 11 languages, same calculation. Honest Python beats compiled C++ and Swift on the same problem. Not because Python is fast, but because the object model's pointer-chasing costs more than the Python VM overhead.
Don't take my word for it. It's dockerized and on GitHub. Run it yourself: honestcode.software, hit the Surprise! button.
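Not the linked demo, but a toy sketch of the distinction being described, for anyone who hasn't clicked through:

```python
# "dishonest": state mutated behind the caller's back, hidden inside an object
class Account:
    def __init__(self, balance):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount  # the caller can't see this change in the call itself

# "honest": data in, data out, nothing mutated
def deposit(balance, amount):
    return balance + amount

acct = Account(100)
acct.deposit(50)
print(acct.balance)      # 150, via hidden mutation
print(deposit(100, 50))  # 150, with no hidden state
```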
I've always thought the flexibility should allow python to consume things like gRPC proto files or OpenAPI docs and auto-generate the classes/methods at runtime as opposed to using codegen tools. But as far as I know, there aren't any libraries out there actually doing that.
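A rough sketch of what that could look like -- hypothetical spec and names, not an existing library -- using plain type() to build classes from a parsed API description at runtime:

```python
# Pretend this dict came from parsing an OpenAPI document or a .proto file
spec = {
    "Pet": {"fields": ["id", "name"], "ops": {"fetch": "GET /pets/{id}"}},
}

def make_client_class(name, desc):
    def __init__(self, **kw):
        for field in desc["fields"]:
            setattr(self, field, kw.get(field))

    def make_op(route):
        def op(self):
            return f"would call {route}"  # a real client would issue the HTTP request here
        return op

    namespace = {"__init__": __init__}
    namespace.update({op_name: make_op(route) for op_name, route in desc["ops"].items()})
    return type(name, (), namespace)

Pet = make_client_class("Pet", spec["Pet"])
print(Pet(id=1, name="Rex").fetch())  # "would call GET /pets/{id}"
```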
Generating code at runtime is often an anti-goal because you can’t easily introspect it. “Build-time” generation gives you that, and people often choose to go further and check the generated code into source control to be able to see the change history.
But for things like DAG systems, it would be great to be able to upload a new API definition and have it immediately available instead of having to recompile anything in the backend.
There are some use cases for very dynamic code, like ORMs; with descriptors you can add attributes + behavior at runtime and it's quite useful.
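A tiny sketch of the descriptor trick being described (illustrative, not any particular ORM's API):

```python
class Column:
    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, obj, objtype=None):
        return obj.__dict__.get(self.name)

    def __set__(self, obj, value):
        print(f"dirty-tracking {self.name}")  # the hook where an ORM would record the change
        obj.__dict__[self.name] = value

class User:
    name = Column()

u = User()
u.name = "Ada"  # goes through Column.__set__, so the "ORM" sees the write
print(u.name)   # "Ada", via Column.__get__
```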
Anyways, breaking metaprogramming and more dynamic features would mean Python 4, and we know how 2 -> 3 went. I also don't think it's where the core developers are going. Also also, there are other things I'd change before going after monkey patching, like some scoping rules, mutable default arguments, better async ergonomics, etc.
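(The mutable-default gotcha mentioned above, for anyone who hasn't been bitten by it:)

```python
def append_to(item, bucket=[]):  # the list is created once, at def time
    bucket.append(item)
    return bucket

print(append_to(1))  # [1]
print(append_to(2))  # [1, 2] -- the same list is shared across calls
```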
Significant AI smell in this write up. As a result, my current reflex is to immediately stop reading. No judgement on the actual analysis and human effort which went in. It’s just that the other context is missing.
I didn't notice any signs of AI writing until seeing this comment and re-reading (though I did notice it on the second pass).
That said, I think this article demonstrates that focusing on whether or not an article used AI might be focusing on the wrong “problem.” I appreciate being sensitive to the "smell" (the number of low-effort, AI posts flying around these days has made me sensitive too), but personally, I found this article both (1) easy to read and (2) insightful. I think the number of AI-written content lacking (2) is the problem.
I have the same issue now. It's especially annoying when it happens while reading a "serious" publication like a newspaper or long form magazine. Whether it was because an AI wrote it, or because "real" writers have spent so much time reading AI slop they've picked up the same style, is kinda by the by. It all reads to me like SEO, which was the slop template that LLMs took their inspiration from, apparently. It just flattens language into the most exhausting version of it, where you need to subconsciously blank out all the unnecessary flourishes and weird hype phrases to figure out what is actually trying to be said. I guess humans who learn to ignore it might do better in this brave new world, but it's definitely annoying that humans are being forced to adapt to machines instead of the other way around.
I also seem to be developing an immune response to several slopisms. But the actual content is useful for outlining tradeoffs if you need to make your Python code go faster.
I don't think it should be conflated with auto generated AI slop. I see a lot of snippets which were clearly manually written. I'm assuming the author used AI in a supervised manner, to smooth out the writing process and improve coherency.
Seeing Graal and PyPy beat the GCC C versions suggests to me there's something wrong with the C version. Perhaps it needs -march=native or there's something else off. The C version would be a different implementation in the Benchmarks Game, but usually those are highly optimised.
Edit: looking at [1], the top C version uses x86 intrinsics; perhaps the article's writer had to find a slower implementation to have it running natively on his M4 Pro? It would be good to know which C version he used, there are a few at [1]. The N-body benchmark is one where they specify that the same algorithm must be used for all implementations.
[1] https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
The nbody sim at least is forced to use the same algorithm. It seems unlikely that an optimised PyPy (non-BLAS) implementation beats an optimised C implementation by 20x.
Great write up and a recognisable performance story. For a pipeline with many (~50) build dependencies, unfortunately, switching interpreters or experimenting with free threading is not an easy route as long as packages are not available for them (which is completely understandable).
I’m not one of these rewrite-in-Rust types, but some isolated jobs are just so well suited to full-control systems programming that delegating them to Rust is worth the investment imo.
Another part worth investigating for IO-bound pipelines is different multiprocessing techniques. We recently got a boost from using ThreadPoolExecutor over standard multiprocessing, plus careful profiling to identify which tasks are left hanging and are best allocated their own worker. The price you pay, though, is shared memory -- so no automatic thread safety -- which only works if your pipeline can be staggered.
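The pattern described is roughly this (a sketch with a stand-in task; the real pipeline's stages and worker counts are obviously workload-specific):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import time

def io_task(i):
    time.sleep(0.1)  # stand-in for an HTTP call, file read, or DB query
    return i * i

with ThreadPoolExecutor(max_workers=16) as pool:
    futures = [pool.submit(io_task, i) for i in range(100)]
    results = [f.result() for f in as_completed(futures)]

# Threads share one address space, which is the "shared memory, no thread safety"
# price mentioned above: any shared mutable state between stages needs care.
print(len(results))
```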
the optimization ladder is just the five stages of grief but for python developers. denial ("it's fast enough"), anger ("why is this so slow"), bargaining ("maybe if I use numpy"), depression ("I should rewrite this in rust"), acceptance ("actually cython is fine")
People here on HN have in the past suggested that TypeScript is the superior-in-all-ways, just-as-easy/fun-to-code-in language and should replace Python in pretty much all use cases.
Anyone have an opinion on how TS would fare in this comparison?
All the approaches beyond PyPy are to either use a different lang that's superficially similar to python or to write a native extension for python in a different language, which is at odds with the stated premise.
Python is perfect as a "glue" language. "Inner loops" that have to run efficiently are not where it shines; I would write those in C or C++ and glue them together with Python for access to the huge library base.
This is the "two language problem" ( I would like to hear from people who extensively used Julia by the way, which claims to solve this problem, does it really ?)
As a sibling comment mentions, yes it does. Just don’t expect to have code that runs as fast as C without some effort put into it. You still need to write your program in a static enough way to obtain those speeds. It’s not the easiest thing in the world, since the tooling is, yes, improving, but is still not quite there yet.
If you then want fully trimmed, small executables, you have to start writing Julia similarly to how you write Rust.
To me the fact that this is even possible blows my mind and I have tons of fun coding in it. Except when precompiling things. That is something that really needs to be addressed.
I have used Julia for my main language for years. Yes, it really does solve the two language problem. It really is as fast as C and as expressive as Python.
It then gives you a bunch of new problems. First and foremost that you now work in a niche language with fewer packages and fewer people who can maintain the code. Then you get the huge JIT latency. And deployment issues. And lack of static tooling which Rust and Python have.
For me, as a research software engineer writing performance sensitive code, those tradeoffs are worth it. For most people, it probably isn’t. But if you’re the kind of person who cares about the Python optimization ladder, you should look into Julia. It’s how I got hooked.
This problem has been solved already by Lisp, Scheme, Java, .NET, Eiffel, among others, with their pick and choose mix of JIT and AOT compiler toolchains and runtimes.
If you're patching hot paths with C and praying the interface layer doesn't explode, you can spend almost as much time chasing ABI boundary bugs as you save on perf. Type hints in Python are still docs for humans and maybe your LSP. Julia does address the two language problem in theory, but getting your whole stack and your deps to exist there is its own weird pain, and people underplay how much library inertia matters once you leave numerics.
Kudos for going through all the existing JIT approaches instead of reaching for "rewrite it in X" straight away.
However if Rust with PyO3 is part of the alternatives, then Boost.Python, cppyy, and pybind11 should also be accounted for, given their use in HPC and HFT integrations.
Surprised Python is only 21x slower than C for tree traversal stuff. In my experience that's one of the most painful places to use Python. But maybe that's because I use numpy automatically when simple arrays are involved, and there's no easy path for trees.
Be careful with that, numpy arrays can be slower than Python tuples for some operations. The creation is always slower and the overhead has to be worth it.
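A quick way to see that creation overhead (numbers are machine-dependent; the snippet is just illustrative):

```python
import timeit

setup = "import numpy as np; data = (1.0, 2.0, 3.0)"

print(timeit.timeit("(data[0] + 1, data[1] + 1, data[2] + 1)", setup=setup, number=200_000))
print(timeit.timeit("np.array(data) + 1", setup=setup, number=200_000))
# for three elements, building the ndarray usually costs far more than the tuple arithmetic
```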
You can turn trees into numpy-style matrix operations because graphs and matrices are two sides of the same coin. I don't see the code for the binary-tree benchmark in the repo to see how it's written, but there are libraries like graphblas that use the equivalence for optimization.
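A minimal sketch of that idea -- not the benchmark's binary-trees code -- storing a complete binary tree as flat arrays so a pass over it becomes vectorized index arithmetic:

```python
import numpy as np

depth = 20
n = 2**depth - 1                     # complete binary tree, nodes 0..n-1
values = np.arange(n, dtype=np.int64)

# children of node i live at 2*i + 1 and 2*i + 2
internal = np.arange((n - 1) // 2)   # indices of nodes that have both children
subtree_sum = values[internal] + values[2 * internal + 1] + values[2 * internal + 2]
print(subtree_sum[:3])
```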
> Missing @cython.cdivision(True) inserts a zero-division check before every floating-point divide in the inner loop. Millions of branches that are never taken.
I thought never taken branches were essentially free. Does this mean something in the loop is messing with the branch predictor?
They're cheap but not free, especially at the front end of the CPU, where it's just a lot more instructions to churn through. What the branch predictor gets you is that it turns branches, which would normally cause a pipeline bubble, into something executed like straight-line code if they're predicted right. It's a bit like a tracing JIT. But you will still have a bunch of extra instructions to, like, compute the branch predicate.
Worse, IMO, is the never-taken branch taking up space in branch prediction buffers, which will cause worse predictions elsewhere (when this branch's IP collides with a legitimate IP). Unless I missed a subtlety and never-taken branches don’t get assigned any resources until they are taken (which would be pretty smart actually).
From when I was working on optimizing one or two things with Cython years ago, it wasn’t per se the branch cost that hurt: it was impeding the compiler from various loop optimisations, potentially being the thing that stops it from going all the way to auto-vectorisation.
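For reference, the directive being discussed looks like this in Cython's pure-Python mode (a sketch of the mechanism, not the article's actual kernel):

```python
import cython

@cython.cdivision(True)  # drop the ZeroDivisionError check before each divide
def mean_ratio(a: cython.double, b: cython.double, n: cython.int) -> cython.double:
    total: cython.double = 0.0
    i: cython.int
    for i in range(n):
        total += a / b  # with the directive, this compiles to a bare C division
    return total / n
```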
That has been a thing forever; many "Python" libraries are actually bindings to C, C++ and Fortran.
The culture of calling them "Python" is one reason why JITs have such a hard time gaining adoption in Python. The problem isn't the dynamism (see Smalltalk, SELF, Ruby, ...), rather the culture of rewriting code in C, C++ and Fortran and still calling it Python.
Maybe with LLM/code assistance this effort shrinks? Since we're mostly talking mathematics here, you have well defined algorithms that don't need to be "vibed". The codegen, hopefully, is consistent.
Go and Java/C# (if you forgo all the OOP nonsense) aren't much harder to write than Python, and you get far better performance. Not all the way to Rust level, but close enough for most things with far less complexity.
As an AI engineer I kinda wish the community had landed on Go or something in the early days. C# would also be great, although it tends to be pretty verbose.
Python just has too strong network effects. In the early days it was between Python and Lua (anyone remember torchlua?). Go was very much still getting traction and in development.
There's also the strong association of Go with Google, C# with Microsoft, and Java with Oracle...
Here is a Python AST parser written in V. It's targeting a dialect that's mostly compatible with a static subset of Python 3, but will break compatibility where necessary. In this case pattern matching, possibly elsewhere.
Never heard of this language, but it looks interesting. Very modern, certainly. One thing that stood out to me is that there's apparently the ability to write a bare `for` loop...? Is that just equivalent to while (true) in other languages?
You know, I really should try out F# some time. I always preferred C# over Java, and I have some Scala experience (that wasn't overall very pleasant, but it was fun to use an FP language).
One thing with Python is that usually I will use one of the many C based libraries to get reasonable speed and well thought out abstractions from the start. I architect around numpy, scipy, shapely, pandas/polars or whatever. So my code runs at reasonable speed from the start. But transpiling to Rust then effectively means a complete redesign of the code, data structures, algorithms etc. And I have seen the AI tools really struggle to get it right, as my intent gets lost somewhere.
So what I do now (since Claude Code) is write a really bare bones (and slow) pure Python implementation (like I used to do for numba, pypy or cython ready code), with minimal dependencies. Then I use the REPL, notebooks and nice plotting tools to get a real understanding of the problem space and the intricacies of my algorithm/problem at hand. When done, I let Claude add tests and I ask it to transpile to equivalent Rust and boom! a flawless 1000x speed upgrade in minutes.
The great thing is I don't need to do the mental gymnastics to vectorize code in a write-only mode like I've had to do since my Matlab days. Instead I can write simple-to-read for loops that follow my intent much better, and result in much more legible code. So refreshing!
And with PyO3 I can still expose the Rust lib to Python, and continue to use Python for glue and plotting.
>The usual suspects are the GIL, interpretation, and dynamic typing. All three matter, but none of them is the real story. The real story is that Python is designed to be maximally dynamic -- you can monkey-patch methods at runtime, replace builtins, change a class's inheritance chain while instances exist -- and that design makes it fundamentally hard to optimize.
OK, I guess the harder question is: why isn't Python as fast as JavaScript?
Beyond the economic arguments, there’s a lot in JS that actually makes it a lot easier: almost all of the operators can only return a subset of the types and cannot be overridden (e.g., the binary + operator in JS can only return a string or a number primitive), the existence of string and number primitives dramatically reduces the amount of dynamic behaviour they can have, and only Proxy objects can exhibit the same amount of dynamism as arbitrary Python ones (and thus only they pay the performance cost)…
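By contrast, Python's + can be made to return literally anything, which is part of what an optimizer has to defend against (a quick illustration, not from the article):

```python
class Weird:
    def __add__(self, other):
        return {"sum": other}  # + returning a dict is perfectly legal in Python

print(Weird() + 41)  # {'sum': 41}
```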
> OK, I guess the harder question is: why isn't Python as fast as JavaScript?
Actually there is a pretty easy answer: worldwide, the amount of javascript being evaluated every day is many orders of magnitude higher than the amount of python. The amount of money available for optimizing it has thus been many orders of magnitude higher as well.
I don’t think the answer is that easy. Python is typically run on the server and JavaScript client-side, which means the incentives are actually aligned to optimize Python rather than JavaScript. I think investment in each follows those incentives, and the difference is more that JavaScript runs in an isolated environment with a more flexible runtime.
Nah, although his answer wasn’t exactly what I’m looking for, he’s not wrong. Python is not optimized because on the backend you can just switch to another language that’s faster. That’s the economically optimal thing to do. Can’t do that on the frontend.
A personal opinion: I would much prefer to read the rough, human version of this article than this AI-polished version. I'm interested in the content and the author clearly put thought and effort into it, but I'm constantly thrown out of it by the LLM smell. (I'm also a bit mad that `--` is now on the em dash treadmill and will soon be unusable.)
I'm not just saying this to vent. I honestly wonder if we could eventually move to a norm where people publish two versions of their writing and allow the reader to choose between them. Even when the original is just a set of notes, I would personally choose to make my own way through them.
json.loads is something you don't want to use in a loop if you care about performance at all. Just switching to orjson can give you 3x the speed without needing to change anything else.
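A minimal before/after, assuming orjson is installed (the one visible difference to watch for is that orjson.dumps returns bytes rather than str):

```python
import json
import timeit

import orjson

payload = '{"id": 123, "name": "widget", "tags": ["a", "b", "c"], "price": 9.99}'

t_std = timeit.timeit(lambda: json.loads(payload), number=100_000)
t_or = timeit.timeit(lambda: orjson.loads(payload), number=100_000)
print(f"json:   {t_std:.3f}s")
print(f"orjson: {t_or:.3f}s")  # typically several times faster on small documents
```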
The replacement of emdashes with double hyphens here is almost insulting. A look through the blog history suggests that the author has no issue writing in English normally, and nothing seems really off about the actual findings here (or even the speculation about causes etc.), so I really can't understand the motivation for LLM-generated prose. (The author's usual writing style appears to have some arguable LLM-isms, but they make a lot more sense in context and of course those patterns had to come from somewhere. The overall effect is quite different.)
Edit: it's strange to get downvoted while also getting replies that agree with me and don't seem to object.
(Also, I thought it wasn't supposed to be possible to edit after getting a reply?)
Yeah, while reading I just didn't understand how you end up with an LLM writing the article. Clearly, the data and writeup are real. But was it "edited" with an LLM? It looks closer to ~the entire thing being LLM written. I finished reading because the topic is interesting, but the LLM writing style is difficult to bear... and I agree with your point that trying to fool us that it's human with `--` is just absurd.
Shockingly good article — correct identification of the root cause of performance issues being excessive dynamism and ranking of the solutions based on the value/effort ratio. Excellent taste. Will keep this in my back pocket as a quick Python optimization reference.
It's just somewhat unfortunate that I have to question every number and fact presented since the writing was clearly at least somewhat AI-assisted with the author seemingly not being upfront about that at all.
Being upfront about AI-assistance or no AI-assistance doesn't mean shit. Whether AI was involved is independent of what they state and there's no real way to fully prove otherwise.
Because for 99% of cases Python is fast enough and it's fast as fuck to code in. And for the 1% that aren't, you have 50 different flavors of making it faster. The final one being "slap pybind on a C module to do the hot path in C", which lets you minimize the suffering of C into a single high value location. And the rest of the code still gets to be Python.
In my experience it's no faster to develop in than other, better languages like Go, Rust or Kotlin.
> And for the 1% that aren't, you have 50 different flavors of making it faster.
Only for numerical code. You can't use something like Numpy to make Django or Mercurial faster.
And even when you could feasibly do the thing that everyone says to do - move part of your code to a faster language - the FFI is so painful (it always is) that you are much better just doing everything in that faster language from the start.
All of the effort you have to go through to make Python not slow is far less work than just "don't use Python". You can write Rust without thinking about performance and it will automatically be 20-200x faster than Python.
I actually did rewrite a Python project 1:1 in Rust once and it was approximately 50x faster. I put no effort into optimising the Rust code.
> looks inside
> the reference implementation of language is slow
Despite its content, this blogpost also pushes this exact "language slow" thinking in its preamble. I don't think nearly enough people read past introductions for that to be a responsible choice or a good idea.
The only thing worse than this is when Python specifically is outright taught (!) as an "interpreted language", as if an implementation detail like that was somehow a language property. So grating.
While I sympathize (and have said similar in the past), language design can (and in Python's case certainly does) hinder optimization quite a bit. The techniques that are purely "use a better implementation" get you not much further than PyPy. Further benefits come from cross-compilation that requires restricting access to language features (and a system that can statically be convinced that those features weren't used!), or indeed straight up using code written in a different language through an FFI.
But yes, the very terminology "interpreted language" was designed for a different era and is somewhere between misleading and incomprehensible in context. (Not unlike "pass by value".)
Absolutely, no doubt about that. I just find it a terrible angle to approach from, in general as well as specifically in this case: swapping out CPython with PyPy, GraalPy, Taichi, etc. - as per the post - requires no code changes, yet results in leaps and bounds faster performance.
If switching runtimes yields, say, 10x perf, and switching languages yields, say, 100x, then the language on its own was "just" a 10x penalty. Yet the presentation is "language is 100x slower". That's my gripe. And these are apparently conservative estimates as per the tables in the OP.
Not that measuring "language performance" with numbers would be a super meaningful exercise to begin with, but still. The fact that most people just go with CPython does not escape me either. I do wonder though if people would shop around for alternative runtimes more if the common culture was more explicitly and dominantly concerned with the performance of implementations, rather than of languages.
The problem is a lot of software is written to be run by people other than the developer, and usually you don't get a say in the implementation in those cases.
I must admit that I'm amused by the people who find the writeup useful but are turned off by the AI "smell". And look forward to the day when all valued content reeks of said "smell"; let's see what detractors-for-no-good-reason do then (yes I'm a bit ticked by the attitude).
Isn't this a depressing thought? Regardless of AI, to think that everything we read would come in the same literary style, conveying little of the author, giving no window through which to learn about who they are -- that would be a real loss.
Ultimately it’s up to the author to make that explicit choice. I think that AI does and will enhance writing and depth and breadth of analysis one could perform. But, to be trustworthy, people will need to either lay out all cards on the table and/or work on other ways to gain trust over time. Maybe people need to provide some context to communicate what model was used and in which ways. What % of final output is AI vs author. I mean, if I see 100% composed by human author stated somewhere then there’s my cue to at the very least learn a little about the author. Certainly more complexity and discernment for readers. Depressing? In some ways maybe; but I’m kind of optimistic. Imagine what Tolkien could worldbuild armed with AI.. but then it wouldn’t be Tolkien.
I find the style so reflexively grating that it's honestly hard for me to imagine others not being bothered by it, let alone being bothered by others being bothered.
Especially since I looked at previous posts on the blog and they didn't have the same problem.
The smell makes me suspicious because I don’t know how the author used AI.
If the author wrote a detailed rough draft, had AI edit, reviewed the output thoroughly, and has the domain knowledge to know if the AI is correct, then this could be a useful piece.
I suspect most authors _don’t_ fall in that bucket.
"I totally get a kick out of the peeps who find the writeup super helpful yet are totally put off by that distinct "AI smell"—it’s like they can't even! Just imagine when everything we value is woven into a tapestry of that same "smell"—where will all the naysayers retreat to then? It’s a little frustrating, honestly, and I’m just like, come on! Let’s delve into this new era of content and embrace the chaos!"
https://github.com/python/cpython/issues/139109
https://doesjitgobrrr.com/?goals=5,10
https://github.com/taichi-dev/taichi_benchmark
In Python 3.14 the support is there, but 2 years ago you could just import this library and it would just work normally.
Believe it or not, when you write a blog post in a language that isn't your first, it really helps to use an LLM, even just to fix your grammar mistakes etc.
I assume that’s most likely what happened here too.
I have no problem with people using AI, especially to close a language gap.
If you disclose your usage I have a _lot_ more trust that effort has been put into the writing despite the usage
> The remaining difference is noise, not a fundamental language gap. The real Rust advantage isn't raw speed -- it's pipeline ownership.
I’ve been scarred into detecting these things by my own AI usage.
https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
Some nuance: try transpiling to a garbage-collected, Rust-like language with fast compilation until you have millions of users.
Also use a combination of neural and deterministic methods to transpile depending on the complexity.
I don't know what languages you might have in mind. "Rust-like" in what sense?
V-lang is the one I'm tinkering with. It's like Rust in terms of pattern matching as an expression, sum types, and ?T instead of exceptions.
Like Go, it has short compile times.
I try to keep my argument abstract (that you need to lower Python to something intermediate before Rust) for that reason.
https://github.com/py2many/v-ast
I don't have a whole lot of experience hand-writing V-lang; it's mostly machine generated from static Python.
But I find it convenient for what it does: Go that is less verbose, a single binary for distribution, and fewer tokens if you're using an agent.
GitHub.com/LadybugDB/ladybug-vlang has a CLI I recently wrote with an agent for a database I maintain.
Static python with design by contract can be a stronger specification than natural language. @antirez was discussing this on his social media today.
If you're going to complain about some of those being slow, remember that they have various options spanning interpreter, bytecode, REPL, JIT and AOT.
I wish someone would write a stdlib without using it. My attempt from a few months ago is in a repo under the py2many org.
How can you suppose that this is not a good reason to object, especially days after https://news.ycombinator.com/item?id=47340079 ?
There, FTFY :D