The python people don't adhere to this principle and maybe I need to give up on it, I'm just sick of this crap.
Those of us of a certain vintage will recall the tumult of python-2 to -3. Suddenly, production code needed significant re-writing. We got an arguably better python out of it, but oh! the pain.
In 3.14, (among many other things) the Python developers decided to make 'forkserver' the default start method instead of 'fork' for multiprocessing (this governs how Process() launches a child process - https://docs.python.org/3/library/multiprocessing.html). Why on earth break our code in such a wanton way? Why not leave the default alone - there was always the option to use 'forkserver' if one wanted it? Or maybe they could have created a new entrypoint with the new behaviour, Process_forkserver() or some such. Oh no! Just break it and make their customers patch furiously!
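For anyone bitten by this, a minimal sketch of opting back in to the old behaviour explicitly (an assumption: you're on a POSIX platform where 'fork' is still available - it remains an option, just no longer the default):

```python
import multiprocessing as mp

def work(x):
    return x * 2

if __name__ == "__main__":
    # Pin the pre-3.14 behaviour explicitly, so the code behaves
    # the same on 3.13 and 3.14+. 'fork' is POSIX-only, hence the check.
    if "fork" in mp.get_all_start_methods():
        mp.set_start_method("fork")
    with mp.Pool(2) as pool:
        print(pool.map(work, [1, 2, 3]))  # [2, 4, 6]
```

set_start_method() can only be called once per process, which is exactly why a changed default is so invasive: there is no per-library escape hatch short of auditing every call site.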
When we adopt a language, we like to think that what runs today will run tomorrow - C and bash programs that I wrote 30 years ago still run. Not with python - if you use it, buckle up and make sure your regression tests are thorough, it'll be a rough ride.
Move slow and break things, perhaps?
Just a list of bad/wrong decisions IMHO:
- Reference counting instead of using a real garbage collector
- The pyproject.toml format is under-specified and comes decades too late, for a problem that Apache Maven solved well enough more than two decades ago
- The absolutely weak support for functional programming, which was later patched over with list comprehensions and co.
- venv and other solutions to isolate dependencies for a project
Python is successful because of its community support and bindings to almost everything, and this sadly outweighs a lot of poor choices by the language designers and the implementation. I'm just always frustrated that during the great 2-to-3 breakup they didn't try to fix more of the known issues (which, again, other communities had solved decades before) instead of breaking the world and still managing to half-ass it.
fork() is very hard to get right, as it keeps a lot of state around from the parent process that you have to cleanly discard...
With fork, you could pass objects that couldn't be pickled (lambdas, local functions, file handles, database connections). With forkserver, everything must be pickleable. That alone breaks thousands of repos' worth of code.
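The difference is easy to demonstrate with pickle alone, since forkserver (like spawn) ships arguments to the child via pickle. A sketch - can_pickle is just a hypothetical helper, not a stdlib function:

```python
import pickle

def can_pickle(obj):
    """Return True if obj can be serialized with pickle."""
    try:
        pickle.dumps(obj)
        return True
    except Exception:
        return False

print(can_pickle(len))          # True  - top-level functions pickle by name
print(can_pickle(lambda x: x))  # False - a lambda has no importable name
```

Under fork, none of this mattered: the child got a copy of the parent's memory, lambdas, open handles and all. Under forkserver, anything you pass to Process() has to survive that round-trip.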
You can no longer rely on module-level globals being inherited by the child process, so it fundamentally changes how code that shared state that way behaves.
It launches a server process with some extra machinery at runtime - startup cost and hidden complexity just snuck into your app without you knowing.
forkserver may be technically a better choice. But that's irrelevant. Changing the default breaks existing code.
Forkserver is probably a better default: inheriting file handles, globals, and sockets leads to a bunch of subtle bugs - I'm not sure that inheritance is even a good feature, but ymmv.
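One classic example of those subtle bugs, sketched with os.fork() directly (assumes a POSIX system): data sitting in a user-space file buffer at fork time is inherited by the child, so it gets flushed to disk twice.

```python
import os
import tempfile

def double_flush_demo():
    # Write into a buffered file, then fork before flushing: the child
    # inherits a copy of the unflushed buffer, and both processes flush
    # it on close, so the line lands in the file twice.
    fd, path = tempfile.mkstemp()
    os.close(fd)
    f = open(path, "w")
    f.write("hello\n")      # still sitting in the userspace buffer
    pid = os.fork()
    f.close()               # parent AND child each flush "hello\n"
    if pid == 0:
        os._exit(0)         # child exits immediately after its flush
    os.waitpid(pid, 0)      # wait so both flushes have landed
    with open(path) as g:
        return g.read()

if __name__ == "__main__":
    print(repr(double_flush_demo()))  # 'hello\nhello\n'
```

Locks held at fork time, shared socket buffers, and database connections all have analogous failure modes, which is the usual argument for forkserver/spawn starting the child from a clean slate.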
And fork() is still available, so if the change breaks things, the solution is to explicitly ask for fork() - though I'd say for most casual uses of multiprocessing, a user won't know one way or the other, which is what I meant by transparent.
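If you'd rather not change the process-wide default, get_context() lets a single call site ask for fork explicitly while the rest of the program keeps the new behaviour. A sketch, assuming a POSIX platform - pick_method is an illustrative helper, not part of the stdlib:

```python
import multiprocessing as mp

def pick_method(preferred="fork"):
    # Fall back to the platform default if 'fork' isn't available
    # (e.g. on Windows). get_all_start_methods() lists the default first.
    methods = mp.get_all_start_methods()
    return preferred if preferred in methods else methods[0]

def square(x):
    return x * x

if __name__ == "__main__":
    # A context is scoped: it doesn't touch the global default,
    # so other libraries in the same process are unaffected.
    ctx = mp.get_context(pick_method())
    with ctx.Pool(2) as pool:
        print(pool.map(square, [1, 2, 3]))  # [1, 4, 9]
```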
The more I use Python in a professional environment the less I think it is suited for professional environments.