This might offend some people, but even Linus Torvalds thinks ABI compatibility in Linux distros isn't good enough, and that this is one of the main reasons Linux is not popular on the desktop. https://www.youtube.com/watch?v=5PmHRSeA2c8&t=283s
Kind of funny to realize, the NT kernel ABI isn’t even all that stable itself; it's just wrapped in a set of very stable userland exposures (Win32, UWP, etc.), and it's those exposures that Windows executables rely on. A theoretical Windows PE binary that was 100% statically linked (and so contained NT syscalls directly) wouldn't be at all portable between different Windows versions.
Linux with glibc is the complete opposite; there really does exist old Linux software that statically links everything down to libc, interacting with the kernel purely through syscalls. And it (almost always) still works to run such software on a modern Linux, even when the software is 10-20 years old.
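For illustration, here's a minimal sketch of what such a binary boils down to (my own example, assuming x86-64 Linux and GCC, built with gcc -static -nostdlib): a program that talks to the kernel purely through the syscall ABI, with no libc at all.

```c
/* Sketch: depend only on the stable Linux syscall ABI (x86-64).
   Build: gcc -static -nostdlib -o hello hello.c
   The syscall numbers (write = 1, exit = 60) and the register
   calling convention are part of the kernel's stability promise. */
static long sys_write(int fd, const void *buf, unsigned long len) {
    long ret;
    __asm__ volatile ("syscall"
                      : "=a"(ret)
                      : "a"(1), "D"(fd), "S"(buf), "d"(len)  /* __NR_write */
                      : "rcx", "r11", "memory");
    return ret;
}

void _start(void) {
    sys_write(1, "hello from a raw syscall\n", 25);
    __asm__ volatile ("syscall" : : "a"(60), "D"(0));        /* __NR_exit */
}
```

A binary built like this years ago should still run today, which is exactly the contrast with the NT syscall layer.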
I guess this is why Linux containers are such a thing: you’re taking a dynamically-linked Linux binary and pinning it to a particular entire userland, such that when you run the old software, it calls into the old glibc. Containers work, because they ultimately ground out in the same set of stable kernel ABI calls.
(Which, now that I think of it, makes me wonder how exactly Windows containers work. I'm guessing each one brings its own NTOSKRNL, which gets spun up under Hyper-V if the host kernel ABI doesn't match the guest?)
> Kind of funny to realize, the NT kernel ABI isn’t even all that stable itself
This is not a big problem if it's hard/unlikely enough to write code that accidentally relies on raw syscalls. At least MS's dev tooling doesn't provide an easy way to bypass the standard DLLs.
> makes me wonder how exactly Windows containers work
I'd guess containers do their syscalls through the standard Windows DLLs like any regular userspace application. If it's a Linux container on Windows, probably through the WSL syscalls, which I'd guess are stable.
Ask your friend if he would CC0 the quote or similar (not sure if that's possible, but still); I can imagine this being a quote on t-shirts xD
Honestly I might buy a T-shirt with such a quote.
I think glibc is such a pain that it's the reason we have so many vastly different package managers. Non-glibc approaches really would simplify package management on Linux. The problem feels solved, but there are definitely still issues with the current approach, and I think we should all keep looking for ways to solve it.
AppImage has some issues/restrictions, like an AppImage not running on an older Linux than the one it was compiled on, so people build them on the oldest machines they can, plus a few more quirks like that.
AppImages are really good, but zapps are good too. I once tried to build something on top of zapps, but it's a shame the project went down the crypto/IPFS route or something, and I don't really see any development on it now. It would be interesting if someone could add zapps' features to AppImage, or pick the project back up and build something similar.
This is really cool. Looks like it has a way for me to use my own dynamic linker and glibc version *.
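* https://github.com/warptools/ldshim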
At some point I've got to try this. I think it would be nice to have some tools to turn an existing program into a zapp (there are many such tools for making AppImages today).
> At some point I've got to try this. I think it would be nice to have some tools to turn an existing program into a zapp (there are many such tools for making AppImages today).
Looks like you met the right guy because I have built this tool :)
Allow me to show my project, Appseed (https://nanotimestamps.org/appseed). It's a simple fish script which I prototyped (with Claude) some 8-10 months ago to solve exactly this.
There's a YouTube video on the website, and the repository is open source on GitHub too.
It actually worked fantastically for a lot of different binaries that I tested it on. I had posted it on Hacker News as well, but nobody really responded; perhaps this might change that :p
What Appseed does: you can think of it as taking a binary and converting it into two folders, one holding the dynamic libraries and the other the binary itself.
You can then use something like tar to package it up and run it anywhere. I could of course have produced a single ELF64 instead, but I wanted to keep it flexible, so we could add things like shared dynamic libraries or caching or other ideas, and it kept things simple for me too.
ldshim is a really good idea too, although I don't think I understand it yet; I'll try to. I would really appreciate it if you could tell me more about ldshim! Perhaps take a look at Appseed too; I think there might be some similarities, except I tried to just make a fish script that converts (usually) any dynamic binary into a static-ish one.
I just want more people to take ideas like Appseed or zapps and run with them to make the Linux ecosystem better, man. I only prototyped it with LLMs to see if it was possible, since I don't have much expertise in the area, so I can only imagine what's possible if people with real expertise take it on; that's why I created and shared it in the first place.
Let me know if you're interested in discussing anything about Appseed. My memory's a little rusty about how it worked, but I would love to talk about it if I can be of any help :p
Have a nice new year, man! :p
Interesting. I've had a hell of a time building AppImages for my apps that work on Fedora 43. I've found bug reports from people with similar challenges, but it's bizarre, because I use plenty of AppImages on F43 that work fine. I wonder if this might be a clue.
I can only speak for Flatpak, but I found its packaging workflow and restricted runtime terrible to work with. Lots of undocumented/hard to find behaviour and very painful to integrate with existing package managers (e.g. vcpkg).
Yeah, flatpak has some good ideas, and they're even mostly well executed, but once you start trying to build your own flatpaks or look under the hood there's a lot of "magic". (Examples: Where do runtimes come from? I couldn't find any docs other than a note that says to not worry about it because you should never ever try to make your own, and I couldn't even figure out the git repos that appear to create the official ones. How do you build software? Well, mostly you plug it into the existing buildsystems and hope that works, though I mostly resorted to `buildsystem: simple` and doing it by hand.) For bonus points, I'm pretty sure 1. flatpaks are actually pretty conceptually simple; the whole base is in /usr and the whole app is in /app and that's it, and 2. the whole thing could have been a thin wrapper over docker/podman like x11docker taken in a slightly different direction.
You can build your own flatpak by wrapping bwrap, because that is what Flatpak does. Flatpak seems to have some "convenience things" like the various *-SDK packages, but I don't know how much convenience that provides.
The Flatpak ecosystem is problematic in that most packages are granted too many permissions by default.
Maybe it's better now in some distros, but I don't like Ubuntu's Snap packages. They typically start slower, use more RAM, require sudo privileges to install, and run in an isolated environment only on systems with AppArmor. Snap also tends to slow down boot and shutdown somewhat. People report issues like theming mismatches and permissions/file-access friction; the Firefox theming complaints are a common example. It's almost like running a Docker container for each application. Flatpaks seem slightly better, but still a bandaid. Nobody is going to actually fix the compatibility problems in Linux.
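You can still get Firefox as a .deb though:
https://launchpad.net/~mozillateam/+archive/ubuntu/ppa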
I think he still considers this to be the case. He was interviewed on Linus Tech Tips recently, and he bemoaned in passing the terrible application ecosystem on Linux.
It makes sense. Every distribution wants to be in charge of which set of libraries is available on their platform, and they all have their own way of managing software. Developing applications for Linux that can be widely used across distributions is way more complex than it needs to be. I can just ship a binary for Windows and macOS; for Linux, you need an rpm and a deb and so on.
I use DaVinci Resolve on Linux. The Resolve developers only officially support Rocky Linux, because anything else is too hard. I use it on Linux Mint anyway. The application has no title bar, and recording audio doesn't work properly. Bleh.
ABI is a far larger concept than the kernel UAPI. Remember that the OS includes a lot of things in userspace as well. Many of these things are not even stable between the various contemporary Linux distros, let alone older versions of them. This might include dbus services, fs layout, window manager integration, and all sorts of other things.
I agree 100% with Linus. I can run a WinXP exe on Win10 or 11 almost every time, but on Linux I often have to chase down versions that still work with the latest Mint or Ubuntu distros. Stuff that worked before just breaks, especially if the app isn’t in the repo.
You can also run a WinXP exe on any Linux distribution almost every time. That's the point of the project, and of Linus's quip: the only stable ABI around, on MS Windows and on Linux, is Win32. (BTW, I do not agree with this.)
I think it's not unlikely that we reach a point in a couple of decades where we are all developing Win32 apps while most people are running some form of Linux.
We already have an entire platform like that (the Steam Deck), and it's the best Linux development experience around, in my opinion.
Yes, and even the package-format situation is a hell of its own. Even on Ubuntu you have multiple package formats, and sometimes even multiple app stores (a GNOME one and an Ubuntu-specific one, if I remember correctly).
Even open-source software has to deal with the moving target that is ABI and API compatibility on Linux. OpenSSL's API versioning is a nightmare, for example, and it's the most critical piece of software to dynamically link (almost everything needs a crypto/SSL library).
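To make that concrete with one real example: OpenSSL 1.1.0 renamed the EVP_MD_CTX allocation functions, so code that wants to build against both the old and new API ends up carrying shims like this sketch:

```c
/* Compat shim for a real OpenSSL API break: EVP_MD_CTX_create()
   and EVP_MD_CTX_destroy() became EVP_MD_CTX_new() and
   EVP_MD_CTX_free() in OpenSSL 1.1.0. */
#include <openssl/evp.h>

#if OPENSSL_VERSION_NUMBER < 0x10100000L
#define EVP_MD_CTX_new()     EVP_MD_CTX_create()
#define EVP_MD_CTX_free(ctx) EVP_MD_CTX_destroy(ctx)
#endif
```

Multiply that by every renamed function and newly opaque struct and you get the 1.0 -> 1.1 -> 3.0 migration story.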
Stable ABIs for certain critical pieces of independently updatable software (libc, OpenSSL, etc.) are not even that big of a lift or a hard tradeoff. I've never run into any issues with macOS's libc, because it doesn't version the symbol for fopen the way glibc does. It just requires commitment and forethought.
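Who needs ABI compatibility when your software is OSS? You only need API compatibility at that point.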
So every Linux distribution should compile and distribute packages for every single piece of open source software in existence, both the very newest stuff that was only released last week, and also everything from 30+ years ago, no matter how obscure.
Because almost certainly someone out there will want to use it. And they should be able to, because that is the entire point of free software: user freedom.
Even if we ship source, even if the user has the skills to build it, even if the makefile supports every kernel version, plus all the other hardware variety, plus who knows how many dependencies, what exactly am I supposed to do when a user reports:
"I followed your instructions and it doesn't run".
Linux Desktop fails because it's not 1 thing, it's 100 things. And to get anything to run reliably on 95 of them you need to be extremely competent.
Distribution as source fails because there are too many unknown, interdependent parts.
Distribution as binary containers (Docker et al.) is popular because it gives the app a fighting chance, while at the same time being a really ugly hack.
Yep. But docker doesn’t help you with desktop apps. And everything becomes so big!
I think Rob Pike has the right idea with Go: just statically link everything wherever possible. These days I try to do the same, because so much less can go wrong for users.
People don't seem to mind downloading a 30 MB executable, so long as it actually works.
This might be why OpenBSD looks attractive to some. Its kernel and all the different applications are fully integrated with each other -- no distros! It also tries to be simple, I believe, which makes it more secure and overall less buggy.
To be honest, I think OSes are boring, and should have been that way since maybe 1995. The basic notions haven't changed since 1970, and the more modern GUI stuff hasn't changed since at least the early '90s. Some design elements, like tree-like file systems, WIMP GUIs, per-user privileges, and the fuzziness of what an "operating system" even is and what its role is, are perhaps even arbitrary, but they can serve as a mature foundation for better-conceived ideas, the way ZFS (which implements, in a very well-engineered manner, the tree-like data storage that's been standard since the '60s) can serve as a foundation for Postgres (which implements a better-conceived relational design).
I'm wondering why OSS - which, according to one of its acolytes, makes all bugs shallow - couldn't make its flagship OS more stable and boring. Instead it's produced an anarchy of packaging systems, breaking upgrades and updates, an unstable glibc, desktop environments that are different and changing seemingly for the sake of it, sound that keeps breaking, power management iffiness, etc.
The difference is that you can statically link GTK+, and it'll work. You can't statically link glibc if you want to be able to resolve hostnames or users.
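The culprit is NSS: even a statically linked glibc dlopen()s the shared libnss_* plugins at runtime to honor /etc/nsswitch.conf. A small sketch of the kind of calls affected (gcc even warns about these at link time when building with -static):

```c
/* These calls go through NSS, so glibc loads libnss_files.so,
   libnss_dns.so, etc. at runtime per /etc/nsswitch.conf, even
   from a statically linked binary. */
#include <netdb.h>
#include <pwd.h>
#include <stdio.h>

int main(void) {
    struct addrinfo *res = NULL;
    if (getaddrinfo("example.com", NULL, NULL, &res) == 0)  /* hosts db */
        freeaddrinfo(res);
    struct passwd *pw = getpwnam("root");                   /* passwd db */
    if (pw)
        printf("root's uid: %d\n", (int)pw->pw_uid);
    return 0;
}
```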
GTK's update schedule is very slow, and you can run multiple major versions of GTK on the same computer, so that's not the right argument. When people say GTK's backwards compatibility is bad, they are referring in particular to its breaking changes between minor versions. It was common for themes and apps to break (or work differently) between minor versions of GTK+ 3, as deprecations were sometimes accompanied by the breaking of the deprecated code. (Anyway, before Wayland support became important, people stuck with GTK+ 2, which was simple, stable, and still supported at the time, and everyone had it installed alongside GTK+ 3.)
Breaking between major versions is annoying (2 to 3, 3 to 4), but for the most part it's renaming work and some slight API modifications, reminiscent of the Python 2 to 3 switch, and it only happened twice since 2000.
We definitely can, because almost every other POSIX libc doesn't have symbol versioning (or MSVC-style multi-version support). It's not like the behavior of "open" changes radically all the time, such that you need to know exactly which versioned symbol it linked against. It's really just an artifact of decisions from decades ago, and the cure is way worse than the disease.
The problem is not the APIs, it's symbol versions. You will routinely get loader errors when running software compiled against a newer glibc than what a system provides, even if the caller does not use any "new" APIs.
glibc-based toolchains are ultimately missing a GLIBC_MIN_DEPLOYMENT_TARGET definition that gets passed to the linker so it knows which minimum version of glibc your software supports, similar to how Apple's toolchain lets you target older macOS from a newer toolchain.
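Until something like that exists, the closest well-known workaround is pinning individual symbols to an old version with the .symver assembler directive; a sketch (GLIBC_2.2.5 is the x86-64 baseline version tag; other architectures differ):

```c
/* Sketch: force the link to resolve memcpy to the old versioned
   symbol, so the binary runs on pre-2.14 glibc even when built on
   a new system. memcpy is the classic example because glibc 2.14
   gave it a new version. Compile with -fno-builtin-memcpy so the
   call isn't inlined away. */
__asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

#include <stdio.h>
#include <string.h>

int main(void) {
    char dst[16];
    memcpy(dst, "portable", 9);  /* links as memcpy@GLIBC_2.2.5 */
    puts(dst);
    return 0;
}
```

It works, but doing this for every versioned symbol in a big program is exactly the kind of busywork a deployment-target flag would eliminate.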
Yes, that's why freezing the glibc symbol versions would help. If everybody uses the same version, you can't get conflicts (at least once it has rippled through and everybody is on the same version). The downside is that we couldn't add anything new to glibc, but given all the trouble it produces, I'd say that's worth accepting. We could still ship bug fixes and security fixes for glibc; we just wouldn't change the APIs of the symbols.
In principle you can patch your binary to accept the older local version, though I don't remember ever getting it to work right. Anyway, for the brave or foolhardy, here's the gist:
Someone please create a Windows 7-like user interface, or even an XP-like interface, and you've got yourself a serious fan.
I might seriously recommend it to newbies. There's just this love I have for Windows 7; even though I didn't really use it for much, it's so much more elegant in its own way than Windows 10.
It could be a really fun experiment, and I would be interested to see how it pans out.
It stuns me that a polished 1:1 2K/XP/7 clone DE (with which one it mimics as a setting) hasn't existed for 10+ years already. It's such an obvious target for a mass-appeal Linux desktop that many techies and non-techies alike would happily use.
Rough approximations have been possible since the early 2000s, but they’re exactly that: rough approximations. Details matter, and when I boot up an old XP/7 box there are aspects in which they feel more polished and… I don’t know, finished? Complete? Compared to even the big popular DEs like KDE.
Building a DE explicitly as a clone of a specific fixed environment would also do wonders to prevent feature creep and to encourage a focus on fixing bugs and optimization instead of bells and whistles, which is something modern software across the board could use an Everest-sized helping of.
Yeah, you raise some good points. Perhaps your comment/this discussion will get someone interested in this. I'm clearly not that educated about DE creation, but I'm sure some people could build it.
I think one source of friction could be ideological, if nothing else: most Linux folks love open source and hate Windows, so they might not want to build anything that even replicates its UI.
Listen, I hate Windows as much as the next guy, but I've got to give props: I feel nostalgic about Windows 7. If they provide both perfect .exe support and perfect Linux binary support, things could be really good. I hope somebody does it, and perhaps even adds it to loss32; that would be an interesting update.
XFCE plus a windows theme would get you pretty far. Is there anything specific you're thinking of which that plus some pre-configured Wine wouldn't hit?
Pro tip: if someone wants to create their own ISO, they can customize things imperatively in MX Linux, even just by booting it into RAM, and then they have the magnificent option of snapshotting the system and converting that into an ISO. So it's definitely possible to create an ISO tweaked down to your exact configuration without any hassle. (Trust me, it's the best way to create ISOs without too much hassle; if you want hassle, Nix or bootc seems to be the way to go.)
Regarding why it wouldn't hit: I don't know. I already build some of my own ISOs, and I could build one for this (on the MX Linux principle) and upload it for free on Hugging Face, perhaps. But the idea here is mass appeal.
Yes, I can do that, but I would prefer an ISO that just did this out of the box, which I could hand to someone new to Linux. And yes, I could have the new person make the changes themselves, but why? There's really no reason to, imo; this feels like a low-hanging fruit that nobody has picked, which is why I was curious.
But as the other comment pointed out: sure, we can approximate this today, yet there are genuine reasons to build the real thing as well. They give some good reasons, and I agree with them overall.
If you ask me, it would be fun to have more options, especially considering this is Linux, where freedom is celebrated :p
Crazy how, thanks to Wine/Proton, Linux is now more compatible with old Windows games than Windows itself. There are a lot of games from the 90s and even the 00s that require jumping through a lot of hoops to run on Windows, but through Steam they're click-to-play on Linux.
My gaming PC isn't compatible with Windows 11, so it was the first to get upgraded to Linux. Immediate and significant improvement in experience.
Windows kept bogging down the system trying to download a dozen different language versions of Word (for which I didn't have a licence and didn't want regardless). Steam kept going into a crash-restart cycle. The virus scanner was ... being difficult.
Everything just works on Linux, except some games on Proton have sound issues that I still need to work out.
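Is this 1998? Linux is forever having sound issues. Why is sound so hard?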
Sound (OSS, ALSA, PulseAudio, PipeWire...), Bluetooth, and WiFi are eternal problematic Linux paper cuts.
As always: it's not Linux's fault, but it is Linux's problem.
It's one of the reasons I moved to macOS plus a Linux virtual machine. I get the best of both worlds. Plus, the hardware quality of a MacBook Pro M4 Max with 128 GB of unified RAM is way beyond anything else on the market.
I think the situation has flipped in the past few years. Since Pipewire came out, I haven't had any problems with audio on Linux and I can dial the latency down to single-digit ms. Meanwhile, on Mac audio has gotten far worse, especially since Tahoe. The latency is tens of ms and I get crackling and skipping when there's high CPU usage.
Audio still breaks pretty regularly in DaVinci Resolve on Linux. Sometimes I need to restart the application to make audio work, and I can't record sound within Resolve at all.
It doesn't help that they only officially support Rocky Linux; I use Mint. I assume there are some magic PipeWire/ALSA/PulseAudio incantations I could run that would glue everything together properly, but I can't figure it out. It just seems so complicated.
In some games I get a crackle in the audio that I don't get in any native application, nor in some other games run through Proton. I don't know if that's what he means, but it hasn't bothered me enough to figure it out. I use Bluetooth headphones anyway; I'm relatively insensitive to audio fidelity.
Linux sound is fine at least for me. The problem is running Windows games in proton. Sound will suddenly stop, then come back delayed. Apparently a known issue on some systems.
The problem is games under Wine/Proton doing weird things with the sound, not the sound stack itself on modern Linux. Heck, I have fewer issues using audio gear, or just changing the volume, on Linux than on crappy Windows.
It kind of works both ways: just yesterday I tried to play the Linux-native version of 8bit.runner and it didn't work; I had to install the Windows (beta) version and run it through Proton.
Funny story: I use Anki (the flashcard program), and I run it on my NixOS laptop. There is a NixOS/nixpkgs package for Anki. It doesn't work. You know how I run Anki, which has a native GNU/Linux version and even an actual nixpkgs package, on my GNU/Linux NixOS laptop? Yeah, I run AnkiDroid, the Android version, through Waydroid. Because the Android version works.
Anki seems to be a habitual offender; I was never able to install it reproducibly and in an obvious way on several distros, and I always ended up building it from source.
Pretty much all the RenderWare-based GTAs have issues these days that only community-made patches can mitigate.
A recent example is that in San Andreas, the seaplane never spawns if you're running Windows 11 24H2 or newer. All of it due to a bug that's always been in the game, but only the recent changes in Windows caused it to show up. If anybody's interested, you can read the investigation on it here: https://cookieplmonster.github.io/2025/04/23/gta-san-andreas...
The last time I tried to run Tachyon: The Fringe was on Windows 10, and it failed. IIRC I could launch it and play, but there was a non-zero chance that an FMV cutscene would cause it to freeze.
I see there are guides on Steam forums on how to get it to run under Windows 11 [0], and they are quite involved for someone not overly familiar with computers outside of gaming.
Anything around DirectX 10 and older has issues on Windows these days.
One more popular example is Grid 2, another is Morrowind. Both crash on launch, unless you tweak a lot of things, and even then it won't always succeed.
Need for Speed II: SE is "platinum" on Wine, and pretty much unable to be run at all on Windows 11.
Windows used to be half operating system, half preconfigured compatibility tweaks for all kinds of applications. That's how it kept its backwards compatibility.
Lemmings Revolution. Apparently, running it on anything that is not Windows 95/98/Me requires some unofficial .EXE patch that you could once download from some shady website. The file is now nowhere to be found.
It's a great game; unfortunately I'm not able to play it anymore :( even though I have the original CD.
2. What causes it (the issues that make it such a challenge)
3. How it changed over the years, and its current state
4. Any serious attempts to resolve it
I've been on Linux for maybe two decades at this point. I haven't noticed any issues with the ABI so far, perhaps because I get everything from the distro repo or build and install it through the package manager. If I don't understand the problem, there are surely others who want to know about it too. (Not trying to brag here; I'm just referring to the time I've spent on it.)
I know this is a big ask. The best course for me is of course to research it myself. But those who know the whole history tend to have a well-organized perspective on it, as well as invaluable insights that are not recorded anywhere else. So if this describes you, please consider writing it down for others. A blog post is probably the best format for this.
The kernel is stable, but all the system libraries needed to make a graphical application are not. Over the last 20 years we've gone from GTK 2 to 4, X11 to Wayland, and Qt 4 to 6, with compatibility breakage at each change. Building an unmodified 20-year-old application from source is very likely not to work; running a 20-year-old binary, even less so.
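As a small illustration of the churn (my example, not the parent's): even "create a window" changed between GTK major versions, so code that wants to span GTK 3 and 4 needs guards like this:

```c
/* Sketch of GTK major-version breakage: gtk_window_new() lost its
   window-type argument in GTK 4, and gtk_widget_show_all() was
   removed outright. */
#include <gtk/gtk.h>

GtkWidget *make_window(void) {
#if GTK_MAJOR_VERSION >= 4
    GtkWidget *win = gtk_window_new();                     /* GTK 4 */
    gtk_window_present(GTK_WINDOW(win));
#else
    GtkWidget *win = gtk_window_new(GTK_WINDOW_TOPLEVEL);  /* GTK 2/3 */
    gtk_widget_show_all(win);
#endif
    return win;
}
```

And that's just one function; multiply it across a whole toolkit surface, plus the X11-to-Wayland move.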
The Linux API/ABI doesn't cover the entire spectrum that the Windows API covers, which spans everything from the lowest-level kernel interfaces to the desktop environment and beyond. In Linux deployments, that spectrum is covered by a mix of different libraries from different developers, and these change over time.
The model of patching+recompiling the world for every OS release is a terrible hack that devs hate and that users hate. 99% of all people hate it because it's a crap model. Devs hate middlemen who silently fuck up their software and leave upstream with the mess, users hate being restricted to whatever software was cool and current two years ago. If they use a rolling distro, they hate the constant brokenness that comes with it. Of the 1% of people who don't hate this situation 99% of those merely tolerate it, and the rest are Debian developers who are blinded by ideology and sunk costs.
Good operating systems should:
1. Allow users to obtain software from anywhere.
2. Execute all programs that were written for previous versions reliably.
3. Not insert themselves as middlemen into user/developer transactions.
Judged from this perspective, Windows is a good OS. It doesn't nail all three all the time, but it gets the closest. Linux is a bad OS.
The answers to your questions are:
(1) It isn't backwards compatible for sophisticated GUI apps. Core APIs like the widget toolkits change all the time (GTK 1->2->3->4; Qt does the same). It's also not forwards compatible: compiling the same program on a new release may yield binaries that don't run on an old release. Linux library authors don't consider this a problem; Microsoft, Apple, and everyone else do. This is the origin of the glibc symbol-versioning errors everyone hits sometimes.
(2) Maintaining a stable API/ABI is not fun and requires a capitalist who says "keep app X working or else I'll fire you". The capitalist Fights For The User. Linux is a socialist/collectivist project with nobody playing this role. Distros like Red Hat clone the software ecosystem into a private space that's semi-capitalist again, and do offer stable ABIs, but their releases are just ecosystem forks and the wider issue remains.
(3) It hasn't changed, and it's still bad.
(4) Docker: "solves" the problem on servers by shipping the entire userspace with every app, and being itself developed by a for-profit company. Only works because servers don't need any shared services from the computer beyond opening sockets and reading/writing files, so the kernel is good enough and the kernel does maintain a stable ABI. Docker obviously doesn't help the moment you move outside the server space and coordination requirements are larger.
If it were made to allow C code to be combined with VB6 code easily, and a FOSS version of VB6 (and the other components it might use) were made available on ReactOS (and Wine; it would also run on Windows), then it might be better than using web technologies (and is probably better in a lot of ways). (There are still many problems with it, although it would avoid many problems too.)
Alternatively, RemObjects makes Elements, also a RAD programming environment, in which you can code in Oxygene (their Object Pascal), C#, Swift, Java, Go, or Mercury (VB) and target all platforms: .NET, iOS and macOS, Android, WebAssembly, Java, Linux, Windows.
Yes, you can build cross-platform GUI apps with Delphi. However, that requires using FireMonkey (FMX); if you build a GUI app using the VCL in Delphi, it's limited to Windows. If you build an app with Lazarus and the LCL, you CAN have it work cross-platform.
> Alternatively, RemObjects makes Elements, also a RAD programming environment, in which you can code in Oxygene (their Object Pascal), C#, Swift, Java, Go, or Mercury (VB) and target all platforms: .NET, iOS and macOS, Android, WebAssembly, Java, Linux, Windows.
Wait, you can make Android applications with Golang without too much sorcery??
I just wanted to convert some Golang CLI applications into GUIs for Android, and instead I ended up giving up on the project and just started recommending that people use Termux.
Please tell me if there is a simple method for Golang that can "just work" as the VisualBasic-style glue between CLI and GUI.
It's really pricey, and I'm not sure whether I could publish applications on F-Droid if they aren't open source, or how that might go with something like remobjects.com/gold/
One of the key principles of F-Droid is that an app must be reproducible (I think), or open source and buildable by the F-Droid servers. But I suppose reproducing the build would require having this software, which is paid in this case.
I started with VB6 so I'm sometimes nostalgic for it too but let's not kid ourselves.
We might take it for granted, but the React-like declarative top-down component model (as opposed to imperative UI) was a huge step forward. In particular, the fact that there's no difference between an initial render and a re-render, and that updating state is enough for everything to propagate down. That's why it went beyond the web, and why all modern native UI frameworks have a similar model these days.
> and why all modern native UI frameworks have a similar model these days.
Personally, I much prefer the approach taken by SolidJS/Svelte.
React's approach is very inefficient: the entire view tree is re-rendered when any change happens, and then it has to diff the new UI state against the old state and do reconciliation. This works well enough for tiny examples, but it's clunky at scale, and the code to do the diffing and reconciliation is insanely complicated. Hello world in React is something like 200 kB of JavaScript (smaller gzipped, but the browser still needs to parse it all at startup). And all of that diffing is pure overhead; it's simply not needed.
The SolidJS/Svelte model uses the compiler to figure out how changes to variables translate into changes in the rendered view tree. Those variables are wrapped up as "observed state". As a result, you can just update those variables, and exactly and only the parts of the UI that need to change will be redrawn. No overrendering, no diffing, no virtual DOM, and no reconciliation. Hello world in Solid or Svelte is minuscule: 2 kB or something.
Unfortunately, SwiftUI has copied React, and not the superior approach of these newer libraries.
The Rust "Leptos" library implements this same fine-grained reactivity, but it's still married to the web. I'm really hoping someone takes the same idea and ports it to desktop/native UI.
If there were sufficient interest in it, most performance issues could be solved. Look at Python or JavaScript: big companies have a financial interest in them, so they've poured an insane amount of capital into making them faster.
Being slower than other mainstream languages isn't really a problem in and of itself if it's fast enough to get the job done. Looking at all the ML and LLM work that's done in Python, I would say it is fast enough to get things done.
Only if I don't need to do anything beyond the built-in widgets and effects of Win32. If I need to do anything beyond that then I don't see me being more productive than if I were using a mature, well documented and actively maintained application runtime like the Web.
That's not really true. Even in the 90s there were large libraries of third-party widgets available for Windows that could be drag-and-dropped into VB, Delphi, and even the Visual C++ UI editor, covering tasks running the gamut from 3D graphics to interfacing with custom hardware.
The web was a big step backwards for UI design. It was a 30 year detour whose results still suck compared to pre-web UIs.
Whenever people bring this up I find it somewhat silly. Wine originally stood for "Windows Emulator". See old release notes ( https://lwn.net/1998/1112/wine981108.html ) for one example: "This is release 981108 of Wine, the MS Windows emulator." The name change was made for trademark and marketing reasons. The maintainers were concerned that if the project got good enough to frighten Microsoft, they might get sued for having "Windows" in the name. They also had to deal with confusion from people such as yourself who thought "emulation" automatically meant "software-based, interpreted emulation" and therefore that running stuff in Wine must have some significant performance penalty. Other Windows compatibility solutions like SoftWindows and Virtual PC used interpreted emulation and were slow as a result, so the Wine maintainers wanted to emphasize that Wine could run software just as quickly as the same computer running Windows.
Emulation does not mean that the CPU must be interpreted. For example, the DOSEMU emulator for Linux from the early 90s ran DOS programs natively using the 386's virtual 8086 mode, and reimplemented the DOS API. This worked similarly to Microsoft's Virtual DOS Machine on Windows NT. For a more recent example, the ShadPS4 PS4 emulator runs the game code natively on your amd64 CPU and reimplements the PS4 API in the emulator source code for graphics/audio/input/etc calls.
> The late-90's-to-early-2010's PC desktop experience was great for power users, especially creative users. Let's keep the dream alive.
It sure was. If you were already bored with Windows 3.11/95 and were getting into Linux, it was fantastic: you were picking up ground-floor skills that could keep you in a good career for most of the rest of your life.
Cool. Having major distributions default to using binfmt_misc to register Wine as the handler for PE executables (EXE files) would be nice, though. The next steps would obviously be for Windows apps to have their own OS-level identity, confined and permissioned per app using normal Linux security mechanisms; to run against a reproducible, pinned Wine runtime with clearly managed state; to integrate with the desktop as normal applications (launching, file associations, icons); and to produce per-app logs and crash information, so they can be operated and managed like native programs. We have AI now; this should not be rocket science or require major investment. It's the only viable way Linux replaces Windows.
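The registration itself is already tiny. A hedged sketch of what a distro (or an init service) would do, using the kernel's binfmt_misc rule format; 'MZ' is the PE/DOS magic, the wine path is an assumption, and it needs root plus a mounted binfmt_misc:

```c
/* Sketch: register Wine as the binfmt_misc interpreter for PE
   binaries. Rule format: :name:type:offset:magic:mask:interpreter:flags
   ('M' = match by magic bytes; 'MZ' is the DOS/PE header magic). */
#include <stdio.h>

int main(void) {
    FILE *f = fopen("/proc/sys/fs/binfmt_misc/register", "w");
    if (!f) { perror("open binfmt_misc register"); return 1; }
    fputs(":DOSWin:M::MZ::/usr/bin/wine:", f);
    fclose(f);
    return 0;
}
```

After that, ./program.exe just runs; the rest of your list (identity, confinement, pinned runtimes) is where the real work is.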
I'm back on Windows because of the shifting sands of Python and wxWindows that broke WikidPad, my personal wiki. The .exe from 2012 still works perfectly though, so I migrated back from Ubuntu to be able to use it without hassle.
It's my strong opinion that Windows 2000 Server, SP4 was the best desktop OS ever.
Yea! I love the spirit. Compatibility in computing is consternating. If my code is compiled for CPU arch X, the OS should just provide it with (using Rust terminology) standard-library facilities (networking, file system, allocator, etc.), de-conflict it with other programs, and get out of the way. The barriers between OSes, including between various Linux dependencies, feel like a problem we (idealistically speaking) shouldn't have.
I like this idea, and I know at least a few people who would love to use this if you can solve the
'unfortunate rough edges that people only tolerate because they use WINE as a last resort'
Whether those rough edges will ever be ironed out is a matter I'll leave to other people. But I love that someone is attempting this, just for the tenacity it shows. It reminds me of projects like Asahi and Cosmopolitan Libc.
Now, if we want to actually do something about GNU/Linux desktops not having a stable ABI, I think one solution would be a compatibility layer like Wine's, but implementing Ubuntu's ABIs. Then, as long as an app runs on supported Ubuntu releases, it would run on any system with this layer. I just hope it wouldn't be a buggy mess like Flatpak is.
This is a really cool idea. My only gripe is that Win32 is necessarily built on x86. AArch64/ARM is up and coming, and other architectures may arise in the future.
Perhaps that could be mitigated if someone could come up with an awesome OSS machine code translation layer like Apple's Rosetta.
There's not much that's x86-specific about Win32, and you have been able to make native ARM Windows programs for years already. Windows NT was designed to be portable from the start. Windows on ARM comes with a Rosetta-like system and can run Intel binaries out of the box.
It still is if you're an enterprise customer. Retail users aren't Microsoft's cash cows, so they get the ads and BS in their editions. The underlying APIs are still stable, and MS provides the LTSC and Server editions, which lack all that retail cruft, to businesses.
I'm an enterprise user and I find Windows 11 a complete disaster. They've managed to make something as trivial as right-clicking a slow operation.
I used to be a pretty happy Windows camper (I even got through Me without much complaint), but I'm so glad I moved to Linux and KDE for my private desktops before 11 hit.
The problem with Windows after Windows 7 isn't really the ads; it's the blatantly stupid use of web views to do the most mundane things, hogging hundreds of MB or even GBs for silly features, and that's still present in the enterprise versions.
Yes. Enterprise, Pro, and Home are the enshittified, retail editions. Enterprise just adds a few more features IIRC but still has ads. The other versions I mentioned above don't have any of that.
Competition. In the first half of the 90s Windows faced a lot more of it. Then they didn't, and standards slipped. Why invest in Windows when people will buy it anyway?
Upgrades. In the first half of the 90s Windows was mostly software bought by PC users directly, rather than getting it with the hardware. So, if you could make Windows 95 run in 4mb of RAM rather than 8mb of RAM, you'd make way more sales on release day. As the industry matured, this model disappeared in favor of one where users got the OS with their hardware purchase and rarely bought upgrades, then never bought them, then never even upgraded when offered them for free. This inverted the incentive to optimize because now the customer was the OEMs, not the end user. Not optimizing as aggressively naturally came out of that because the only new sales of Windows would be on new machines with the newest specs, and OEMs wanted MS to give users reasons to buy new hardware anyway.
UI testing. In the 1990s the desktop GUI paradigm was new and Apple's competitive advantage was UI quality, so Microsoft ran lots of usability studies to figure out what worked. It wasn't a cultural problem because most UI was designed by programmers who freely admitted they didn't really know what worked. The reason the start button had "Start" written on it was because of these tests. After Windows 95 the culture of usability studies disappeared, as they might imply that the professional designers didn't know what they were doing, and those designers came to compete on looks. Also it just got a lot harder to change the basic desktop UI designs anyway.
The web. When people mostly wrote Windows apps, investing in Windows itself made sense. Once everyone migrated to web apps it made much less sense. Data is no longer stored in files locally so making Explorer more powerful doesn't help, it makes more sense to simplify it. There's no longer any concept of a Windows app so adding new APIs is low ROI outside of gaming, as the only consumer is the browser. As a consequence all the people with ambition abandoned the Windows team to work on web-related stuff like Azure, where you could have actual impact. The 90s Windows/MacOS teams were full of people thinking big thoughts about how to write better software hence stuff like DCOM, OpenDoc, QuickTime, DirectMusic and so on. The overwhelming preference of developers for making websites regardless of the preferences of the users meant developing new OS ideas was a waste of time; browsers would not expose these features, so devs wouldn't use them, so apps wouldn't require them, so users would buy new computers to get access to them.
And that's why MS threw Windows away. It simply isn't a valuable asset anymore.
Idk why they use Electron for everything; they literally built the UI stack itself, and C# is insanely good at building UIs, if they'd stop trying to reinvent UIs in C#, that is.
It's quite common for a company to build a good product and then once the initial wave of ICs and management moves on, the next waves of employees either don't understand what they're maintaining or simply don't care because they see a chance to extract short term gains from the built-up intellectual capital others generated.
Is this really the case? I feel like most windows users just bought a laptop with Windows already on it. Even if all home users were running pirated versions they would still become entrenched in the world of Windows/Office which would then lead to enterprise sales.
I think Linux is the better choice for replacing the entire userland. From what I've seen, the BSDs don't have such an accessible userspace/kernelspace split. With some effort, on Linux you could probably just run an exe as your init.
Thing is, I want the opposite: the NT/2k/Win7 kernel with XFCE on top. The NT kernel is infinitely better designed and has much better support for the latest Intel/AMD hardware than Linux. And XFCE is much better than the modern Windows UI.
The difference between Win32 and Linux is that the Linux world never realized an operating system is more than a kernel plus a number of libraries and systems glued together: it is a stable ABI (even for kernel modules, so old drivers stay usable forever) and a single default, stable API for the user interface, audio, and so forth. Linux failed completely, not technologically, but in understanding what an OS is from the POV of a product.
Linux didn't aim to be an OS in the consumer sense (it is entirely an OS in the academic sense: in the scientific literature, OS == kernel, nothing else). The "consumer" OS is GNU/Linux or Android/Linux.
Depends on what task you're doing, and to a certain extent on how you prefer to do it. For example, sure, there are plenty of ways to tag/rename media files, but I've yet to find anything under Linux that matches the power of Mp3tag in a GUI.
Well, not having Proton definitely didn't work to grow gaming on Linux.
Maybe Valve can play the reverse switcheroo out of Microsoft's playbook and, once enough people are on Linux, force the developers' hand by not supporting Proton anymore.
I use some cool ham radio software, a couple of SDR applications, and a lithophane generator for my 3D printer. It all works great. If you have a cool utility or piece of software, why wouldn't you want to run it?
As for making music: as much as I love the free audio ecosystem, there are some very unique audio plugins with specific sounds that will never be ported. Thankfully, bridging them in with Wine works fairly well nowadays.
This will never work, because it isn't a radical enough departure from Linux.
Linux occupies the bottom of a well in the cartesian space. Any deviation is an uphill battle. You'll die trying to reach escape velocity.
The forcing factors that pull you back down:
1. Battle-testedness. The mainstream Linux distros just have more eyeballs on them. That means your WINE-first distro (which I'll call "Lindows" in honor of the dead OS from 2003) will have bugs that make people consider abandoning the dream and going back to GNOME Fedora.
2. Cool factor. Nobody wants to open up their riced-out Linux laptop in class and have their classmate look over and go "yo this n** running windows 85!" (So, you're going to have to port XMonad to WINE. I don't make the rules!)
3. Kernel churn. People will want to run this thing on their brand-new gaming laptop. That likely means they'll need a recent kernel. And while the kernel folks "never break userspace" in theory, in practice you'll need a new set of drivers and Mesa and other add-ons that WILL break things, especially things like 3D apps running through WINE (not to mention audio). Google can throw engineers at the problem of keeping Chromium working across graphics stacks. But can you?
If you could plant your flag in the dirt and say "we fork here" and make a radical left turn from mainline Linux, and get a cohort of kernel devs and app developers to follow you, you'd have a chance.
While true, people should note that WinRT, the technology infrastructure behind UWP, nowadays lives in Win32 and is what powers anything Copilot+ PC, Windows ML, the Windows Terminal rewrite, new Explorer extensions, the updated context menu on Windows 11, ...
It is a moving target; Proton is mostly stuck in the Windows XP world, from before most new APIs became a mix of COM and WinRT.
Even if that isn't the case, almost no company would bother with GNU/Linux to develop with Win32, instead of Windows, Visual Studio, business as usual.
This is amusing but infeasible in practice because it would need to be behaviorally compatible with Windows, including all bugs along with app compatibility mitigations. Might as well just use Windows at that point.
WINE has been reimplementing the Win32 ABI (not API) for decades. It already works pretty well; development has been driven by both volunteers and commercial developers (CodeWeavers) for a long time.
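And the compatibility is very literal. A sketch, assuming a MinGW cross-compiler: the same PE binary runs unmodified on Windows and under Wine, because Wine supplies the user32/kernel32 exports it imports.

```c
/* Trivial Win32 program. Cross-build on Linux with MinGW:
   x86_64-w64-mingw32-gcc hello.c -o hello.exe
   The same hello.exe runs on Windows and via `wine hello.exe`. */
#include <windows.h>

int WINAPI WinMain(HINSTANCE inst, HINSTANCE prev, LPSTR cmd, int show) {
    (void)inst; (void)prev; (void)cmd; (void)show;  /* unused */
    MessageBoxA(NULL, "Hello from Win32", "Wine demo", MB_OK);
    return 0;
}
```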
There are many programs that still do not work properly in WINE, even though it has been developed for decades. This in itself demonstrates the infeasibility of reimplementing Win32 as a stable interface on par with Windows. The result after all this effort is still patchy and incomplete.
Stable interfaces and not being in versioning hell (cough libc) would actually be good for FOSS as well.
If you make a piece of software today and want to package it for Linux, it's an absolute mess. I mean, look at Flatpak or Docker: a common solution is to ship your own userspace. That's just insane.
Agreed... I'm kind of a fan of AppImage/Flatpak/Snap (less so Snap, but still)... even then, I don't use a lot of apps, and most of my variety usually comes via Docker.
It's much more bloated than it should be, but it's the most reliable way to run old or new software on any given Linux.
Free software can still benefit from a stable ABI. If I want to run the software, it's better to download it in a format my CPU can understand, rather than download source, figure out the dependencies, wait for compiling (let's say it's a large project like Firefox or Chromium that takes hours to compile), and so on.
> If I want to run the software, it's better to download it in a format my CPU can understand, rather than download source, figure out the dependencies, wait for compiling (let's say it's a large project like Firefox or Chromium that takes hours to compile), and so on.
It's not really a choice between downloading a binary that depends on a stable ABI and compiling the source. The way most Linux software gets installed is by downloading a binary that has been compiled for your OS version (from the repos), and the next most common way is compiling the source through a system that figures out the dependencies for you (source-based distros and their repos).
We exist in a world where proprietary software exists, and always will exist. I want to be able to run said software if it's the best tool for the job, not be hobbled by an idealistic stance of "all software should be free so we don't bother to support proprietary software".
Linux with glibc is the complete opposite; there really does exist old Linux software that static-links in everything down to libc, just interacting with the kernel through syscalls—and it does (almost always) still work to run such software on a modern Linux, even when the software is 10-20 years old.
I guess this is why Linux containers are such a thing: you’re taking a dynamically-linked Linux binary and pinning it to a particular entire userland, such that when you run the old software, it calls into the old glibc. Containers work, because they ultimately ground out in the same set of stable kernel ABI calls.
(Which, now that I think of it, makes me wonder how exactly Windows containers work. I’m guessing each one brings its own NTOSKRNL, that gets spun up under HyperV if the host kernel ABI doesn’t match the guest?)
This is not a big problem if it's hard/unlikely enough to write a code that accidentally relies on raw syscalls. At least MS's dev tooling doesn't provide an easy way to bypass the standard DLLs.
> makes me wonder how exactly Windows containers work
I guess containers do the syscalls through the standard Windows DLLs like any regular userspace application. If it's a Linux container on Windows, probably the WSL syscalls, which I guess, are stable.
Honestly I might buy a T-shirt with such a quote.
I think glibc is such a pain that it is the reason why we have so vastly different package management and I feel like non glibc things really would simplify the package management approach to linux which although feels solved, there are definitely still issues with the approach and I think we should still all definitely as such look for ways to solve the problem
AppImage have some issues/restrictions like it cant run on older linux than one it was compiled on, so people compile it on the oldest pc's and a little bit of more quirks
AppImage are really good but zapps are good too, I had once tried to do something on top of zapp but shame that zapp went into the route of crypto ipfs or smth and then I don't really see any development of that now but it would be interesting if someone can add the features of zapp perhaps into appimage or pick up the project and build something similar perhaps.
At some point I've got to try this. I think it would be nice to have some tools to turn an existing programs into a zapps (there many such tools for making AppImages today).
* https://github.com/warptools/ldshim
Looks like you met the right guy because I have built this tool :)
Allow me to show my project, Appseed (https://nanotimestamps.org/appseed): It's a simple fish script which I had (prototyped with Claude) some 8-10 months ago I guess to solve exactly this.
I have a youtube video in the website and the repository is open source on github too.
So this actually worked fantastic for a lot of different binaries that I tested it on and I had uploaded it on hackernews as well but nobody really responded, perhaps this might change it :p
Now what appseed does is that you can think of it is that it can take a binary and convert it into two folders (one is the dynamic library part) and the other is the binary itself
So you can then use something like tar to package it up and run it anywhere. I can of course create it into a single elf-64 as well but I wanted to make it more flexible so that we can have more dynamic library like or perhaps caching or just some other ideas and this made things simple for me too
Ldshim is really good idea too although I think I am unable to understand it for the time being but I will try to understand it I suppose. I would really appreciate it if you can tell me more about Ldshim! Perhaps take a look at Appseed too and I think that there might be some similarities except I tried to just create a fish script which can just convert any dynamic binary usually into a static one of sorts
I just want more people to take ideas like appseed or zapp's and run with it to make linux's ecosystem better man. Because I just prototyped it with LLM's to see if it was possible or not since I don't have much expertise in the area. So I can only imagine what can be possible if people who have expertise do something about it and this was why I shared it originally/created it I guess.
Let me know if you are interested in discussing anything about appseed. My memory's a little rusty about how it worked but I would love to talk about it if I can be of any help :p
Have a nice new year man! :p
The flatpak ecosystem is problematic in that most packages are granted too much rights by default.
You can still get firefox as a .deb though.
https://launchpad.net/~mozillateam/+archive/ubuntu/ppa
It makes sense. Every distribution wants to be in charge of what set of libraries are available on their platform. And they all have their own way to manage software. Developing applications on Linux that can be widely used across distributions is way more complex than it needs to be. I can just ship a binary for windows and macOS. For Linux, you need an rpm and a dpkg and so on.
I use davinci resolve on Linux. The resolve developers only officially support Rocky Linux because anything else is too hard. I use it in Linux mint anyway. The application has no title bar and recording audio doesn’t work properly. Bleh.
We already have an entire platform like that (steam deck), and it's the best linux development experience around in my opinion.
Who needs ABI compatibility when your software is OSS? You only need API compatibility at that point.
Stable ABIs for certain critical pieces of independently-updatable software (libc, OpenSSL, etc.) is not even that big of a lift or a hard tradeoff. I’ve never run into any issues with macOS’s libc because it doesn’t version the symbol for fopen like glibc does. It just requires commitment and forethought.
Because almost certainly someone out there will want to use it. And they should be able to, because that is the entire point of free software: user freedom.
Even if we ship as source, even if the user has the skills to build it, even if the make file supports every version of the kernel, plus all other material variety, plus who knows how many dependencies, what exactly am I supposed to do when a user reports;
"I followed your instructions and it doesn't run".
Linux Desktop fails because it's not 1 thing, it's 100 things. And to get anything to run reliably on 95 of them you need to be extremely competent.
Distribution as source fails because there are too many unknown, and dependent parts.
Distribution as binary containers (Docker et al) are popular because it gives the app a fighting chance. While at the same time being a really ugly hack.
I think Rob pike has the right idea with go just statically link everything wherever possible. These days I try to do the same, because so much less can go wrong for users.
People don’t seem to mind downloading a 30mb executable, so long as it actually works.
To be honest, I think OSes are boring, and should have been that way since maybe 1995. The basic notions:
haven't changed since 1970, and the more modern GUI stuff hasn't changed since at least the early '90s. Some design elements, like are perhaps even arbitrary, but can serve as a mature foundation for better-concieved ideas, such as: I'm wondering why OSS - which according to one of its acolytes, makes all bugs shallow - couldn't make its flagship OS more stable and boring. It's produced anI wish either of those systems had the same hardware & software support. I’d swap my desktop over in a heartbeat if I could.
Breaking between major versions is annoying (2 to 3, 3 to 4), but for the most part it's renaming work and some slight API modifications, reminiscent of the Python 2 to 3 switch, and it only happened twice since 2000.
glibc-based toolchains are ultimately missing a GLIBC_MIN_DEPLOYMENT_TARGET definition that gets passed to the linker so it knows which minimum version of glibc your software supports, similar to how Apple's toolchain lets you target older MacOS from a newer toolchain.
I might seriously recommend it to newbies. There is just this love I have for Windows 7; even though I didn't really use it for much, it's so much more elegant in its own way than Windows 10.
It could be a really fun experiment, and I would be interested to see how it pans out.
Rough approximations have been possible since the early 2000s, but they’re exactly that: rough approximations. Details matter, and when I boot up an old XP/7 box there are aspects in which they feel more polished and… I don’t know, finished? Complete? Compared to even the big popular DEs like KDE.
Building a DE explicitly as a clone of a specific fixed environment would also do wonders to prevent feature creep and encourage a focus on fixing bugs and optimization instead of bells and whistles, which is something modern software across the board could use an Everest-sized helping of.
I think one point of friction could be ideological more than anything: most Linux users love open source and hate Windows, so they might not want to build anything that even replicates its UI.
Listen, I hate Windows just as much as the next guy, but I've got to give props: I feel nostalgic for Windows 7. If they provided both perfect .exe support and perfect Linux binary support, things could be really good. I hope somebody does it, and perhaps even adds it to loss32; it would be an interesting update.
Pro tip: if someone wants to create their own ISO, they can just customize things imperatively in MX Linux, even booting it entirely into RAM, and then use its magnificent option of snapshotting the system and converting that into an ISO. So it's definitely possible to create an ISO tweaked down to your exact configuration without any hassle (trust me, it's the easiest way to build ISOs; if you want hassle, Nix or bootc seems to be the way to go).
Regarding why it wouldn't hit: I don't know. I already build some of my own ISOs, and I could build a Windows-like one (on the MX Linux principle) and upload it for free on Hugging Face, perhaps, but the idea is about mass appeal.
Yes, I can do that, but I would prefer an ISO that just did it out of the box, something I could share with someone new to Linux. And yes, I could have the new person make the changes themselves, but why? There's really no reason to. This feels like low-hanging fruit that nobody has touched, which is why I was curious.
But also, as the other comment pointed out: sure, we can build this thing, and there are definitely genuine reasons why we probably should; they give some good reasons, and I agree with them overall.
Like, if you ask me, it would be fun to have more options, especially considering this is Linux, where freedom is celebrated :p
Windows kept bogging down the system trying to download a dozen different language versions of Word (for which I didn't have a licence and didn't want regardless). Steam kept going into a crash-restart cycle. The virus scanner was ... being difficult.
Everything just works on Linux, except some games on Proton have sound issues that I still need to work out.
Is this 1998? Linux is forever having sound issues. Why is sound so hard?
As always: it is not Linux's fault, but it is Linux's problem.
It's one of the reasons why I moved to OSX + a Linux virtual machine. I get the best of both worlds. Plus, the hardware quality of a 128 GB unified-memory MacBook Pro M4 Max is way beyond anything else on the market.
It doesn’t help that they only officially support Rocky Linux. I use Mint. I assume there are some magic PipeWire / ALSA / PulseAudio commands I could run that would glue everything together properly, but I can’t figure it out. It just seems so complicated.
What are some examples?
A recent example: in San Andreas, the seaplane never spawns if you're running Windows 11 24H2 or newer, all due to a bug that has always been in the game; only recent changes in Windows made it show up. If anybody's interested, you can read the investigation here: https://cookieplmonster.github.io/2025/04/23/gta-san-andreas...
I see there are guides on Steam forums on how to get it to run under Windows 11 [0], and they are quite involved for someone not overly familiar with computers outside of gaming.
0: https://steamcommunity.com/sharedfiles/filedetails/?id=29344...
One more popular example is Grid 2, another is Morrowind. Both crash on launch, unless you tweak a lot of things, and even then it won't always succeed.
Need for Speed II: SE is "platinum" on Wine, and pretty much unable to be run at all on Windows 11.
[0] https://learn.microsoft.com/en-us/windows/win32/direct3darti...
It's a great game, unfortunately right now I am not able to play it anymore :( even though I have the original CD.
Unfortunately, Wine is of no help here :(
Also original Commandos games.
Could someone who knows this area explain the following?
1. The exact problem with the Linux ABI
2. What causes it (the issues that makes it such a challenge)
3. How it has changed over the years, and its current state
4. Any serious attempts to resolve it
I've been on Linux for maybe 2 decades at this point. I haven't noticed any issues with the ABI so far, perhaps because I install everything from the distro repo or build it through the package manager. If I don't understand it, there are surely others who want to know it too. (Not trying to brag here; I'm referring to the time I've spent on it.)
I know that this is a big ask. The best course for me is of course to research it myself. But those who know the whole history tend to have a well organized perspective of it, as well as some invaluable insights that are not recorded anywhere else. So if this describes you, please consider writing it down for others. Blog is probably the best format for this.
My understanding is that very old statically linked Linux images still run today because, paraphrasing Linus, "we don't break user space".
Also, if you happened to have linked that image as a.out, it wouldn't work on a kernel from this year, but that's probably not the case ;)
The kernel doesn't break user space. User space breaks on its own.
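A minimal sketch of what that guarantee buys you: link libc in statically and talk to the kernel through the raw syscall layer, and the only external interface the binary depends on is the syscall table, which the kernel promises not to break.

    /* hello_syscall.c
     * build: gcc -static -o hello hello_syscall.c
     * The binary carries its own libc, so the kernel's stable
     * syscall ABI is the only thing it needs from the host. */
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void) {
        const char msg[] = "still works\n";
        /* write(2) through the raw syscall entry point rather
         * than the versioned libc wrapper */
        syscall(SYS_write, 1, msg, sizeof msg - 1);
        return 0;
    }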
Good operating systems should:
1. Allow users to obtain software from anywhere.
2. Execute all programs that were written for previous versions reliably.
3. Not insert themselves as middlemen into user/developer transactions.
Judged from this perspective, Windows is a good OS. It doesn't nail all three all the time, but it gets the closest. Linux is a bad OS.
The answers to your questions are:
(1) It isn't backwards compatible for sophisticated GUI apps. Core APIs like the widget toolkits change their API all the time (GTK 1->2->3->4, Qt also does this). It's also not forwards compatible. Compiling the same program on a new release may yield binaries that don't run on an old release. Linux library authors don't consider this a problem, Microsoft/Apple/everyone else does. This is the origin of the glibc symbol versioning errors everyone experiences sometimes.
(2) Maintaining a stable API/ABI is not fun and requires a capitalist who says "keep app X working or else I'll fire you". The capitalist Fights For The User. Linux is a socialist/collectivist project with nobody playing this role. Distros like Red Hat clone the software ecosystem into a private space that's semi-capitalist again, and do offer stable ABIs, but their releases are just ecosystem forks and the wider issue remains.
(3) It hasn't changed, and it's still bad.
(4) Docker: "solves" the problem on servers by shipping the entire userspace with every app, and being itself developed by a for-profit company. Only works because servers don't need any shared services from the computer beyond opening sockets and reading/writing files, so the kernel is good enough and the kernel does maintain a stable ABI. Docker obviously doesn't help the moment you move outside the server space and coordination requirements are larger.
Never happens for me on Arch, which I've run as my primary desktop for 15 years.
Alternatively, RemObjects makes Elements, also a RAD programming environment, in which you can code in Oxygene (their Object Pascal), C#, Swift, Java, Go, or Mercury (VB) and target all platforms: .NET, iOS and macOS, Android, WebAssembly, Java, Linux, Windows.
Wait, you can make Android applications with Golang without too much sorcery??
I just wanted to convert some Golang CLI applications into GUIs for Android, and I instead ended up giving up on the project and just recommending that people use Termux.
Please tell me if there is a simple method for Golang that can "just work" as the Visual Basic-like glue code for gluing a CLI to a GUI.
Why don't you try it out: https://www.remobjects.com/elements/gold/
One of the key principles of F-Droid is that software must be reproducible (I think), or at least open source and buildable on F-Droid's servers, but I suppose reproducibility would require having this software, which is paid in this case.
We might take it for granted, but the React-like declarative top-down component model (as opposed to imperative UI) was a huge step forward: in particular, that there's no difference between an initial render and a re-render, and that updating state is enough for everything to propagate down. That's why it went beyond the web, and why all modern native UI frameworks have a similar model these days.
Personally, I much prefer the approach taken by solidjs / svelte.
React’s approach is very inefficient - the entire view tree is rerendered when any change happens. Then they need to diff the new UI state with the old state and do reconciliation. This works well enough for tiny examples, but it’s clunky at scale. And the code to do diffing and reconciliation is insanely complicated. Hello world in react is like 200kb of javascript or something like that. (Smaller gzipped, but the browser still needs to parse it all at startup). And all of that diffing is also pure overhead. It’s simply not needed.
The solidjs / svelte model uses the compiler to figure out how changes to variables result in changes to the rendered view tree. Those variables are wrapped up as "observed state". As a result, you can just update those variables, and exactly and only the parts of the UI that need to change will be redrawn. No overrendering. No diffing. No virtual DOM and no reconciliation. Hello world in solid or svelte is minuscule - 2kb or something.
Unfortunately, SwiftUI has copied React, and not the superior approach of the newer libraries.
The Rust "Leptos" library implements this same fine-grained reactivity, but it's still married to the web. I'm really hoping someone takes the same idea and ports it to desktop / native UI.
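For a feel of how small the core idea is, here's a toy sketch of solid-style fine-grained reactivity (hypothetical names throughout, and C standing in for the framework): effects subscribe to the signals they read, and setting a signal re-runs only those effects.

    #include <stdio.h>

    #define MAX_SUBS 8

    typedef void (*effect_fn)(void);

    typedef struct {
        int       value;
        effect_fn subs[MAX_SUBS];
        int       nsubs;
    } signal_t;

    /* The effect currently being set up; reads inside it subscribe. */
    static effect_fn current_effect;

    static int signal_get(signal_t *s) {
        if (current_effect && s->nsubs < MAX_SUBS)
            s->subs[s->nsubs++] = current_effect;  /* record dependency */
        return s->value;
    }

    static void signal_set(signal_t *s, int v) {
        s->value = v;
        for (int i = 0; i < s->nsubs; i++)
            s->subs[i]();  /* re-run only effects that read this signal */
    }

    static void run_effect(effect_fn fn) {
        current_effect = fn;
        fn();              /* first run registers its dependencies */
        current_effect = NULL;
    }

    static signal_t count;

    static void render_label(void) {
        /* stands in for repainting exactly one label in the UI */
        printf("count = %d\n", signal_get(&count));
    }

    int main(void) {
        run_effect(render_label);  /* prints count = 0 and subscribes */
        signal_set(&count, 1);     /* re-renders only the label */
        signal_set(&count, 2);
        return 0;
    }

Real libraries add dependency cleanup, batching, and memory management on top, but note there's no tree diffing anywhere in the model.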
But if you liked that, consider that C# was in many ways a spiritual successor to Delphi, and MS still supports native GUI development with it.
The web was a big step backwards for UI design. It was a 30 year detour whose results still suck compared to pre-web UIs.
Maybe one day something like Lazarus or Avalonia will catch up, but today I feel that Electron is best at what it does.
Emulation does not mean that the CPU must be interpreted. For example, the DOSEMU emulator for Linux from the early 90s ran DOS programs natively using the 386's virtual 8086 mode, and reimplemented the DOS API. This worked similarly to Microsoft's Virtual DOS Machine on Windows NT. For a more recent example, the ShadPS4 PS4 emulator runs the game code natively on your amd64 CPU and reimplements the PS4 API in the emulator source code for graphics/audio/input/etc calls.
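Wine is the same principle again: let the machine code run natively and reimplement the API underneath it. As a toy illustration of that idea (a hypothetical LD_PRELOAD shim, not how Wine is actually structured), you can swap out a single libc call while the rest of the program runs untouched:

    /* shim.c - the program's machine code runs natively on the CPU;
     * only this one library call is intercepted and reimplemented.
     *
     * build: gcc -shared -fPIC -o shim.so shim.c
     * run:   LD_PRELOAD=./shim.so ./some_dynamically_linked_program */
    #include <stdio.h>

    /* Our reimplementation shadows libc's puts in the target process. */
    int puts(const char *s) {
        return fprintf(stdout, "[reimplemented] %s\n", s) < 0 ? -1 : 0;
    }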
It sure was. If you were already bored by Windows 3.11/95 and were getting into Linux, it was fantastic. You were getting in on the ground floor with skills that could keep you in a good career for most of the rest of your life.
This is something that is very much needed to make Linux much more user friendly for new users.
It's my strong opinion that Windows 2000 Server, SP4 was the best desktop OS ever.
Again?
https://blog.hiler.eu/win32-the-only-stable-abi/
https://news.ycombinator.com/item?id=32471624
> unfortunate rough edges that people only tolerate because they use WINE as a last resort
Whether those rough edges will ever be ironed out is a matter I'll leave to other people. But I love that someone is attempting this, just because of the tenacity it shows. It reminds me of projects like Asahi and Cosmopolitan Libc.
Now, if we're to do something to actually solve GNU/Linux desktops not having a stable ABI, I think one solution would be to make a compatibility layer like Wine's, but using Ubuntu's ABIs. Then, as long as the app runs on supported Ubuntu releases, it will run on a system with this layer. I just hope it wouldn't be a buggy mess like flatpak is.
I wanted to be nice and entered a genuine Windows key that was still in my laptop's firmware somewhere.
As a thank-you, Microsoft pulled dozens of features out of my OS, including Remote Desktop.
As soon as these latest FSR drivers are ported over I will swap to Linux. What a racket, lol.
Perhaps that could be mitigated if someone could come up with an awesome OSS machine code translation layer like Apple's Rosetta.
I used to be a pretty happy Windows camper (I even got through Me without much complaint), but I'm so glad I moved to Linux and KDE for my private desktops before 11 hit.
Things started going downhill after that.
But you can use Group Policy etc. freely. I don't know how Win 11 is, though.
Competition. In the first half of the 90s Windows faced a lot more of it. Then they didn't, and standards slipped. Why invest in Windows when people will buy it anyway?
Upgrades. In the first half of the 90s Windows was mostly software bought by PC users directly, rather than getting it with the hardware. So, if you could make Windows 95 run in 4mb of RAM rather than 8mb of RAM, you'd make way more sales on release day. As the industry matured, this model disappeared in favor of one where users got the OS with their hardware purchase and rarely bought upgrades, then never bought them, then never even upgraded when offered them for free. This inverted the incentive to optimize because now the customer was the OEMs, not the end user. Not optimizing as aggressively naturally came out of that because the only new sales of Windows would be on new machines with the newest specs, and OEMs wanted MS to give users reasons to buy new hardware anyway.
UI testing. In the 1990s the desktop GUI paradigm was new and Apple's competitive advantage was UI quality, so Microsoft ran lots of usability studies to figure out what worked. It wasn't a cultural problem because most UI was designed by programmers who freely admitted they didn't really know what worked. The reason the start button had "Start" written on it was because of these tests. After Windows 95 the culture of usability studies disappeared, as they might imply that the professional designers didn't know what they were doing, and those designers came to compete on looks. Also it just got a lot harder to change the basic desktop UI designs anyway.
The web. When people mostly wrote Windows apps, investing in Windows itself made sense. Once everyone migrated to web apps it made much less sense. Data is no longer stored in files locally so making Explorer more powerful doesn't help, it makes more sense to simplify it. There's no longer any concept of a Windows app so adding new APIs is low ROI outside of gaming, as the only consumer is the browser. As a consequence all the people with ambition abandoned the Windows team to work on web-related stuff like Azure, where you could have actual impact. The 90s Windows/MacOS teams were full of people thinking big thoughts about how to write better software hence stuff like DCOM, OpenDoc, QuickTime, DirectMusic and so on. The overwhelming preference of developers for making websites regardless of the preferences of the users meant developing new OS ideas was a waste of time; browsers would not expose these features, so devs wouldn't use them, so apps wouldn't require them, so users would buy new computers to get access to them.
And that's why MS threw Windows away. It simply isn't a valuable asset anymore.
The answer to maintaining a highly functional and stable OS is piles and piles of backwards compatibility misery on the devs.
You want Windows 9? Sorry, some code out there checks whether the version string starts with "Windows 9" to determine if the OS is Windows 95 or 98.
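The widely-reported (though never officially confirmed) pattern looks something like this:

    #include <stdio.h>
    #include <string.h>

    /* The legacy check blamed for skipping "Windows 9": third-party
     * code matching the OS name by prefix to detect the 9x family. */
    static int is_win9x(const char *os_name) {
        return strncmp(os_name, "Windows 9", 9) == 0;
    }

    int main(void) {
        printf("%d\n", is_win9x("Windows 95")); /* 1 */
        printf("%d\n", is_win9x("Windows 98")); /* 1 */
        printf("%d\n", is_win9x("Windows 9"));  /* 1, and that's the bug */
        printf("%d\n", is_win9x("Windows 10")); /* 0 */
        return 0;
    }

Naming the release "Windows 10" sidestepped every binary that shipped with that prefix check.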
This is largely true in North America, UK and AUS/NZ, less true in Europe, a mixed bag in the Middle East and mostly untrue everywhere else.
I might unironically use this. The Windows 2000 era desktop was light and practical.
I wonder how well it performs with modern high-resolution, high-dpi displays.
Just target Windows, business as usual, and let Valve do the hard work.
But they do test their Windows games on Linux now and fix issues as needed. I read that CDProjekt does that, at least.
How many game studios were bothering with native Linux clients before Proton became known?
That goes back to address the original question of "But would you want to run these Win32 software on Linux for daily use?"
Maybe Valve can play the reverse switcheroo out of Microsoft's playbook and, once enough people are on Linux, force the developers' hand by not supporting Proton anymore.
This will never work, because it isn't a radical enough departure from Linux.
Linux occupies the bottom of a well in the cartesian space. Any deviation is an uphill battle. You'll die trying to reach escape velocity.
The forcing factors that pull you back down:
1. Battle-testedness. The mainstream Linux distros just have more eyeballs on them. That means your WINE-first distro (which I'll call "Lindows" in honor of the dead OS from 2003) will have bugs that make people consider abandoning the dream and going back to GNOME Fedora.
2. Cool factor. Nobody wants to open up their riced-out Linux laptop in class and have their classmate look over and go "yo this n** running windows 85!" (So, you're going to have to port XMonad to WINE. I don't make the rules!)
3. Kernel churn. People will want to run this thing on their brand-new gaming laptop. That likely means they'll need a recent kernel. And while the kernel devs "never break userspace" in theory, in practice you'll need a new set of drivers and Mesa and other add-ons that WILL break things, especially things like 3D apps running through WINE (not to mention audio). Google can throw engineers at the problem of keeping Chromium working across graphics stacks. But can you?
If you could plant your flag in the dirt and say "we fork here" and make a radical left turn from mainline Linux, and get a cohort of kernel devs and app developers to follow you, you'd have a chance.
And failing everything else, Microsoft is in a position to put WSL front and center, and yet again, those are the laptops that normies will buy.
It's not a moving target. Proton and Wine have shown it can be achieved with greater compatibility than even what Microsoft offers.
It is a moving target; Proton is mostly stuck in the Windows XP world, from before most new APIs started being a mix of COM and WinRT.
Even if that isn't the case, almost no company would bother with GNU/Linux to develop with Win32, instead of Windows, Visual Studio, business as usual.
It's a start.
https://en.wikipedia.org/wiki/Loss_(Ctrl%2BAlt%2BDel)
Better to consider is the Proton verified count, which has been rocketing upwards.
https://www.protondb.com/
(That and Linux doesn't implement win32 and wine doesn't exclusively run on Linux.)
If you make a piece of software today and want to package it for Linux, it's an absolute mess. I mean, look at Flatpak or Docker: a common solution is to ship your own userspace. That's just insane.
It's much more bloated than it should be, but it's the best way to reliably run old or new software on any given Linux.
If it's a choice between downloading a binary that depends on a stable ABI and compiling the source: the way most Linux software gets installed is downloading a binary that has been compiled for your OS version (from the repos), and the next most common way is compiling source through a system that figures out the dependencies for you (source-based distros and repos).