The devil is in the details though: https://hg-edge.mozilla.org/integration/autoland/diff/8a6d6c...

Looking briefly at this,

* there doesn't appear to be any migration from the old directory to the new directory. Does the code just use ~/.mozilla if it still exists and ~/.config/mozilla otherwise... or does it _require_ MOZ_LEGACY_HOME=1 to be set to keep using your existing config, and just lose all your config if you don't set it?

* there doesn't appear to be a proper split between ~/.cache (always-removable cached data), ~/.config (configuration), and ~/.local/share (application data that is neither user-editable configuration nor merely cached data); they just moved the entire set of profile stuff to ~/.config
Is that about right, or do I need to read the code more carefully?
> At last! Mozilla fixing longstanding bugs! (I jest)
You joke, but they did just close out the initial implementation of a something-like-27-year-old bug: about:keyboard was recently added to Nightly to let you change or clear the built-in keyboard shortcuts of a bunch of menu items like save, back, refresh, open dev tools, or whatever.
From that diff it looks to me that if ~/.mozilla exists OR if MOZ_LEGACY_HOME is set it uses ~/.mozilla; otherwise it uses the $XDG_CONFIG_HOME/mozilla directory instead.
So no migration to the XDG directory, but also no throwing away your existing data either.
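In other words, roughly this selection logic (a sketch only; the actual Firefox code is C++ and may differ in detail):

    if [ -d "$HOME/.mozilla" ] || [ -n "$MOZ_LEGACY_HOME" ]; then
        profile_base="$HOME/.mozilla"                              # legacy location wins if present or forced
    else
        profile_base="${XDG_CONFIG_HOME:-$HOME/.config}/mozilla"   # XDG location for fresh setups
    fi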
I know a few apps that did the same (mpv, for example). If you still have it in the home root it uses that; when you move it to .config it uses that instead. Auto-migrating could and would create issues.
Who knows what might be touching that data today. Or backing it up, etc.
Looking at the full diff[0] it certainly looks like it's using ~/.cache (and has been for some time), but I cannot see anything about ~/.local/share, no.
[0] https://hg-edge.mozilla.org/integration/autoland/rev/8a6d6c0...

Which already is a huge improvement, and better than bikeshedding for decades over whether they should also use $HOME/.local/share/mozilla in addition.
Which means my .config directory, which is under backup, is gonna be spammed with temporary and cache files. Though not XDG-compliant, at least ~/.mozilla had been in place for decades, and it's already excluded from the backup set on my machines.

They should either adopt XDG fully, putting cache files where they belong, or not change things haphazardly for so little benefit.
Not cache files, if I understand correctly; they have been using $HOME/.cache/mozilla for a long time already.

You can exclude $HOME/.config/mozilla from your backup all the same if that causes you issues.

I personally appreciate them not cluttering $HOME with this move. It is better than waiting another 21 years for them to support the XDG spec fully by splitting share and config.
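For example, with an rsync-based backup (paths made up; adjust to whatever tool you actually use):

    rsync -a --exclude='.config/mozilla/' --exclude='.mozilla/' "$HOME/" /mnt/backup/home/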
I think there is probably a lot of work to do to fully pry the .mozilla folder apart. For a long time they've simply shipped everything in that folder and rolled with it. Making decisions on what is actually cache and what is user config vs "application data" is probably going to be harder than splitting the folder.
That's true, but they've already done it for macOS... ~/Library/Application Support/Firefox/ (for both the config and non-config data) versus ~/Library/Caches/Firefox/ (for cached data that can always be deleted)
So, things change over time. The question is: is the codebase at Mozilla still "living", in that it can adjust or be adjusted?

https://www.linuxfromscratch.org/blfs/view/svn/xsoft/firefox...
Requiring a mozconfig file shows that the codebase has failed to transition to CMake or Meson/Ninja (directly; there is some Python wrapper which may help here, but I refer to the primary configuration). Mozilla gave up on Firefox a long time ago already.
More of this, less AI-cramming, please!

[0] https://bugzilla.mozilla.org/show_bug.cgi?id=259356
Oh, I hadn't even _thought_ of that. Yeah, that's going to be a fun debate. Realistically, extensions shouldn't care about the folder structure of other parts of the profile, but I also know that there is a _lot_ of history there.
You may jest, but sadly, that was my first knee-jerk reaction to the headline, too. "Wow, Mozilla actually fixes Firefox bugs? Let's go!" This is how low the bar has gotten :(
This is a meaningful step! For years, XDG Base Directory compliance has been spotty across major applications. Firefox's adoption matters because it's widely used and its implementation may encourage others to follow suit.
The Arch Wiki documentation will likely need updates [1], but sadly the list of non-compliant software is far too long.
[1]: https://wiki.archlinux.org/title/XDG_Base_Directory
FWIW, the OpenSSH devs believe it to be a potential security risk to adopt XDG:
> Adding additional configuration paths is confusing and potentially risky for .ssh as, quite unlike usual "desktop" apps, it grants system access and having its configuration smeared across several possible paths makes managing this more confusing and brittle.[1]

I think this is clearly true for something like ~/.ssh/authorized_keys; it is perhaps less true for ~/.ssh/config and/or ~/.ssh/known_hosts, which could go in XDG_CONFIG_HOME and XDG_DATA_HOME, but if part of the point of the XDG Base Directory spec is to reduce dotfiles in $HOME, then it makes less sense to move some, but not all, of those files.

[1] https://marc.info/?l=openssh-unix-dev&m=170687803731931&w=2
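FWIW, a determined user can already approximate the split on the client side with the real -F flag; a sketch, not an OpenSSH-endorsed layout (known_hosts would then be redirected with the UserKnownHostsFile option inside that config file):

    alias ssh='ssh -F "${XDG_CONFIG_HOME:-$HOME/.config}/ssh/config"'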
I think most people are okay with software such as OpenSSH keeping its long-established conventions, in the same way I don't think a lot of people mind ".bashrc" being where it is. It's manageable if there are just a few and they're well-known.

However, this "exemption" does not and should not apply to anything newer. Things like Cargo, Snap, Steam, Jupyter, Ghidra, and Gradle: none of those should be putting their stuff (especially temporary junk) directly and unsegmented into $HOME.
At some point I had more than 50 different dotfiles and dotfolders in my $HOME. It was unwieldy and nasty to look at. I couldn't even figure out what created some of those files because they were so generic.
Plain $HOME as the dumping ground simply does not scale beyond a select few.
This is great news. Firefox respects the system-defined folders on Windows and macOS. Linux, being the free spirit it is, doesn't have a 'standard'. XDG makes recommendations that make a certain amount of sense and aligning to that is a great step forward for such a large project.
The reason most software is not "XDG-compliant" is that most software predates the XDG basedir spec, which only came into existence in 2021 (edit: oops, that's just version 0.8; version 0.6 was available in 2003).

It will be nice for software, as it updates, to support this standard, which seems to be gaining adoption, and it will make users' home directories much cleaner. But it's most important for software to _keep working_, and to have a migration path that doesn't lose the user's config or end up with two configs and no clear rule about which one will be used.
I think it is possible for software to keep working and I can think of many ways to implement automatic "migration", which is essentially just copying files to the new directory and then deleting the previous directory if the copy was successful[1], and if one wants, could create a compressed backup of the directory prior to doing that.
[1] Could implement a verification step as well.
> which is essentially just copying files to the new directory and then deleting the previous directory if the copy was successful
And deleting the partially copied data if the copy wasn’t successful, and making sure “just copying files to the new directory” didn’t overwrite data, and probably a few more tricky scenarios, e.g. ones involving access rights.
Also, if you think it could be a directory rename, there are tricky corners there, too. How do you determine whether source and target are on the same disk, for example?
It _is_ possible, but doing it robustly is far from trivial.
You are listing edge cases that exist, but the relevant question is whether they meaningfully apply to Firefox profile migration on typical systems.
Same-disk detection can be done through stat() on both paths and comparing st_dev, which is trivial. But more importantly, why does this matter for migration? If it is cross-filesystem, copy and move works fine. If you are concerned about atomicity, that is a different problem, but Firefox profiles are not typically manipulated concurrently during a migration that happens once at startup.
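The same check from a shell, using GNU stat, just to show how small it is (assumes both paths exist):

    if [ "$(stat -c %d "$HOME/.mozilla")" = "$(stat -c %d "${XDG_CONFIG_HOME:-$HOME/.config}")" ]; then
        echo "same filesystem: a plain rename would do (rename is atomic within one filesystem)"
    else
        echo "different filesystems: copy, verify, then delete"
    fi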
Partial copy cleanup is reasonable, but again, context matters. For a one-time migration triggered at browser start with exclusive access to the profile, you verify checksums or sizes post-copy, and if verification fails, you do not delete the source. User gets an error, tries again later. Not complex.
As for overwrites: do not overwrite if target exists. Check once before starting. If the XDG path already has data, skip migration entirely or prompt. This is not a continuous sync operation.
FWIW "cp -a" preserves access rights on Unix. On Windows, ACLs can be trickier but for user-owned profiles it is usually non-issue.
The real complexity in robust file operations show up with network filesystems (SMB, NFS), concurrent access patterns, or where atomicity guarantees are critical (and a move operation is indeed atomic, assuming typical systems). For a single-user profile migration that happens once with exclusive lock? The corner cases you mentioned are either straightforward to handle or do not apply.
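Putting those pieces together, the whole flow is on the order of the following (illustrative shell only, not what Firefox ships; assumes the browser holds the profile exclusively while this runs):

    old="$HOME/.mozilla"
    new="${XDG_CONFIG_HOME:-$HOME/.config}/mozilla"

    if [ -e "$new" ]; then
        echo "target already exists, not migrating" >&2   # never overwrite existing data
    elif [ -d "$old" ]; then
        if cp -a "$old" "$new" && diff -r "$old" "$new" >/dev/null; then
            rm -rf "$old"                                 # drop the source only after the copy verified
        else
            echo "migration failed, keeping $old untouched" >&2
            rm -rf "$new"                                 # discard the partial copy
        fi
    fi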
I don't like the Unix filesystem structure in general. What's the point of having directories like /usr or /lib in the root directory when they could all be under, for example, /ubuntu24? Then the user could keep files in the root directory, instead of in /home, without a lot of system files in the way.

Also, I don't like that some distributions suggest partitioning a drive. This is inconvenient, because you can run out of space on one partition but have a lot of free space on another. It simply doesn't make sense. And if you have swap as a partition, you get slightly faster access, but you cannot change the size!
> you can run out of space on one partition but have a lot of free space on another

That's exactly the point: you can run out of space in your /home, but that does not affect, for example, /var. Or vice versa: a log explosion in /var is contained within its own partition and does not clog the entire filesystem.
There are a lot of reasons. Just three off the top of my head:
1. The way Unix works, a directory is a file, so if you can write in a directory you'll also be able to move directories around (and thus break the structure you mentioned completely).
2. Doesn't make sense for multi-user. Yes, I understand most people have their own computers, but (1) why design it in a way that breaks multi-user unnecessarily? (2) there are a lot of utility users, and having them get access to user files because of the way this is structured is silly.
3. `grep -r` is going to be a pain in the ass when searching your own files, because it'll also search all the other system subdirectories too.
It's just historical. I believe the large number of top-level directories was a result of Ken not having enough space on a single disk on his PDP, back when disk space was precious.
For years I've been putting all user data into a separate /data partition and keeping the OS partition small (~30 GB). But you have to set this up when the system is first installed. When I still used Windows I had the same C:/D: split.

More recently I've started putting kernels into a bigger ESP (EFI) partition, with sd-boot or UKIs.

With terabyte system disks, running out of space mostly doesn't happen anymore unless you made the system partition(s) small. Don't do that; give them plenty of GB, each of which is now only a thousandth of the disk.
I’m honestly having issues deciding if this is bait or not. Surely you understand that UNIX is a multi-user operating system and that partitioning drives exactly for the reason you describe is critical to ensure that, for example, runaway log growth doesn’t cause a database to shut down?
I think the XDG spec is pretty petty. What difference does it make that the files are in ~/.config/mozilla instead of ~/.mozilla? And calling it a bug is presumptuous.
One being that it's _my_ $HOME, not some random developers'. I literally had more than 50 different dotfiles and dotfolders in my $HOME at some point. It was a garbage dump and I couldn't even identify the culprit with some of them. Simply disrespectful.
Then there's the issue of cleaning up leftovers and stale cache files. It shouldn't take a custom script cleaning up after every special snowflake that decided to use some arbitrarily-named directory in $HOME.
Not following the spec also makes backing up vital application state much, much harder.

In the end, I made my $HOME not writeable so I could instantly find out if some software wants to take a dump there. It turns out it's often simply unnecessary as well: the software doesn't even care, it just prints an error and continues.
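Presumably something along these lines (removing write permission on the directory itself; existing subdirectories stay writable, only creating new entries directly in $HOME fails):

    chmod u-w "$HOME"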
The difference is that I don't use the default XDG directories, because I loathe dot-files and hidden directories, so I set the XDG environment variables to put everything where I want.
Then Firefox (and ansible, and many others) comes barreling in, dropping an unconfigurable dot-directory in my fucking home folder and ignoring the perfectly good XDG variables I have set.

It is a constant struggle to keep my home folder feeling like my home. Developers ought to learn some fucking respect.
This. I set an alias for `adb` to use `"$XDG_DATA_HOME"/android` instead of `~/.android`, because it stores its keys there for whatever reason. I would rather not see my home folder cluttered with hidden files; it makes backing things up unnecessarily complex.
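# keep adb's files under $XDG_DATA_HOME/android: ANDROID_USER_HOME is the SDK's
# documented override, and forcing HOME in the alias covers tools that ignore it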
export ANDROID_USER_HOME="$XDG_DATA_HOME"/android
alias adb='HOME="$ANDROID_USER_HOME" adb'
Have you ever run `ls -al ~/` on a heavily used Unix system? Absolute rot and chaos. I have like 100 hidden directories and files in the root of my home directory. Some of them are caches, some are configs.
The main benefit (which, even with this change, Firefox won't get) is the separation of configuration, cache files, binaries, etc., which sysadmins likely want completely different policies for: e.g. cache shouldn't be backed up, config shouldn't be executable, etc.
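For reference, the split the spec defines, with defaults (each one overridable through the corresponding environment variable):

    # XDG_CONFIG_HOME -> ~/.config        user-editable configuration
    # XDG_DATA_HOME   -> ~/.local/share   application data that isn't hand-edited config
    # XDG_STATE_HOME  -> ~/.local/state   logs, history and other mutable state
    # XDG_CACHE_HOME  -> ~/.cache         always safe to delete; never worth backing up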
However this "exemption" does not and should not apply to anything newer. Things like Cargo, Snap, Steam, Jupyter, Ghidra, Gradle, none of those should be putting their stuff (especially temporary junk) directly and unsegmented into $HOME.
At some point I had more than 50 different dotfiles and dotfolders in my $HOME. It was unwieldy and nasty to look at. I couldn't even figure out what created some of those files because they were so generic.
Plain $HOME as the dumping ground simply does not scale beyond a select few.
The reason most software is not "XDG-compliant" is because most software predates the XDG basedir spec which only came into existence in 2021 (edit: oops, that's just version 0.8; version 0.6 was available in 2003)
It will be nice for software, as it updates, to support this standard which seems to be gaining adoption, and it will make users homedirs much cleaner. But it's most important for software to _keep working_, and have a migration path that doesn't lose the user's config or end up with two configs and not have a clear rule on which one it will use.
There's a lot less to migrate if you don't wait that long.
Firefox excels in terms of multi-tab use and memory usage, and I have yet to encounter a rendering issue in the past 12 months.
Source: https://hg-edge.mozilla.org/integration/autoland/rev/8a6d6c0...