Mandos works with initramfs images created by both initramfs-tools and Dracut, and is present in Debian since 2011, so no need to use a third-party package.
The kind of tool that I'm waiting for would allow me to log into a machine remotely, then reboot it once into a provided image (which could then run Linux in a ramdisk). I.e., take over the entire machine until it is restarted again. Does something like this exist?
I want something like "kexec" but taking over the entire hardware at the lowest possible level.
Despite the “tang/clevis” system being too clever by half (and even more complex), it is essentially very similar. Mandos, though, is much simpler; you can actually understand it, since it uses standard components and protocols like DNS-SD¹, OpenPGP and TLS. (Mandos is documented in man pages, in contrast to the page you linked, which is both huge and – for me at least – keeps reloading itself and replacing the entire page with a 500 server error.)
Mandos is also for Debian (and all Debian-based systems), and not Red Hat (although there is nothing preventing a port, since Mandos now also supports initramfs images created by Dracut).
Finally, Mandos was initially created in 2007, many years before tang/clevis, and literally by a person in a dorm room, not by whatever academics/scientists seem to have dreamed up the cryptography/protocol tangle that is tang/clevis. Mandos has changed some since then, from initially using broadcast UDP and X.509 certificates to using DNS-SD and raw public keys², but has otherwise remained very similar to its initial design.
I think FOSDEM had a talk a few years ago about that solution and their tang server. It is very similar in concept. They use the McCallum-Relyea exchange, and we use PGP encryption, but the basics are the same in that you need information stored at the server and information stored in the unencrypted initramfs to be combined and turned into the encryption key. If my understanding of the McCallum-Relyea exchange is correct, they also combine the key creation and perfect forward secrecy of the transaction into a single protocol, while we use TLS and client keys for the perfect forward secrecy aspect and PGP for data at rest.
With tang you need to verify the hash of the tang server in order to prevent a MITM threat from extracting the server-side part during the key creation process, while with Mandos you configure it manually with configuration files.
Tang uses REST and JSON. Mandos sends a single protocol number and then sends the data. Both operate primarily over the local LAN, though the Mandos client also supports, as an option, using globally reachable IP addresses.
Outside of those design differences, there are some packaging aspects. Tang was designed for Red Hat/Dracut, while Mandos was designed for Debian and initramfs-tools (today Mandos also supports Dracut, but it is not packaged for Red Hat). Red Hat packaging has been requested multiple times for Mandos, but neither of us two developers is a Red Hat user.
To add some historical context, I recall a DebConf BoF by the developers of initramfs-tools about whether they were going to continue developing initramfs-tools or give up and port everything to Dracut, and the silence was fairly deafening. People did not want to give up on initramfs-tools, but everyone recognized the massive duplication that those two projects represent. Similarly, why Red Hat initially chose to develop Dracut rather than just port initramfs-tools is also a fairly big mystery, and is generally considered a case of Not-Invented-Here syndrome. Today, however, there are some distinct differences in design between the two systems.
clevis and tang do currently work seamlessly on Debian and Ubuntu using initramfs-tools. So while the initramfs-tools/dracut discussion is valid, it seems mostly orthogonal to this topic.
I was unaware that they no longer depend on Dracut and now support initramfs-tools, which also seems to be the earliest clevis version that got packaged in Debian. That makes the initramfs-tools/dracut distinction a historical aspect of the project.
Why is this needed at all? As the decrypted key is in memory before the reboot, can’t it just be written to a known location in memory and have kexec be instructed to read it early on?
> As the decrypted key is in memory before the reboot, can’t it just be written to a known location in memory and have kexec be instructed to read it early on?
I set up what you are suggesting (sort of, anyway[1]) on a personal VPS to reboot after updates that require one. I just generate an initrd in tmpfs that contains a keyfile[2] and kexec with that ephemeral initrd; the newest kernel can be found by looking at what the /boot/vmlinuz symlink points to. I've been running this for years. It is 100% reliable, and simple. And, for the purposes of this box, secure enough.
For remote unlocks from initial power on, Debian has had that since forever using keyscripts and dropbear in the initrd.
[1] You could pull the key from memory, and use that to unlock the disk from within the generated initrd, but it would be more work than just setting up a keyfile in advance. It was my first thought as well.
[2] Easiest way was to use a mount namespace with a different crypttab file that points to the keyfile, since you cannot specify the crypttab location when creating the initrd. E.g.,
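A rough sketch of the idea (the paths under /root/unlock/ are made up for illustration, not the poster's exact commands, and this assumes Debian's mkinitramfs and kexec-tools):

```shell
# Hypothetical sketch of the mount-namespace crypttab trick described above.
build_ephemeral_initrd() {
  # In a private mount namespace, bind a crypttab that references the
  # keyfile over /etc/crypttab, then build an initrd that picks it up.
  # The real /etc/crypttab is untouched outside the namespace.
  unshare --mount sh -c '
    mount --bind /root/unlock/crypttab.keyfile /etc/crypttab &&
    mkinitramfs -o /run/unlock-initrd.img "$(uname -r)"
  '
}

kexec_with_key() {
  # Load the newest kernel (found via the /boot/vmlinuz symlink) with the
  # ephemeral initrd, then jump into it.
  kexec -l "$(readlink -f /boot/vmlinuz)" \
        --initrd=/run/unlock-initrd.img --reuse-cmdline
  kexec -e
}
```

Because the bind mount only exists inside the private mount namespace, nothing on the running system changes; the ephemeral initrd (and the keyfile in it) lives in tmpfs and disappears on the next power cycle.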
Oh for sure something is needed for a full start from zero. But the much more common case for a computer with backup power is regular restarts after applying patches that require a reboot. Would be much more pleasant for that to work out of the box with no manual interaction at all.
Good FAQ, clearly stating the weak point of physical access. For a server, that threat model can work; for a fleet of edge/IoT devices in unsecured locations without permanent uptime, there is no real solution to be expected without custom silicon logic (like in smartcards) on the SoC.
That only works with RAID 1. If the server uses RAID 5 or RAID 6, this won’t work.
> extract what you need
Well, yes. This is addressed in the FAQ.
> or change the image.
> Then you turn off the server, and just start a vm with the captured init and capture the key.
Well, as explained in the FAQ, an attacker will have to do so quickly, before the Mandos server decides that the Mandos client has been offline for too long, and disables that client. The default value is five minutes, but is configurable per client.
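For illustration, that per-client timeout lives in the Mandos server's clients.conf. The option names and the ISO 8601 duration syntax below are from memory of mandos-clients.conf(5) (older versions used forms like "5m"), so treat this fragment as illustrative and verify against the man page:

```ini
; /etc/mandos/clients.conf (illustrative)
[webserver]
; OpenPGP key fingerprint identifying this client (shortened here)
fingerprint = 7788 2722 7588 ...
; file holding the encrypted password to send to this client
secfile = /etc/mandos/webserver.secret
; disable this client if it has been offline longer than this
; (five minutes is the default)
timeout = PT5M
```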
Ok, that is assuming /boot is ON the RAID, which I wouldn't want to estimate the probability of.
But even if it is, you could just pull one drive after the other and wait for the resilver before pulling the next one (you will hear it if it resilvers automatically).
This doesn't work with secure boot and UKIs, since the entire "pre-rootfs switch" is signed in a single binary. If that is your threat model, that is the least you should have.
Can't I just extract the key from UEFI/TPM in this case?
Not that it's easy, but with the right tools you can do it offline with all the time in the world.
The whole architecture of Mandos is very plugin-based, so it would likely not be hard to add. But I am not sure what you are asking for. The Mandos server will, by default, unlock all clients without asking. There is support for not unlocking immediately and instead waiting for external approval for clients explicitly configured that way, but what is the scenario? Is the client the one supplying the passkeys/webauthn? Or are you providing that manually on a web page somewhere? The latter is possible; the web page server process would then, when a passkey/webauthn has been verified, send a D-Bus “Approve” message to the Mandos server process, which would then send the client its password.
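As a sketch, that approval could be sent either with the bundled mandos-ctl tool or directly over D-Bus. The client name here is hypothetical and the D-Bus bus/interface names are from memory, so check mandos-ctl(8) before relying on them:

```shell
# Approve a waiting Mandos client by name, e.g.: approve_client webserver
approve_client() {
  # Preferred: the command-line tool shipped with the mandos server package.
  mandos-ctl --approve "$1" ||
    # Fallback sketch: roughly the same thing over D-Bus, as a web
    # backend might do it after verifying a passkey.
    busctl call se.recompile.Mandos "/clients/$1" \
           se.recompile.Mandos.Client Approve b true
}
```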
I have a very similar setup to the author, but instead of running Tailscale in my initramfs, I have a Raspberry Pi sitting next to the home server (which is on my Tailscale network) and I use it like a bastion host. Process is something like:
1. SSH into the Pi
2. Issue the Wake-on-LAN packet to boot the server
3. Tunnel that server's SSH port to my laptop
4. SSH into the initramfs SSH server (I use TinySSH), enter the decryption key
5. Wait for server to come up, then access via Tailscale
This is more complicated than the author's setup in that it requires another device (the Pi), but it's simpler in that you don't need to have the initramfs on your Tailnet.
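The five steps can be condensed into a small script; the hostname, MAC address, LAN IP, and forwarded port below are all placeholders:

```shell
# Sketch of the Pi-as-bastion unlock sequence described above.
unlock_via_pi() {
  # 1+2: SSH into the Pi and wake the server
  ssh pi.tailnet 'wakeonlan AA:BB:CC:DD:EE:FF'
  # 3: tunnel the server's SSH port to the laptop in the background
  ssh -f -N -L 2222:192.168.1.10:22 pi.tailnet
  # 4: talk to tinysshd in the initramfs and enter the LUKS passphrase
  ssh -p 2222 root@localhost
  # 5: once the real system is up, connect over Tailscale as usual
}
```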
It's not only more complicated, it also does not sound to me like it would scale. What do you do when you have N servers? Do you buy N raspis, or do you keep using one bastion host? How do you automate it when you sooner or later must (re-)deploy?
If you set this up once ("this" meaning adding networking, SSH and tailscale inside initramfs), you can just do the same thing for the next server you set up, and you don't have to worry about the failure of one node affecting the other(s).
The approach I've outlined scales fine to N servers, it just doesn't work if they're on different networks.
But scaling also isn't really a parameter I (or the author) am optimizing for: we have a single beefy server we do all our work on, and a thin laptop client we want to access the server from, remotely and booting an encrypted root partition.
I don't necessarily understand the deployment question. If it's about the Raspberry Pi, I just do my updates when I don't need to use it to boot the server.
Most Linux distros are not Arch either. It would be nice to have more support for this use case in general - like something one can configure easily during the initial OS setup.
I use OpenSuse so I had to use the guide for Fedora, but there were some differences as far as I remember.
I have a setup based on this, but I modified it to encrypt the SSH host key using the TPM. That way, I can detect a MiTM from an attacker who has stolen the drive or modified the boot policy because host key verification will fail.
That encrypts the SSH host key using a password sealed with PCR7, which is invalidated if an attacker disables Secure Boot or tampers with the enrolled keys. Thus, an attacker can't extract the key from the drive, nor get at it by modifying the kernel command line to boot to a shell (since that's not allowed without disabling Secure Boot).
It's still probably vulnerable to a cold boot attack, since the key is decrypted CPU-side. It would be interesting to perform the actual key operations on the TPM itself to prevent this.
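A hand-rolled sketch of that kind of sealing with tpm2-tools (file names are arbitrary, and the real setup presumably wires this into a dracut module rather than running it by hand):

```shell
# Seal a random password against the current PCR 7 value (Secure Boot state)
# and use it to encrypt the SSH host key at rest.
seal_hostkey_password() {
  # Build a policy bound to PCR 7.
  tpm2_startauthsession -S trial.ctx
  tpm2_policypcr -S trial.ctx -l sha256:7 -L pcr7.policy
  tpm2_flushcontext trial.ctx

  # Seal a random password under that policy...
  head -c 32 /dev/urandom > hostkey.pass
  tpm2_createprimary -C o -c primary.ctx
  tpm2_create -C primary.ctx -i hostkey.pass -L pcr7.policy \
              -u seal.pub -r seal.priv

  # ...and encrypt the SSH host key with the sealed password.
  openssl enc -aes-256-cbc -pbkdf2 -pass file:hostkey.pass \
          -in ssh_host_ed25519_key -out ssh_host_ed25519_key.enc
  shred -u hostkey.pass
}
```

At boot, the unseal (tpm2_load plus tpm2_unseal with a PCR policy session) only succeeds while PCR 7 still matches, i.e. while Secure Boot and the enrolled keys are untouched.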
A long time ago, I built my own crashcart adapter with a raspberry pi and a teensy to do something similar. I would sometimes get weird mdadm errors that would hang the boot process and other times, a reboot or power loss wouldn't actually cause the PC to boot back up. The teensy did USB HID emulation for keyboard inputs. I added the ability to push the power button with a fet and some resistors. I had a cheap VGA to composite adapter going into a USB composite capture device so I could at least get screenshots for any weird boot messages. I built a small webpage using flask to display the screenshot, allow for text input, control inputs, and to push the power button. It was a lot of fun building but a basement flood completely wrecked it. Server was sitting on a 6in platform but the crashcart had fallen off the top of the case and was laying on the ground. Oops.
Glad to see another example of this! Remote unlocking of your personal server's encrypted hard drive is a PITA.
Other options that I've investigated that involve having a second server:
* A second server with Tang, and Clevis in the initramfs OS
* Keylime
Putting tailscale in the initramfs, and then updating the certs on a frequent enough schedule, seems risky to me. I've already played around with limine enough that I know I don't want to install much in the initramfs...
TPM is probably the best solution here. The key can be automatically fetched on reboot unless the boot order is changed or the drive is put in another computer.
Realistically for a home server what you are worried about is someone breaking in and selling your drives on Facebook marketplace rather than the FBI raiding your nextcloud server. So TPM automated unlock is perfectly sufficient.
> Realistically for a home server what you are worried about is someone breaking in and selling your drives on Facebook marketplace
If someone steals the entire machine, the drives will unlock themselves automatically. I don't think it's worth the risk to assume a hypothetical thief is too lazy to check if there's any valuable data on the disks. At the very least, they'll probably check for crypto wallets.
With something like Clevis and Tang, you can set it up so it only auto unlocks while connected to your home network, or do something more complex as needed
The hope with the TPM is that the system boots to a standard login screen, and the thief doesn't know any user's password. Much like someone snatching a laptop that's in 'suspend' mode.
Of course, a thief could try to bypass the login screen by e.g. booting with a different kernel command line, or a different initramfs. If you want to avoid this vulnerability, TPM unlock can be configured as a very fragile house of cards - the tiniest change and it falls down. The jargon for this is "binding to PCRs"
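With systemd, that binding is one command per PCR set; the device path here is an example:

```shell
# Enroll a TPM2-bound key in a LUKS2 volume.
bind_luks_to_tpm() {
  # Bind to PCR 7 only (Secure Boot state): survives kernel updates,
  # breaks if Secure Boot is disabled or the enrolled keys change.
  systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p2

  # The "house of cards" variant: also bind to firmware and boot-path
  # measurements, so the tiniest change demands the recovery key.
  # systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=0+2+4+7 /dev/nvme0n1p2
}
```

The matching crypttab entry then needs `tpm2-device=auto` in its options field so systemd tries the TPM before prompting.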
TPM is good when combined with Secure Boot, with these hashes being part of the attestation; that eliminates initramfs swapping.
Still, with physical access being a factor, bus-tapping can happen; an fTPM – if available – is much harder to crack than a discrete module.
The fallback is you have to manually unlock the drive, the same as you did without a TPM. But the benefit is while things remain unchanged, the system can reboot itself.
You can reduce the frequency with which things change by adding an additional layer before the "real" kernel is loaded. A minimal image that does nothing but unlock any relevant secrets, verify the signature of the next image, and then hands off control.
They will unlock into a password-protected system. Unless the junkie who stole your server has an unpatched Debian login bug, this won't be much use to them. If they remove the drive or attempt to boot off a USB, the drive is unreadable.
What's the difference when booting off a USB drive? That's been my go-to in the past when I forgot my login password; does the TPM only unlock boot devices?
Generally you'll have your drive only unlock against certain PCRs and their values. It depends on which PCRs you select and then how exactly they are measured.
E.g. systemd measures basically everything that is part of the boot process (kernel, kernel cli, initrd, ...[1]) into different PCRs, so if any of those are different they result in different PCR values and won't unlock the boot device (depending on which PCRs you decided to encrypt against). I forgot what exactly it measures, but I remember that some PCRs also get measured during the switch_root operation from initrd -> rootfs, which can be used to make something only unlock in the initrd.
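To see what actually got measured on a given machine, read-only inspection is possible; the PCR selection below is just an example, and `systemd-analyze pcrs` needs a fairly recent systemd:

```shell
# Inspect PCR state on a machine with a TPM2.
show_pcrs() {
  # Raw PCR values via tpm2-tools
  tpm2_pcrread sha256:0,4,7,11
  # Newer systemd can also list its own measurements with labels
  systemd-analyze pcrs
}
```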
The TPM holds the decryption keys and will unlock as long as all checks pass. Booting off the previously registered drive/kernel being one of them.
If this fails you can always manually input the decryption key and reregister with the TPM. The whole point of this setup is you can't just use a bootable USB to reset the devices password.
If properly configured and the TPM implementation is good, no it shouldn't unlock the drive. Changing boot devices, and depending on how configured even changing boot options, can prevent the TPM from releasing the key and require a recovery key.
FYI your decryption key can be MITMed during this process by anyone with physical access to the system, which defeats the purpose of encrypting the disk in the first place.
dm-verity only verifies block integrity and the boot chain and does not provide confidentiality or a secure remote key exchange, so someone with physical access can still MITM or tamper with the initramfs and capture a LUKS passphrase during a network unlock.
If confidentiality during remote unlock matters, seal the LUKS key in TPM2 tied to PCR values using systemd-cryptenroll or use Clevis with Tang over TLS with strict server certificate pinning, accept the operational cost of re-sealing keys after kernel or firmware updates, and keep an offline recovery key because trusting the local console is asking for trouble.
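For the Clevis/Tang route mentioned there, the binding step looks roughly like this (server URL and device are placeholders); pinning the server's advertisement is what prevents a MITM from substituting its own key:

```shell
# Bind a LUKS volume to a Tang server with a pinned advertisement.
bind_with_tang() {
  # Fetch and pin the Tang server's advertised signing key...
  curl -sf http://tang.example.lan/adv -o adv.jws
  # ...then bind against that pinned advertisement, rather than
  # trusting whatever the network answers at boot time.
  clevis luks bind -d /dev/nvme0n1p2 tang \
         '{"url": "http://tang.example.lan", "adv": "adv.jws"}'
}
```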
Police show up and arrest you. Could be with reason, could be by accident. Maybe you did something wrong, maybe you didn’t. They also physically seize your servers, and in doing so they unplug the system.
If you have disk encryption, your data now requires the police to force you to produce a password, which may or may not be within their powers, depending on the jurisdiction.
It’s strictly better to have full disk encryption and remote unlocking than no disk encryption at all, because it prevents such „system was switched off by accident“ attacks.
They have kits that allow them to unplug the server from the wall without interrupting power supply, specifically so they don't lose the decryption keys.
Sure, but in reality I'm more interested in not letting some low-paid tech dude in the DC access my data just because he can pull a drive. Or someone who buys the server from the provider.
Maybe I have a locked cabinet at home, with vibration sensors, that houses a server or two, and they all use full disk encryption, but I still want to be able to reboot them without having to connect a physical keyboard to them. So no one has physical access, not even me, but I still want to be able to reboot them.
Or countless of other scenarios where it could be useful to be able to remotely unlock FDE.
That's not a counter-argument. You are protecting the physical access, and your threat model doesn't include someone willing to bypass your locks and sensors. (or it does and you just didn't go into those details.)
The argument was that physical access gives up the FDE key.
When rebooting a FileVault encrypted machine, where it normally "hangs" asking for a user to unlock it, you can now SSH into the machine, but instead of getting a prompt it interprets your SSH login as a user logging in, hangs up, and proceeds to boot up.
Huuuh, is this true? Where are the Apple docs for this? I recently tried to set up a headless macOS machine and all my searching led me to either having to do autologin or disabling FileVault fully.
The "bad news" about Tahoe is largely overblown unless you hang out in the control center (or whatever they call that notification area) all day, which is the only place you'll actually notice Liquid Glass on a Mac.
I'd love to see this in the bootloader, along with a selection of binaries useful for recovery. Might sound silly but over the years I have had many a remote system get to the bootloader and then no further after an upgrade. Nowadays we've usually got a nicely sized EFI partition, why not stuff it all in there? Gimme a full Linux userspace from the bootloader, it would feel luxurious when I'm up at 3 am trying to recover a broken system halfway across the country.
Or is there already a solution to this that I've been missing? (Yeah, KVM/IPMI/etc, I know, but not all hosters make it easy to get to that.)
In new installs you do stuff everything into the EFI partition and skip the old /boot partition as such.
The better solution is to use a TPM, a unified kernel image, and secure boot, skipping the network unlock.
The whole process is like this -
1. enable secure boot;
2. generate and install your own secure boot keys (using sbctl);
3. use clevis to enable automatic unlocking of the root fs only when secure boot check passes;
4. generate the unified kernel image (in EFI partition) that is signed by your secure boot key;
5. use efibootmgr to enable booting of said kernel image.
(6.) If your CPU supports it, enable memory encryption in BIOS (to mitigate cold boot attacks).
The unified kernel image doesn't accept additional kernel parameters, so only parameters that were set during generation of the initramfs are used. Secure Boot makes sure no one else has tampered with the boot chain. And the TPM stores the disk key securely.
You can still add some additional network level check to make sure that your computer is in your expected location before unlocking.
And you can also include some recovery tools + dropbear in your initramfs (within the unified kernel image), if you expect that you will have to do some recovery from the other side of the world.
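Condensed into commands, the steps above might look like this (device paths, file names, and the kernel command line are examples; verify the flags against the sbctl, ukify, and efibootmgr documentation):

```shell
# Sketch of the Secure Boot + UKI setup described above.
setup_signed_uki() {
  # steps 1-2: generate and enroll your own Secure Boot keys
  sbctl create-keys
  sbctl enroll-keys --microsoft   # keep vendor keys if the firmware needs them

  # step 4: build a unified kernel image with a baked-in command line
  ukify build --linux=/boot/vmlinuz \
        --initrd=/boot/initrd.img \
        --cmdline='root=/dev/mapper/root rw' \
        --output=/efi/EFI/Linux/linux.efi
  sbctl sign /efi/EFI/Linux/linux.efi

  # step 5: register the image with the firmware
  efibootmgr --create --disk /dev/nvme0n1 --part 1 \
             --label 'Signed UKI' --loader '\EFI\Linux\linux.efi'
}
```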
The solution is "don't apply untested upgrades to critical servers at 3am" :)
If you must do such upgrades, solutions include hot standby hardware, IPMI, an on-site tech with a screen and keyboard, or moving everything to the cloud.
Sounds like you want ZFSBootMenu.org, which offers remote SSH access with FDE in addition to snapshots in case of update failures or other issues. As long as you don't format the disk itself or wipe the ZFSBootMenu efi file, you can recover and revert from anything remotely.
> Because initramfs is just a (mostly) normal Linux system, that means it has its own init PID 1. On Arch, that PID is in fact just systemd.
Debian has (or had; at least my Devuan still has) a simple shell script as first init. Was an interesting read and helped me understand where to add my remote rootfs decryption.
The `base` hook installs the shell PID 1, the `systemd` hook installs systemd as PID 1. The default hook setup was changed with the latest'ish release to default to the `systemd` hook setup.
Aside from enrolling a token with the TPM to unlock the LUKS volume, this is actually a pretty novel idea. Perfect for older hardware without TPM. I guess it depends on your use-case.
I have something similar set up to unlock the drives on my home server. Just the SSH in initramfs though, tailscale is pretty cool.
I've done stuff with mkinitcpio / initramfs on arch before, can't remember exactly what for. I still run arch on my main laptop. I'm running nixos on my home server though, and adding something like this is so easy by comparison.
There is an old but still reasonable solution with mkinitcpio hooks encrypt/sd-encrypt + ssh, which is very easy to set up with EFI or grub2 onward. Tailscale is probably overkill for this use case, given that you're already exposing pre-/early- boot to the network by setting up interfaces that early. This became much more hermetic with secureboot and TPMs, too.
TPM definitely raises the effort to break it by a lot. But by default the communication with it is not encrypted, so especially for modules not built into the CPU, wire/bus-tapping is a thing.
I currently have dropbear-ssh presenting the LUKS password prompt on my home server, but that has the very annoying quality that there's no way to do it from the console if you set that up too.
It's not a huge problem but it certainly means some recovery scenarios would be painful.
I'm vaguely reminded of some of the third party disk encryption/preboot management utilities that exist in the Windows space that leverage similar technology. Authentication is done against an online source, and only then is the key sent back to the local machine to unlock the disk. The Bitlocker key is kept nowhere near the local TPM.
I've only seen it on some paranoid-level devices in industry (typically devices handling biometric identity verification services).
IIRC this one is a Linux image that boots up, unlocks the normal Bitlocker partition via whatever mechanism you need, then hands control back to the Windows bootloader to continue onwards.
An equivalent, but simpler, solution would be to use a network-based KVM, like PiKVM. You connect a USB connector to the PiKVM so it can simulate a keyboard (and mouse), an HDMI connector so it can show you what’s on the server screen, and you also connect a special cable to the server motherboard power and reset pins, so the PiKVM can “press” those buttons remotely as well.
Edit, found this:
https://github.com/marcan/takeover.sh
But it's not as low level as I hoped, though it keeps the network running which is nice :)
1. <https://www.dns-sd.org/>
2. <https://www.rfc-editor.org/rfc/rfc7250>
(mkinitramfs is usually wrapped by update-initramfs, but calling it directly allows specifying a location.) Also, most distros don't support using kexec for kernel upgrades anyway.
Then you turn off the server, and just start a vm with the captured init and capture the key.
Now you can decrypt the server offline with all the time in the world.
5 minutes is plenty to boot an initrd in a VM... what's that gonna take? 10 seconds?
With RAID 5, pulling one of three disks gives you only partial data, and pulling two of three removes too much for the system to keep running.
https://github.com/gsauthof/dracut-sshd
Dracut is available on: Fedora, RHEL, CentOS, Rocky, Alma, Arch, and Gentoo, as well as Debian and Ubuntu. That covers most common Linux distros.

Personally, I'm using this on Fedora.
The relevant Dracut module lives in /usr/lib/dracut/modules.d/46cryptssh, with a cryptsshd.service unit.
Other options that I've investigated that involve having a second server:
* A second server with Tang, and Clevis in the initramfs OS
* Keylime
Putting tailscale in the initramfs, and then updating the certs on a frequent enough schedule, seems risky to me. I've already played around with limine enough that I know I don't want to install much in the initramfs...
Realistically for a home server what you are worried about is someone breaking in and selling your drives on Facebook marketplace rather than the FBI raiding your nextcloud server. So TPM automated unlock is perfectly sufficient.
If someone steals the entire machine, the drives will unlock themselves automatically. I don't think it's worth the risk to assume a hypothetical thief is too lazy to check if there's any valuable data on the disks. At the very least, they'll probably check for crypto wallets.
With something like Clevis and Tang, you can set it up so it only auto unlocks while connected to your home network, or do something more complex as needed
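A minimal Clevis/Tang binding looks like this (device path and Tang URL are hypothetical, for illustration):

```shell
# Bind a LUKS volume to a Tang server on the home LAN. At boot, the
# clevis initramfs hook contacts the server and unlocks the volume;
# off the home network it falls back to the passphrase prompt.
clevis luks bind -d /dev/nvme0n1p2 tang '{"url": "http://tang.home.lan"}'
```

A thief who takes the whole machine but not the Tang server gets drives that no longer auto-unlock.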
Of course, a thief could try to bypass the login screen by e.g. booting with a different kernel command line, or a different initramfs. If you want to avoid this vulnerability, TPM unlock can be configured as a very fragile house of cards - the tiniest change and it falls down. The jargon for this is "binding to PCRs".
https://news.ycombinator.com/item?id=46676919
Having key off-machine mitigates a lot of that.
> Unless the junkie who stole your server has an unpatched debian login bug,
the key for disk decryption is in memory at that point, and there are ways to extract it.
E.g. systemd measures basically everything that is part of the boot process (kernel, kernel cli, initrd, ...[1]) into different PCRs, so if any of those are different they result in different PCR values and won't unlock the boot device (depending on which PCRs you decided to encrypt against). I forgot what exactly it measures, but I remember that some PCRs also get measured during the switch_root operation from initrd -> rootfs, which can be used to make something only unlock in the initrd.
[1]: https://systemd.io/TPM2_PCR_MEASUREMENTS/
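The PCR-extend mechanism described above can be illustrated in a few lines of Python (a simulation of the semantics, not a call into a real TPM):

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM PCR extend: new value = SHA-256(old value || SHA-256(measurement))."""
    digest = hashlib.sha256(measurement).digest()
    return hashlib.sha256(pcr + digest).digest()

# A SHA-256 PCR bank starts at 32 zero bytes.
ZERO = b"\x00" * 32

def measure_boot(components):
    # Each extend folds the previous value in, so both the content
    # and the order of measurements determine the final PCR value.
    pcr = ZERO
    for c in components:
        pcr = extend(pcr, c)
    return pcr

good = measure_boot([b"kernel", b"cmdline", b"initrd"])

# Tamper with any one component and the final value differs, so a key
# sealed against `good` will not unseal.
evil = measure_boot([b"kernel", b"evil cmdline", b"initrd"])
assert evil != good

# Replaying the identical boot chain reproduces the value exactly.
assert measure_boot([b"kernel", b"cmdline", b"initrd"]) == good
```

Sealing "against PCRs" just means the TPM refuses to release the key unless the current PCR values match the ones recorded at enrollment time.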
If this fails, you can always manually input the decryption key and re-enroll with the TPM. The whole point of this setup is that you can't just use a bootable USB to reset the device's password.
You'll be dropped into "enter disk crypt password please" prompt.
Just use dm-verity for remote servers.
If confidentiality during remote unlock matters, seal the LUKS key in TPM2 tied to PCR values using systemd-cryptenroll or use Clevis with Tang over TLS with strict server certificate pinning, accept the operational cost of re-sealing keys after kernel or firmware updates, and keep an offline recovery key because trusting the local console is asking for trouble.
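A sketch of the systemd-cryptenroll part of that advice (device path is hypothetical):

```shell
DEV=/dev/nvme0n1p2

# Seal a LUKS key slot in the TPM, bound to Secure Boot state (PCR 7).
# Kernel or firmware updates that change the measured state require
# re-enrolling - that's the operational cost mentioned above.
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 "$DEV"

# Keep an offline recovery key for when the TPM policy breaks.
systemd-cryptenroll --recovery-key "$DEV"
```

Both commands prompt for an existing passphrase before adding the new key slots.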
If you have disk encryption, your data now requires the police to force you to produce a password, which may or may not be within their powers, depending on the jurisdiction.
It’s strictly better to have full disk encryption and remote unlocking than no disk encryption at all, because it prevents such "system was switched off by accident" attacks.
They have kits that allow them to unplug the server from the wall without interrupting power supply, specifically so they don't lose the decryption keys.
Maybe I have a server at home, with a locked cabinet and vibration sensors, that houses a server or two and they all use full disk encryption, but I still want to be able to reboot them without having to connect a physical keyboard to them. So no one has physical access, not even me, but I still want to be able to reboot them.
Or countless of other scenarios where it could be useful to be able to remotely unlock FDE.
The argument was that physical access gives up the FDE key.
When rebooting a FileVault encrypted machine, where it normally "hangs" asking for a user to unlock it, you can now SSH into the machine, but instead of getting a prompt it interprets your SSH login as a user logging in, hangs up, and proceeds to booting up.
The bad news: `The capability to unlock the data volume over SSH appeared in macOS 26 Tahoe.`
Or is there already a solution to this that I've been missing? (Yeah, KVM/IPMI/etc, I know, but not all hosters make it easy to get to that.)
The better solution is to use the TPM, a unified kernel image, and Secure Boot, skipping the network unlock entirely.
The whole process is like this -
1. enable secure boot;
2. generate and install your own secure boot keys (using sbctl);
3. use clevis to enable automatic unlocking of the root fs only when secure boot check passes;
4. generate the unified kernel image (in the EFI partition), signed by your secure boot key;
5. use efibootmgr to enable booting of said kernel image.
(6.) If your CPU supports it, enable memory encryption in the BIOS (to mitigate cold boot attacks).
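The steps above can be sketched roughly like this on an Arch-style system (device paths, kernel paths, and labels are illustrative; package names vary by distro):

```shell
# Generate and enroll your own Secure Boot keys
# (--microsoft keeps Microsoft's keys enrolled for option ROMs).
sbctl create-keys
sbctl enroll-keys --microsoft

# Bind the LUKS root to the TPM, gated on Secure Boot state (PCR 7).
clevis luks bind -d /dev/nvme0n1p2 tpm2 '{"pcr_bank":"sha256","pcr_ids":"7"}'

# Build a unified kernel image with the command line baked in,
# then sign it with the enrolled key.
ukify build --linux=/boot/vmlinuz-linux \
    --initrd=/boot/initramfs-linux.img \
    --cmdline="root=/dev/mapper/root rw" \
    --output=/efi/EFI/Linux/linux.efi
sbctl sign /efi/EFI/Linux/linux.efi

# Register the UKI with the firmware boot manager.
efibootmgr --create --disk /dev/nvme0n1 --part 1 \
    --loader '\EFI\Linux\linux.efi' --label "Linux UKI"
```

Since the command line is embedded in the signed image, tampering with boot parameters invalidates the signature and the boot fails closed.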
The unified kernel image doesn't accept additional kernel parameters, so only parameters that were set during generation of the initramfs are used. Secure Boot makes sure no one else has tampered with the boot chain. And the TPM stores the disk key securely.
You can still add some additional network level check to make sure that your computer is in your expected location before unlocking.
And you can also include some recovery tools + dropbear in your initramfs (within the unified kernel image), if you expect that you will have to do some recovery from the other side of the world.
If you must do such upgrades, solutions include hot standby hardware, IPMI, an on-site tech with a screen and keyboard, or moving everything to the cloud.
Debian has (or had; at least my Devuan still has) a simple shell script as first init. It was an interesting read and helped me understand where to add my remote rootfs decryption.
https://salsa.debian.org/kernel-team/initramfs-tools/-/blob/...
The `base` hook installs the shell as PID 1, the `systemd` hook installs systemd as PID 1. The default was changed with the latest-ish release to the `systemd` hook setup.
Shell `init`; https://gitlab.archlinux.org/archlinux/mkinitcpio/mkinitcpio...
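A minimal sketch of what such a shell `init` does (real initramfs-tools/mkinitcpio inits handle udev, hooks, and error recovery on top of this; device names here are hypothetical):

```shell
#!/bin/sh
# PID 1 inside the initramfs: mount pseudo-filesystems, unlock the
# encrypted root, then hand control to the real init.
mount -t proc     proc /proc
mount -t sysfs    sys  /sys
mount -t devtmpfs dev  /dev

# Unlock and mount the encrypted root (prompts for the passphrase;
# this is the point where a remote-unlock hook like dropbear fits in).
cryptsetup open /dev/nvme0n1p2 cryptroot
mount -o ro /dev/mapper/cryptroot /new_root

# Replace PID 1 with the real init on the real root filesystem.
exec switch_root /new_root /sbin/init
```

Seeing how little is strictly required makes it much easier to spot where a custom remote-decryption step belongs.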
Give it a go: https://aur.archlinux.org/packages/mkinitcpio-wifi
I've done stuff with mkinitcpio / initramfs on arch before, can't remember exactly what for. I still run arch on my main laptop. I'm running nixos on my home server though, and adding something like this is so easy by comparison.
https://news.ycombinator.com/item?id=46676919
I once built a demo-ish encrypted network boot system using similar initrd techniques. It's a fun hack working in the preboot environment.
It's not a huge problem but it certainly means some recovery scenarios would be painful.
I've only seen it on some paranoid-level devices in industry (typically devices handling biometric identity verification services).
IIRC this one is a Linux image that boots up, unlocks the normal Bitlocker partition via whatever mechanism you need, then hands control back to the Windows bootloader to continue onwards.
https://winmagic.com/en/products/full-disk-encryption-for-wi...
https://pikvm.org/ and similar exist.
This is maybe not as good as the article's solution, because it requires you to secure the Pi too.