Well I was wondering when the war on general computing and computer ownership would be carried into the heart of the open source ecosystems.
Sure, there are sensible things that could be done with this. But given the background of the people involved, the fact that this is yet another clear profit-first gathering makes me incredibly pessimistic.
This pessimism is made worse by reading the answers of the founders here in this thread: typical corporate talk. And most importantly: preventing the very real dangers involved is clearly not a main goal, but is instead brushed off with empty platitudes like "I've been a FOSS guy my entire adult life...." instead of describing or considering actual preventive measures. And even if the claim were true and the founders had a real love for the hacker spirit, there is obviously nothing stopping them from selling to the usual suspects and golden-parachuting out.
I really struggled not to make this just another snarky, sarcastic comment, but it is exhausting. It is exhausting to see the hatred some have for people just owning their hardware. So sorry, "don't worry, we're your friends" just doesn't cut it to come at this with a positive attitude.
The benefits are few, the potential to do a lot of harm is large. And the people involved clearly have the network and connections to make this an instrument of user-hostility.
I do sort of wonder if there’s room in my life for a small attested device. Like, I could actually see a little room for my bank to say “we don’t know what other programs are running on your device, so we can’t actually take full responsibility for transactions originated from your device,” and if I look at it from the bank’s point of view that doesn’t seem unreasonable.
Of course, we’ll see if anybody is actually engaging with this idea in good faith when it all gets rolled out. Because the bank has full end-to-end control over the device, authentication will be fully their responsibility and the (basically bullshit in the first place) excuse of “your identity was stolen” will become not-a-thing.
Obviously I would not pay for such a device (and will always have a general purpose computer that runs my own software), but if the bank or Netflix want to send me a locked down terminal to act as a portal to their services, I guess I would be fine with using it to access (just) their services.
I suggested this as a possible solution in another HN thread a while back, but along the lines of "If a bank wants me to have a secure, locked down terminal to do business with them, then they should be the ones forking it over, not commanding control of my owned personal device."
It would quickly get out of hand if every online service started to do the same though. But, if remote device attestation continues to be pushed and we continue to have less and less control and ownership over our devices, I definitely see a world where I now carry two phones. One running something like GrapheneOS, connected to my own self-hosted services, and a separate "approved" phone to interact with public and essential services as they require crap like play integrity, etc.
But at the end of the day, I still fail to see why this is even needed. Governments, banks, and other entities have been providing services over the web for decades at this point with little issue. Why are we catering to tech illiteracy (by restricting ownership) instead of promoting tech education and encouraging people both to learn and, importantly, to take responsibility for their own actions and the consequences of those actions?
"Someone fell for a scam and drained their bank account" isn't a valid reason to start locking down everyone's devices.
Why should I need a separate device? Doesn't a hardware security token suffice? I wouldn't even mind bringing my own but my bank doesn't accept them last I checked. (Do any of them?)
If the bank can't be bothered to either implement support for U2F or else clearly articulate why U2F isn't sufficient then they don't have a valid position. Anything else they say on the matter should be disregarded.
You shouldn't need a separate device, but we are quickly entering an era where a lot of banking (and other) apps will outright refuse to run or allow logins if they detect a rooted device or Play Integrity fails.
In this way, the banks are asserting control over your device. It's beyond authentication, they are saying "If you have full control over your device, you cannot access our services."
I'll agree with you that they don't have a valid position, because I can just as easily open up a web browser on said rooted device and access their services just fine via the web. But how long until services move away from web interfaces in favor of apps to assert more control?
A hardware token would not suffice. When you log in with a hardware token it will generate some sort of token or cookie for further requests. This is where malware can steal that key and use it for whatever it wants. There is a benefit in knowing there is a high chance that such a key is protected by the operating system's sandboxing technology. Without remote attestation you don't know if the sandbox is actually active or not.
On the contrary, a hardware token will suffice to thwart both phishing and MitM, which covers ~everything for all practical threat and liability models. What exactly is the concern here? A widespread worm that no one is yet aware of that's dumping people's bank accounts into crypto? It might make for a decent Hollywood plot, but is pulling that off actually easier than attacking the bank directly?
Keep in mind that the businesses pushing this stuff still don't support U2F by and large. When I can go down in person to enroll a hardware token I might maybe consider listening to what they have to say on the subject. Maybe. (But probably not.)
Citation needed. The fact that the infosec industry just keeps growing YoY kinda suggests that there are in fact issues that are more expensive than paying the security companies.
This entire shit storm is 100% driven by the music, film, and tv industries, who are desperate to eke a few more millions in profit from the latest Marvel snoozefest (or whatever), and who tried to argue with a straight face that they were owed more than triple the entire global GDP [0].
These people are the enemy. They do not care about computing freedom. They don't care about you or me at all. They only care about increasing profits, and they're using the threat of locking people out of Netflix via HDCP and TPM to force remote attestation on everyone.
I don't know what the average age on HN is, but I came up in the 90s when "fuck corporations" and "information wants to be free" still formed a large part of the zeitgeist, and it's absolutely infuriating to see people like TFA's founders actively building things that will measurably make things worse for everyone except the C-suite class. So much for "hacker spirit".
> [T]he war on general computing and computer ownership [...] It is exhausting to see the hatred some have for people just owning their hardware.
The integrity of a system being verified/verifiable doesn't imply that the owner of the system doesn't get to control it.
This sort of e2e attestation seems really useful for enterprise or public infrastructure. Like, it'd be great to know that the ATMs or transit systems in my city had this level of system integrity.
Your argument correctly points out that attestation tech can be used to restrict software freedom, but it also assumes that this company is actively pursuing those use cases. I don't think that is a given.
At the end of the day, as long as the owner of the hardware gets to control the keys, this seems like fantastic tech.
Yeah, as I am reading the landing page, the direction seems clear. It sucks, because as an individual there is not much one can do, and there is no consensus that it is a bad thing (and even if there was, how to counter it). Honestly, there are times I feel lucky to be as dumb as I am. At least I don't have the same responsibility for my output as people who create foundational tech and code.
Remote attestation only works because your CPU's secure enclave has a private key burned-in (fused) into it at the factory. It is then provisioned with a digital certificate for its public key by the manufacturer.
Every time you perform an attestation the public key (and certificate) is divulged which makes it a unique identifier, and one that can be traced to the point of sale - and when buying a used device, a point of resale as the new owner can be linked to the old one.
They make an effort to increase privacy by using intermediaries to convert the identifier to an ephemeral one, and use the ephemeral identifier as the attestation key.
This does not change the fact that if the party you are attesting to gets together with the intermediary they will unmask you. If they log the attestations and the EK->AIK conversions, the database can be used to unmask you in the future.
Also note that nothing can prevent you from forging attestations if you source a private-public key pair and a valid certificate, either by extracting them from a compromised device or with help from an insider at the factory. DRM systems tend to be separate from the remote attestation ones but the principles are virtually identical. Some pirate content producers do their deeds with compromised DRM private keys.
Which does exactly what I said. Full zero knowledge attestation isn't practical as a single compromised key would give rise to a service that would serve everyone.
The solution first adopted by the TCG (TPM specification v1.1) required a trusted third party, namely a privacy certificate authority (privacy CA). Each TPM has an embedded RSA key pair called an Endorsement Key (EK) which the privacy CA is assumed to know. In order to attest, the TPM generates a second RSA key pair called an Attestation Identity Key (AIK). It sends the public AIK, signed by the EK, to the privacy CA, who checks its validity and issues a certificate for the AIK. (For this to work, either a) the privacy CA must know the TPM's public EK a priori, or b) the TPM's manufacturer must have provided an endorsement certificate.) The host/TPM is now able to authenticate itself with respect to the certificate.
This approach permits two ways of detecting rogue TPMs: first, the privacy CA should maintain a list of TPMs identified by their EK known to be rogue and reject requests from them; second, if a privacy CA receives too many requests from a particular TPM it may reject them and blocklist the TPM's EK. The number of permitted requests should be subject to a risk-management exercise.
This solution is problematic since the privacy CA must take part in every transaction and thus must provide high availability whilst remaining secure. Furthermore, privacy requirements may be violated if the privacy CA and verifier collude. Although the latter issue can probably be resolved using blind signatures, the first remains.
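Roughly, the EK->AIK certification step looks like this (a toy sketch using plain RSA via Python's `cryptography` package; real TPMs use dedicated key structures, nonces, and PCR quotes, and all names here are illustrative):

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Endorsement Key (EK): burned in at the factory, known to the privacy CA
ek = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Attestation Identity Key (AIK): generated fresh by the TPM
aik = rsa.generate_private_key(public_exponent=65537, key_size=2048)
aik_pub = aik.public_key().public_bytes(
    serialization.Encoding.DER,
    serialization.PublicFormat.SubjectPublicKeyInfo)

# The TPM sends the public AIK, signed by the EK, to the privacy CA
sig = ek.sign(aik_pub, padding.PKCS1v15(), hashes.SHA256())

# Privacy CA: verify against the known EK (raises InvalidSignature on
# failure), check its rogue/rate-limit lists, then certify the AIK
ek.public_key().verify(sig, aik_pub, padding.PKCS1v15(), hashes.SHA256())
# ...the CA would now issue an X.509 certificate for aik_pub...
```

Note that the CA necessarily learns the EK->AIK mapping at this step, which is exactly the collusion/unmasking risk described above.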
AFAIK no one uses blind signatures. It would enable the formation of commercial attestation farms.
If I'm reading any of this correctly, this doesn't apply to hardware attestation.
It seems Apple has a service with an easily rotated key and an agreement with providers. If the key _Apple_ uses is compromised, they can rotate it.
BUT, Apple knows _EXACTLY_ who I am. I attest to them using my hardware, so they know _EXACTLY_ which hardware I'm using. They can ban me or my hardware. Their centralized service then gives me a blind token. But Apple may still know exactly who owns which blind tokens.
However, I cannot generate blind tokens on my own. I _MUST_ talk to some centralized service that can identify me. If that were not the case, then any single compromised device could generate infinite blind tokens, rendering all the tokens useless.
I don't think that a 100% anonymous attestation protocol is what most people need and want.
It would be sufficient to be able to freely choose who you trust as proxy for your attestations *and* the ability to modify that choice at any point later (i.e. there should be some interoperability). That can be your Google/Apple/Samsung ecosystem, your local government, a company operating in whatever jurisdiction you are comfortable with, etc.
But what's it attesting? Their byline "Every system starts in a verified state and stays trusted over time" should be "Every system starts in a verified state of 8,000 yet-to-be-discovered vulns and stays in that vulnerable state over time". The figure is made up but see for example https://tuxcare.com/blog/the-linux-kernel-cve-flood-continue.... So what you're attesting is that all the bugs are still present, not that the system is actually secure.
It's a privacy consideration. If you desire to juggle multiple private profiles on a single device extreme care needs to be taken to ensure that at most one profile (the one tied to your real identity) has access to either attestation or DRM. Or better yet, have both permanently disabled.
Hardware fingerprinting in general is a difficult thing to protect from - and in an active probing scenario where two apps try to determine if they are on the same device it's all but impossible. But having a tattletale chip in your CPU an API call away doesn't make the problem easier. Especially when it squawks manufacturer traceable serials.
Remote attestation requires collusion with an intermediary at least, DRM such as Widevine has no intermediaries. You expose your HWID (Widevine public key & cert) directly to the license server of which there are many and under the control of various entities (Google does need to authorize them with certificates). And this is done via API, so any app in collusion with any license server can start acquiring traceable smartphone serials.
Using Widevine for this purpose breaks Google's ToS but you would need to catch an app doing it (and also intercept the license server's certificate) and then prove it which may be all but impossible as an app doing it could just have a remote code execution "vulnerability" and request Widevine license requests in a targeted or infrequent fashion. Note that any RCE exploit in any app would also allow this with no privilege escalation.
For most individuals it usually doesn’t matter. It might matter if you have an adversary, e.g. you are a journalist crossing borders, a researcher in a sanctioned country, or an organization trying to avoid cross-tenant linkage.
Remote attestation shifts trust from user-controlled software to manufacturer‑controlled hardware identity.
It's a gun with a serial number. The Fast and Furious scandal of the Obama years was traced and proven with this kind of thing.
The scandal you cited was that guns controlled by the federal government don't have any obvious reasonable path to being owned by criminals; there isn't an obvious reason for the guns to have left the possession of the government in the first place.
There's not really an equivalent here for a computer owned by an individual because it's totally normal for someone to sell or dispose of a computer, and no one expects someone to be responsible for who else might get their hands on it at that point. If you prove a criminal owns a computer that I owned before, then what? Prosecution for failing to protect my computer from thieves, or for reselling it, or gifting it to a neighbor or family friend? Shifting the trust doesn't matter if what gets exposed isn't actually damaging in any way, and that's what the parent comment is asking about.
The first two examples you give seem to be about an unscrupulous government punishing someone for owning a computer that they consider tainted, but it honestly doesn't seem that believable that a government who would do that would require a burden of proof so high as to require cryptographic attestation to decide on something like that. I don't have a rebuttal for "an organization trying to avoid cross-tenant linkage" though because I'm not sure I even understand what it means: an example would probably be helpful.
I assume the use case here is mostly for backend infrastructure rather than consumer devices. You want to verify that a machine has booted a specific signed image before you release secrets like database keys to it. If you can't attest to the boot state remotely, you don't really know if the node is safe to process sensitive data.
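As a toy sketch of that gating logic (the "golden" measurement and the secret are made up; a real deployment verifies a signed TPM quote over PCRs plus a nonce rather than a bare string):

```python
import hashlib
import secrets

# Hypothetical known-good measurement of the signed boot image
GOLDEN = hashlib.sha256(b"signed-image-v1.2.3").hexdigest()

def release_db_key(reported_measurement: str) -> str | None:
    # Only hand out secrets to nodes that proved they booted the golden
    # image; constant-time compare avoids leaking digest prefixes
    if secrets.compare_digest(reported_measurement, GOLDEN):
        return "database-encryption-key"  # placeholder secret
    return None
```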
I'm confused. People are talking about remote attestation, which I thought was used for stuff like SGX: a system in an otherwise untrusted state loads a blob of software into an enclave and attests to that fact.
Whereas the state of the system as a whole immediately after it boots can be attested with secure boot and a TPM sealed secret. No manufacturer keys involved (at least AFAIK).
I'm not actually clear which this is. Are they doing something special for runtime integrity? How are you even supposed to confirm that a system hasn't been compromised? I thought the only realistic way to have any confidence was to reboot it.
Please don't bring attestation to common Linux distributions. This technology, by essence, moves trust to a third party distinct of the user. I don't see how it can be useful in any way to end users like most of us here. Its use by corporations has already caused too much damage and exclusion in the mobile landscape, and I don't want folks like us becoming pariahs in our own world, just because we want machines we bought to be ours...
A silver lining is that it would likely be attempted via systemd. This may finally be enough to kick off a fork, and get rid of all the silly parts of it.
To anyone thinking that's not possible: we already switched inits to systemd. And being persnickety saw MariaDB replace MySQL everywhere, LibreOffice replace OpenOffice, and so on.
All the recent pushiness from a certain zealotish Italian Debian maintainer only helps this case. Trying to degrade Debian into a clone of Red Hat is uncouth.
> A silver lining is that it would likely be attempted via systemd. This may finally be enough to kick off a fork, and get rid of all the silly parts of it.
This misunderstands why systemd succeeded. It included several design decisions aimed at easing distribution maintainers' burdens, thus making adoption attractive to the same people that would approve this adoption.
If a systemd fork differentiates on not having attestation and getting rid of an unspecified set of "all the silly parts", how would they entice distro maintainers to adopt it? Elaborating what is meant by "silly parts" would be needed to answer that question.
Attestation is a critical feature for many H/W companies (e.g. IoT, robotics), and they struggle to find security engineers with expertise in this area (disclaimer: I used to work as an operating system engineer + security engineer). Many distros are designed not only for desktop users but also for industrial use. If distros shipped standardized packages in this area, it would help those companies a lot.
This is the problem with Linux in general. It's far too infiltrated by our adversaries from the big tech industry.
Look at all the kernel patch submissions. 90% are not users but big tech drones. Look at the Linux foundation board. It's the who's who of big tech.
This is why I moved to the BSDs. Linux started as a grassroots project but turned commercial, the BSDs started commercial but are hardly still used as such and are mostly user driven now (yes there's a few exceptions like netflix, netgate, ix etc but nothing on the scale of huawei, Amazon etc)
As a complete guess, I would say that 90% of Linux systems are run by "big tech drones". And also by small companies using technology.
Open source operating systems are not a zero sum game. Yes there is a certain gravitational pull from all the work contributed by the big companies. If you aren't contributing "for-hire", then you choose what you want to work on, and what you want to use.
Linux has been majority developed by large tech companies for the last 20+ years. If not for them, it would not be anywhere close to where it is today. You may not like this fact, but it's not really a new development nor something that can be described as infiltration. At the end of the day, maintaining software without being paid to do so is not generally sustainable.
General purpose operating systems are fine and in some cases, preferable. However, they should be small, simple and designed with first class portability. Linux is none of those.
Why shouldn't they use the kernel, systemd, and a few core utilities? Why reinvent the wheel? There's nothing requiring them to pull in a typical desktop userspace.
I agree but it's difficult to argue against it. There is just so much you get for free by starting with a Linux distro as your base. Developing against alternatives is very expensive and developing something new is even more expensive. The best we can hope for is that someone with deep pockets invests in good alternatives that everyone can benefit from.
I'm not too big in this field, but didn't many of those same IoT companies and the like struggle with packages becoming dependent on Poettering's work, since they often needed much smaller/minimal distros?
I don't think this is generally true. If you are running Linux in your stack, your device probably is investing in 1GiB+ RAM and 2GiB+ of flash storage. systemd et al are not a problem at that point. Running a UI will end up being considerably more costly.
You already trust third parties, but there is no reason why that third party can't be the very same entity publishing the distribution. The role corporations play in attestation for the devices you speak of can be displaced by an open source developer, it doesn't need to require a paid certificate, just a trusted one. Furthermore, attestation should be optional at the hardware level, allowing you to build distros that don't use it, however distros by default should use it, as they see fit of course.
I think what people are frustrated with is the heavy-handedness of the approach, the lack of opt-out, and the corporate-centric feel of it all. My suggestion would be not to take the systemd approach. There is no reason why attestation-related features can't be turned on or off at install time, much like disk encryption. I find it unfortunate that even something like secure boot isn't configurable at install time, with custom certs, distro certs, or certs generated at install time.
Being against a feature that benefits regular users is not good, it is more constructive to talk about what the FOSS way of implementing a feature might be. Just because Google and Apple did it a certain way, it doesn't mean that's the only way of doing it.
Whoever uses this seeks to ensure a certain kind of behavior on a machine they typically don't own (in the legal sense of it). So of course you can make it optional. But then software that depends on it, like your banking Electron app or your Steam game, will refuse to run... so as the user, you don't really have a choice.
I would love to use that technology to do reverse attestation, and require the server that handles my personal data to behave a certain way, like obeying the privacy policy terms of the EULA and not using my data to train LLMs if I so opted out. Something tells me that's not going to happen...
I’m skeptical about the push toward third-party hardware attestation for Linux kernels. Handing kernel trust to external companies feels like repeating mistakes we’ve already seen with iOS and Android, where security mechanisms slowly turned into control mechanisms.
Centralized trust
Hardware attestation run by third parties creates a single point of trust (and failure). If one vendor controls what’s “trusted,” Linux loses one of its core properties: decentralization. This is a fundamental shift in the threat model.
Misaligned incentives
These companies don’t just care about security. They have financial, legal, and political incentives. Over time, that usually means monetization, compliance pressure, and policy enforcement creeping into what started as a “security feature.”
Black boxes
Most attestation systems are opaque. Users can’t easily audit what’s being measured, what data is emitted, or how decisions are made. This runs counter to the open, inspectable nature of Linux security today.
Expanded attack surface
Adding external hardware, firmware, and vendor services increases complexity and creates new supply-chain and implementation risks. If the attestation authority is compromised, the blast radius is massive.
Loss of user control
Once attestation becomes required (or “strongly encouraged”), users lose the ability to fully control their own systems. Custom kernels, experimental builds, or unconventional setups risk being treated as “untrusted” by default.
Vendor lock-in
Proprietary attestation stacks make switching vendors difficult. If a company disappears, changes terms, or decides your setup is unsupported, you’re stuck. Fragmentation across vendors also becomes likely.
Privacy and tracking
Remote attestation often involves sending unique or semi-unique device signals to external services. Even if not intended for tracking, the capability is there—and history shows it eventually gets used.
Potential for abuse
Attestation enables blacklisting. Whether for business, legal, or political reasons, third parties gain the power to decide what software or hardware is acceptable. That’s a dangerous lever to hand over.
Harder incident response
If something goes wrong inside a proprietary attestation system, users and distro maintainers may have little visibility or ability to respond independently.
I can see usefulness if the flow was "the device is unlocked by default, there are no keys/certs on it, and it can be reset to that state (for re-use purpose)"
Then the user can put their own key there (if say corporate policies demand it), but there is no 3rd party that can decide what the device can do.
But having 3rd party (and US one too!) that is root of all trust is a massive problem.
The giveaway is that LLMs love bulleted lists with a bolded attention-grabbing phrase to start each line. Copy-pasting directly to HN has stripped the bold formatting and bullets from the list, so the attention-grabbing phrase is fused into the next sentence, e.g. “Potential for abuse Attestation enables blacklisting”
This seems like the kind of technology that could make the problem described in https://www.gnu.org/philosophy/can-you-trust.en.html a lot worse. Do you have any plans for making sure it doesn't get used for that?
I'm Aleksa, one of the founding engineers. We will share more about this in the coming months but this is not the direction nor intention of what we are working on. The models we have in mind for attestation are very much based on users having full control of their keys. This is not just a matter of user freedom, in practice being able to do this is far more preferable for enterprises with strict security controls.
I've been a FOSS guy my entire adult life, I wouldn't put my name to something that would enable the kinds of issues you describe.
Thanks for the clarification and to be clear, I don't doubt your personal intent or FOSS background. The concern isn't bad actors at the start, it's how projects evolve once they matter.
History is pretty consistent here:
WhatsApp: privacy-first, founders with principles, both left once monetization and policy pressure kicked in.
Google: 'Don’t be evil' didn’t disappear by accident — it became incompatible with scale, revenue, and government relationships.
Facebook/Meta: years of apologies and "we'll do better," yet incentives never changed.
Mobile OS attestation (iOS / Android): sold as security, later became enforcement and gatekeeping.
Ruby on Rails ecosystem: strong opinions, benevolent control, then repeated governance, security, and dependency chaos once it became critical infrastructure. Good intentions didn't prevent fragility, lock-in, or downstream breakage.
Common failure modes:
Enterprise customers demand guarantees - policy creeps in.
Liability enters the picture - defaults shift to "safe for the company."
Revenue depends on trust decisions - neutrality erodes.
Core maintainers lose leverage - architecture hardens around control.
Even if keys are user-controlled today, the key question is architectural:
Can this system resist those pressures long-term, or does it merely promise to?
Most systems that can become centralized eventually do, not because engineers change, but because incentives do. That’s why skepticism here isn't personal — it's based on pattern recognition.
I genuinely hope this breaks the cycle. History just suggests it's much harder than it looks.
Can you (or someone) please tell what’s the point, for a regular GNU/Linux user, of having this thing you folks are working on?
I can understand corporate use case - the person with access to the machine is not its owner, and corporation may want to ensure their property works the way they expect it to be. Not something I care about, personally.
But when it’s a person using their own property, I don’t quite get the practical value of attestation. It’s not a security mechanism anymore (protecting a person from themselves is an odd goal), and it has significant abuse potential. That happened to mobile, and the outcome was that users were “protected” from themselves, that is - in less politically correct words - denied effective control over their personal property, as larger entities exercised their power and gated access to what became de-facto commonplace commodities by forcing users to surrender any rights. Paired with the awareness gap, the effects were disastrous, and not just for personal compute.
The value is being able to easily and robustly verify that my device hasn't been compromised. Binding disk encryption keys to the TPM such that I don't need to enter a password but an adversary still can't get at the contents without a zero day.
Of course you can already do the above with secure boot coupled with a CPU that implements an fTPM. So I can't speak to the value of this project specifically, only build and boot integrity in general. For example I have no idea what they mean by the bullet "runtime integrity".
> For example I have no idea what they mean by the bullet "runtime integrity".
This is, for example, dm-verity (e.g. `/usr/` is an erofs partition with matching dm-verity). Lennart always talks about either having files be RW (backed by encryption) or RX (backed by kernel signature verification).
I don’t think attestation can provide such guarantees. To the best of my understanding, it won’t protect from any RCE, and it won’t protect from malicious updates to configuration files. It won’t let me run arbitrary binaries (putting a nail in the coffin of any local development), or if it will, it would be temporary security theater (as attackers would reuse the same processes to sign their malware). IDSes are sufficient for this purpose, without the negative side effects.
And that’s why I said “not a security mechanism”. Attestation is for protecting against actors with local hardware access. I have FDE and door locks for that already.
I think all of that comes down to being a matter of what precisely you're attesting? So I'm not actually clear what we're talking about here.
Given secure boot and a TPM you can remotely attest, using your own keys, that the system booted up to a known good state. What exactly that means though depends entirely on what you configured the image to contain.
> it won’t protect from malicious updates to configuration files
It will if you include the verified correct state of the relevant config file in a Merkle tree.
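A toy sketch of that (hash config fragments into a Merkle root, which then becomes the measured/attested value; the file paths are just examples):

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    nodes = [sha256(leaf) for leaf in leaves] or [sha256(b"")]
    while len(nodes) > 1:
        if len(nodes) % 2:  # duplicate the odd node out
            nodes.append(nodes[-1])
        nodes = [sha256(a + b) for a, b in zip(nodes[::2], nodes[1::2])]
    return nodes[0]

# Any tampering with a covered config file changes the root, and thus
# the attested measurement
configs = [open(p, "rb").read() for p in ("/etc/hostname", "/etc/hosts")]
print(merkle_root(configs).hex())
```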
> It won’t let me run arbitrary binaries (putting a nail to any local development), or if it will - it would be a temporary security theater (as attackers would reuse the same processes to sign their malware).
Shouldn't it permit running arbitrary binaries that you have signed? That places the root of trust with the build environment.
Now if you attempt to compile binaries and then sign them on the production system yeah that would open you up to attack (if we assume a process has been compromised at runtime). But wasn't that already the case? Ideally the production system should never be used to sign anything. (Some combination of SGX, TPM, and SEV might be an exception to that but I don't know enough to say.)
> Attestation is for protecting against actors with local hardware access. I have FDE and door locks for that already.
If you remotely boot a box sitting in a rack on the other side of the world how can you be sure it hasn't been compromised? However you go about confirming it, isn't that what attestation is?
Well, maybe we're talking about different things, because I've asked from a regular GNU/Linux user perspective. That is, I have my computers and I'm concerned I would lose my freedoms to use them as I wish, because this attestation would be adopted and become de-facto mandatory if I ever want to do something online. Just like what happened to mobile, and what's currently slowly happening to other desktop OSes.
Production servers are a whole different story - it's usually not my hardware to begin with. But given how things are mostly immutable those days (shipped as images rather than installed the old-fashioned sysadmin way), I'm not really sure what to think of it...
The "founding engineers" behind Facebook and Twitter probably didn't set out to destroy civil discourse and democracy, yet here we are.
Anyway, "full control over your keys" isn't the issue, it's the way that normalization of this kind of attestation will enable corporations and governments to infringe on traditional freedoms and privacy. People in an autocratic state "have full control over" their identity papers, too.
> I've been a FOSS guy my entire adult life, I wouldn't put my name to something that would enable the kinds of issues you describe.
Until you get acquired, receive a golden parachute and use it when realizing that the new direction does not align with your views anymore.
But, granted, if all you do is FOSS then you will anyway have a hard time keeping evil actors from using your tech for evil things. Might as well get some money out of it, if they actually dump money on you.
This whole discussion is a perfect example of what Upton Sinclair said, "It is difficult to get a man to understand something, when his salary depends on his not understanding it."
A rational and intelligent engineer cannot possibly believe that he'll be able to control what a technology is used for after he creates it, unless his salary depends on him not understanding it.
Insinuation? As a sw dev they don't have any agency over whether or by whom they get acquired. Their decision will be whether to leave if it's changing for the worse, and that's very much understandable (and arguably the ethical thing to do).
So far, that's a slick way to say not really. You are vague where it counts, and surely you have a better idea of the direction than you say.
Attestation of what, to whom, for which purpose? With which freedom does it allow users to control their keys, and how does that square with remote attestation and the wishes of enterprise users?
Thanks, this would be helpful. I will follow on by recommending that you always make it a point to note how user freedom will be preserved, without using obfuscating corpo-speak or assuming that users don’t know what they want, when planning or releasing products. If you can maintain this approach then you should be able to maintain a good working relationship with the community. If you fight the community you will burn a lot of goodwill and will have to spend resources on PR. And there is only so much that PR can do!
Better security is good in theory, as long as the user maintains control and the security is on the user end. The last thing we need is required ID linked attestation for accessing websites or something similar.
That’s great that you’ll let users have their own certificates and all, but the way this will be used is by corporations to lock us into approved Linux distributions. Linux will be effectively owned by Red Hat and Microsoft, the signing authorities.
It will be railroaded through in the same way that systemd was railroaded onto us.
> The models we have in mind for attestation are very much based on users having full control of their keys.
If user control of keys becomes the linchpin for retaining full control over one's own computer, doesn't it become easy for a lobby or government to exert control by banning user-controlled keys? Today, such interest groups would need to ban Linux altogether to achieve such a result.
That's the thing. I can only provide a piece of software with the guarantee it can run on my OS. It can trust my kernel to let it run, but shouldn't expect anything more. The editor is free to run code it wants to guarantee the integrity of on its own infrastructure; but whatever reaches my machine _may_ at best run as the editor intends.
Thanks for the reassurance, the first ray of sunshine in this otherwise rather alarming thread. Your words ring true.
It would be a lot more reassuring if we knew what the business model actually was, or indeed anything else at all about this. I remain somewhat confused as to the purpose of this announcement when no actual information seems to be forthcoming. The negative reactions seen here were quite predictable, given the sensitive topic and the little information we do have.
This is extremely bad logic. The technology of enforcing trusted software has no inherent value; it is good or ill depending entirely on expected usage. Anything that is substantially open will be used according to the values of its users, not according to your values, so we ought instead to consider their values, not yours.
Suppose you wanted to identify potential agitators by scanning all communication for indications. In a fascist state, one could require this technology in all trusted environments and require such an environment to bank, connect to an ISP, or use Netflix.
One could even imagine a completely benign usage which only identified actual wrongdoing, alongside another which profiled based almost entirely on anti-regime sentiment or reasonable discontent.
The good users would argue that the only problem with the technology is its misuse but without the underlying technology such misuse is impossible.
One can imagine two entirely different parallel universes, one in which a few great powers went the wrong way, enabled in part by trusted computing and the pervasive surveillance made possible by AI's capacity to do the massive and boring task of analyzing a glut of ordinary behaviour and communication, plus the tech and law to ensure said surveillance is carried out.
Even those not misusing the tech may find themselves worse off in such a world.
Why again should we trust this technology just because you are a good person?
TLDR: We already know how this will be misused: to take away not just people's freedom to run their own software stack, but their freedom to dissent against fascism. It's immoral to build even with the best intentions.
You're providing mechanism, not policy. It's amazing how many people think they can forestall policies they dislike by trying to reject mechanisms that enable them. It's never, ever worked. I'm glad there are going to be more mechanisms in the world.
Unfortunately the parent commenter is completely right.
The attestation portion of those systems is happening on locked down devices, and if you gain ownership of the devices they no longer attest themselves.
This is the curse of the duopoly of iOS and Android.
BankID in Sweden will only run with one of these devices, they used to offer a card system but getting one seems to be impossible these days. So you're really stuck with a mobile device as your primary means of identification for banking and such.
There's a reason that general purpose computers are locked to 720p on Netflix and Disney+; yet Apple TVs are not.
AFAIK BankID will actually run as long as you can install the Play Store (i.e. the device doesn't need Google certification), which isn't great but is a little better than it could have been.
That can't be right. My Onyx Boox Note Air 2 eInk tablet lets me install the Google Play Store by registering myself as an AOSP developer and enrolling my device's serial number or GSF identifier with Google, using some Google Form that some Android team somewhere has automated by now. The device has no hardware security features from what I can tell. There's no way this platform would pass muster with any bank.
At least BankId (digital ID thing in Sweden) and some of the Swedish banking apps don't care about if you are rooted on stock Android. I haven't tried custom ROMs in many years, but perhaps it is time for GrapheneOS these days.
Now, if you want to use your phone as a debit/credit card substitute that is different (Google Pay cares, and I don't use it thus).
Anyway, why should banking apps care? It is not like they care when I use the bank from Firefox on my Linux laptop.
> There's no way this platform would pass muster with any bank
"Any bank"? Although the bank I use locks NFC payments behind such checks (which is not a big loss since a physical debit card offers the same functionality), anything else still works otherwise. Most of the things are available through the website (which fits well on mobile too), and mobile BLIK payments can be done from the Android app which works inside Waydroid with microG.
There's no reason other banks can't work the same way, and it's outrageous when they don't. Look around for a better bank.
I have the successor device, the Boox Note Air 2, and don't remember how I installed Google Play on it, it was so easy as to be not even notable. Though almost everything I use is available on F-Droid other than my fancy calendar and contacts applications.
Banks don't use these things because they provide any real security. They use them because the platform company calls it a "security feature" and banks add "security features" to their checklists.
The way you defeat things like that is through political maneuvering and guile rather than submission to their artificial narrative. Publish your own papers and documentation that recommends apps not support any device with that feature or require it to be off because it allows malware to use the feature to evade malware scans, etc. Or point out that it prevents devices with known vulnerabilities from being updated to third party firmware with the patch because the OEM stopped issuing patches but the more secure third party firmware can't sign an attestation, i.e. the device that can do the attestation is vulnerable and the device that can't is patched.
The way you break the duopoly is by getting open platforms that refuse to support it to have enough market share that they can't ignore it. And you have to solve that problem before they would bother supporting your system even if you did implement the treachery. Meanwhile implementing it makes your network effect smaller because then it only applies to the devices and configurations authorized to support it instead of every device that would permissionlessly and independently support ordinary open protocols with published specifications and no gatekeepers.
Another point is that (often) the apps banks make are developed by third-party outsourcing (even if within the same developed country). If someone uses some MitM setup or logcat to see traffic and publishes it, the bank gets bad publicity. So to prevent this, the banks' devs declare anything that is not normal (i.e. any non-stock ROM) bad.
FOSS is also something many app-based software devs don't like on their products. While people in cloud and infra like it, app devs like these tools while developing or building a company, but not in the resulting end apps.
If the user's device isn't compromised then everything is fine regardless of whether or not it can pass attestation. If the user's device is compromised, the device doesn't need to pass attestation to run a fake bank app and steal the user's credentials. Once the attacker has the user's credentials they can use them to transfer money regardless of whether or not they have to use a different device that can pass attestation.
It doesn't really provide any security.
On top of that, there are tons of devices that can pass attestation that have known vulnerabilities, so the attacker could just use one of those (or extract the keys from it) if they had any reason to. But in the mobile banking threat model they don't actually need to.
Remote attestation absolutely provides increased security. Mobile banking fraud rates are substantially lower than desktop/browser banking fraud rates. Attestation is a major reason why.
I think every computing professional needs to spend at least a year trying to secure a random company's Windows network to appreciate how impossible this actually is without hardware-based roots of trust like TPMs and HSMs.
It's not even that. The main reason is probably that attackers are going to be writing code to automate their attacks, and desktops are easier to develop on than phones, so that's what they use with no reason to do otherwise.
Even if you stopped supporting desktops, then they would just reverse engineer the mobile app instead of the web app and extract the attestation keys from any unpatched model of phone and still run their code on a server, and then it would show up as "mobile fraud" because they're pretending to be a phone instead of a desktop, when in reality it was always a server rather than a phone or a desktop.
And even if attestation actually worked (which it doesn't), that still wouldn't prevent fraud, because it only tries to prove that the person requesting the transfer is using a commercial device. If the user's device is compromised then it doesn't matter if it can pass attestation because the attacker is only running the fake, credential stealing "bank app" on the user's device, not the real bank app. Then they can run the official bank app on an official device and use the stolen credentials to transfer the money. The attestation buys you nothing.
Well, it depends. I can currently do banking from my desktop computer because there is no way our banks can attest that we're running our browsers on their approved hardware+software stack. Of course they could already disable banking from the browser, but if they choose to keep it open yet require attestation in the browser once that becomes possible, I don't think that's a good thing.
It would, but how, and who would run it? Ideally someone like the Linux Foundation would sit in the White House or EU meetings. But they don't, and governments don't understand. I once participated in a youth meeting with MEPs - most of them have only iPhones. Most (not all) lawmakers live on a different planet.
Also, IIRC the Linux Foundation etc. are not interested in doing such standardization.
This is already the world you live in just running some recent Ubuntu. Try writing, building and loading a kernel module!
Of course it's all nonsense make-believe; the "trust root" is literally a Microsoft-signed stub. For this dummy implementation you can't modify your own kernel anymore.
Torrenting is becoming more popular again. The alternative to being allowed to pay to watch on an "insecure" device isn't switching to an attested device, it's to stop paying for the content at all. Games industry, same thing (or just play the good older games, the new ones suck anyway).
Finances, just pay everything by cheque or physical pennies. Fight back. Starve the tyrants to death where you can, force the tyrants to incur additional costs and inefficiencies where you can't.
As an end user who has hand-assembled desktop services on non-systemd distros (Artix, Devuan, Gentoo, Guix) over the years, and thus had no concern about APIs: PipeWire just works, while PulseAudio gave endless trouble.
As another user on Gentoo, pipewire is a never ending pain in the ass full of "magic" behavior and weird bugs. I mostly skipped pulse though so it may be simple in comparison to that.
It's baffling to me that anyone can imagine pipewire has been created from scratch without any lessons learned from pulseaudio and the previous issues the audio stack on linux had, and solved, over the years. Nothing is happening in a clean room bubble, every new project stands on the shoulders of giants...
There were dozens of other init systems that, like systemd, weren't shell scripts.
What set systemd apart is the collection of tightly integrated utilities - a DNS resolver, an SNTP client, a core dump handler, an RPC-like API linking to complex libraries in the hot path, and so on and so forth - that has been a constant stream of security exploits for over a decade now.
This is a case where the critics were proven to be right. Complexity increases the cognitive burden.
As predicted. I thought pulseaudio should have been enough of a lesson. Besides that, any person that works on open source but that joins Microsoft is not in the camp that should have a say in the overall direction of Linux.
That in itself is not a problem. The problem is that those work worse.
For example, the part of systemd that fills in DNS servers will put them in random order (like actually random, not "the code happened to dump it in map order").
The previous system, while very much NOT perfect, put the DNS servers in order of the latest interface first, which had the useful side-feature that if your VPN had a different set of DNS servers, they got added in front.
The systemd one just randomizes them ( https://github.com/systemd/systemd/issues/27543 ), which means the standard OpenVPN wrapper script sometimes needs to be rerun a few times to "roll" the right address - I pretty much have to rerun it until it works.
Like, they decided to use a binary log format. Okay, I can see advantages: it could be indexed or sharded for faster access to an app's logs...
oh wait, it isn't - if you want the last few lines of a service, the worst case is "mmap every single journal file for hundreds of MBs of reads".
It could be optimized so that long but constant fields like the boot ID aren't repeated...
oh wait, it doesn't do that either; it's massively verbose. I guess I could understand that if it at least made the format more crash-proof...
oh wait, no: after a crash it just spams the logs that the previous log file is corrupted and won't be used.
So we have a log format that only systemd tools can read, that takes a few times as much space per line as a text or even JSON version would, and that still craps out on unclean shutdown.
They could've just integrated SQLite. Hell, I literally made a lil prototype that took journalctl logs and wrote them to an indexed SQLite file, and it was not only faster but smaller (as there is no need to write the boot ID with each line, and log lines can be sharded or indexed so lookup is faster). But nah, Mr. Poettering always wanted to make a binary log format, so he did.
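Something like this is enough to reproduce the idea (a sketch, not the original prototype; the journald JSON field names are real, the schema is illustrative):

```python
import json
import sqlite3
import subprocess

db = sqlite3.connect("journal.db")
db.execute("""CREATE TABLE IF NOT EXISTS log(
    ts INTEGER, boot_id TEXT, unit TEXT, message TEXT)""")
db.execute("CREATE INDEX IF NOT EXISTS idx_unit_ts ON log(unit, ts)")

# journalctl emits line-delimited JSON; the underscore-prefixed fields
# are journald's own trusted metadata
with subprocess.Popen(["journalctl", "-o", "json", "--no-pager"],
                      stdout=subprocess.PIPE, text=True) as proc:
    for line in proc.stdout:
        entry = json.loads(line)
        msg = entry.get("MESSAGE")
        if not isinstance(msg, str):  # binary payloads arrive as arrays
            continue
        db.execute("INSERT INTO log VALUES (?,?,?,?)",
                   (int(entry.get("__REALTIME_TIMESTAMP", 0)),
                    entry.get("_BOOT_ID"),
                    entry.get("_SYSTEMD_UNIT"),
                    msg))
db.commit()

# "last few lines of a service" becomes a cheap index lookup:
#   SELECT message FROM log WHERE unit = ? ORDER BY ts DESC LIMIT 10
```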
The trick is the same: use a popular linux distribution and don't fight the kinks.
The people who had no issues with Pulseaudio; used a mainstream distribution. Those distributions did the heavy lifting of making sure stuff fit together in a cohesive way.
systemd is very opinionated, so you'd assume it wouldn't have the same results, but it does: if you use a popular distro, then they've done a lot of the hard work that makes systemd function smoothly.
I was today years old when I realised this is true for both bits of poetter-ware. Weird.
pulseaudio I had to fight every single day, with my "exotic" setup of one set of speakers and a headset
with pipewire, I've never had to even touch it
systemd: yesterday I had a network service on one machine not start up because the IP it was trying to bind to wasn't available yet
the dependencies for the .service file didn't/can't express the networking semantics correctly
this isn't some hacked up .service file I made, it's that from an extremely popular package from a very popular distro
(yeah I know, use a socket activated service......... more tight coupling to the garbage software)
the day before that I had a service fail to start because the wall clock was shifted by systemd-timesyncd during startup, and then the startup timeout fired because the clock advanced more than the timeout
then the week before that I had a load of stuff start before the time was synced, because chrony has some weird interactions with time-sync.target
it's literally a new random problem every other boot because of this non-deterministic startup, which was never a problem with traditional init or /etc/rc
for what? to save maybe a second of boot time
if the distro maintainers don't understand the systemd dependency model after a decade then it's unfit for purpose
> it's literally a new random problem every other boot because of this non-deterministic startup, which was never a problem with traditional init or /etc/rc
This gave me a good chuckle. Systemd literally was created to solve the awful race conditions and non-determinism in other init systems. And it has done a tremendous job at it. Hence the litany of options to ensure correct order and execution: https://www.freedesktop.org/software/systemd/man/latest/syst...
And outside of esoteric setups I haven't ever encountered the problems you mentioned with service files.
systemd was created to solve the problems of a directory full of shell scripts. A single shell script has completely different problems. And traditional init uses inittab, which is not /etc/init.d, and works more like runit.
runit's approach is to just keep trying to start the shell script every 2 seconds until it works. One of those worse-is-better ideas, it's really dumb, and effective. You can check for arbitrary conditions and error-exit, and it will keep trying. If you need the time synced you can just make your script fail if the time is not synced.
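The whole supervision idea fits in a few lines; here's a toy Python rendition of the loop (not runit itself, which is C):

```python
import subprocess
import time

# Keep restarting ./run forever; a failed precondition check (say, time
# not yet synced) just exits nonzero and we try again shortly after.
while True:
    subprocess.call(["./run"])
    time.sleep(2)
```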
traditional inittab is older than that and there's not any reason to use it when you could be using runit, really.
yeah, many options that are complicated beyond the understanding of the distro maintainers, and yet still don't allow expression of common semantics required to support network services reliably
like "at least one real IP address is available" or "time has been synced"
and it's not esoteric; even ListenAddress with sshd doesn't work reliably
the ONLY piece of systemd I've not had problems with is systemd-boot, and then it turned out they didn't write that
> like "at least one real IP address is available" or "time has been synced"
"network-online.target is a target that actively waits until the network is “up”, where the definition of “up” is defined by the network management software. Usually it indicates a configured, routable IP address of some kind. Its primary purpose is to actively delay activation of services until the network has been set up."
For time sync checks, I assume one of the targets available will effectively mean a time sync has happened. Or you can do something with ExecStartPre. You could run a shell command that checks for the most recent time sync or forces one.
Are you running this particular unit file as a user unit or a system unit? Some targets like network-online.target don't work from user unit files.
You could also try targeting NetworkManager or networkd's "wait-online" services. Or if that doesn't work, something is telling systemd that you have an IP when you don't. NetworkManager has "ipv4.may-fail" and "ipv6.may-fail" settings that might be erroneously true.
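If it helps, an untested drop-in along the lines suggested above would look like this (the service name is a placeholder, and network-online.target additionally needs the appropriate wait-online service enabled):

```ini
# /etc/systemd/system/myservice.service.d/10-wait.conf
[Unit]
Wants=network-online.target time-sync.target
After=network-online.target time-sync.target
```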
> at that point I might as well go back to init=/etc/rc
The difference is that systemd is much better at ensuring correctness. If you write the invoked shell command properly, it'll communicate failure or success correctly, and systemd will then communicate that state to the unit. It's still a lot more robust than before.
Seems like you have an axe to grind with systemd because it replaced your familiar (but extremely cruddy) init system and now you refuse to debug the problem because you prefer being able to blame systemd.
There is so much granularity and flexibility in what you can do it seems rather unlikely you cannot make it happen correctly. And if it is truly a bug... open an issue? They're rather responsive to it. And it isn't like the legacy init systems were bug free from inception (well, lord knows they were still chock full of bugs even when they were replaced).
Edit: sitting here with a grin
.. HN downvoting the advice of checking logs, debugging and opening an issue. I wish the companies y'all work at good luck.. they'll need it.
I'm not a systemd hater or anything, but I continue to read stuff from Poettering which to me is deeply disturbing given the programs he works on.
Saying it's not a bug that a service is launched despite a stated required prerequisite having failed... WTF?
Sure, I agree with him that most computers should probably boot despite NTP being unable to sync. But proposing that the solution to that is breaking Requires is just wild to me.
I'm not sure I understand why you think the solution proposed there is so bad.
The question in that issue is around the semantics of time-sync.target. Targets are synchronization points for the system and don't (afaik) generally make promises about the units that are ordered before them (in this case chrony-wait.service).
Does that answer your specific objection of "proposing that the solution to that is breaking Requires is just wild to me"? Basically, what is proposed in that issue is not breaking Requires=. The proposition is that the user add their own, specific Requires= as a drop-in configuration since that's not a generally-applicable default.
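Concretely, that would be a drop-in along these lines (the service being protected is hypothetical; chrony-wait.service is the unit from the issue):

    # /etc/systemd/system/myapp.service.d/require-timesync.conf
    [Unit]
    # hard dependency: if chrony-wait.service fails, myapp is not started
    Requires=chrony-wait.service
    After=chrony-wait.service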
No, that does not make sense, because it goes against the systemd documentation.
Targets[1]: Target units do not offer any additional functionality on top of the generic functionality provided by units. They merely group units, allowing a single target name to be used in Wants= and Requires= settings to establish a dependency on a set of units defined by the target, and in Before= and After= settings to establish ordering.
boot-complete.target[2]: Order units that shall only run when the boot process is considered successful after the target unit and pull in the target from it, also with Requires=.
Note use of "only run" with a reference to Requires=.
time-sync.target[3]: This target provides stricter clock accuracy guarantees than time-set.target (see above), but likely requires network communication and thus introduces unpredictable delays. Services that require clock accuracy and where network communication delays are acceptable should use this target.
Especially note the last sentence there.
The documentation clearly indicates that targets can fail, and that services that need the target to be successful should use Requires= to specify that.
If the above is not true, the Requires= and Targets documentation should be rewritten to explicitly say that targets might fulfill Requires= regardless of state. Also, the documentation for time-sync.target should explicitly remove the last sentence and instead state there is no functional difference between Requires=time-sync.target and Wants=time-sync.target, it is best-effort only.
That's entirely defined by whatever units order themselves before network-online.target (normally a network management daemon like NetworkManager or systemd-networkd). systemd itself doesn't define the details; that's left up to how that distro and sysadmin have configured the network manager/system.
Same. I run a server with a ton of services running on it which all have what I think are pretty complex dependency chains. And I also have used Linux with systemd on my laptop. Systemd has never, once, caused me issues.
I can totally relate to this, it's gotten to the point that I'm just as scared of rebooting my Linux boxes as I was of rebooting my windows machine a couple of decades ago. And quite probably more scared.
The only parties that really cared about boot time were the big hosting providers and container schleppers. For desktop linux it never mattered as much.
PipeWire was also made by a guy with a lot of multimedia experience (GStreamer).
ALSA was kind of OK after mixing was enabled by default, and if you didn't need to switch outputs of a running application between anything but internal speakers and headphones (which worked basically in hardware). With any additional devices that you could add and remove, ALSA became a more serious limitation, depending on the setup. You could usually choose your audio devices (including microphones) at least at the beginning of a video conference, playing a movie, etc., but it was janky (unreliable, a list of 20 devices for one multi-channel sound card) and needed explicit support from all applications. Not sure if it ever worked with Bluetooth.
I still need to use alsamixer to unmute my headphones after accidentally unplugging them and plugging them in again fails to do so. That's with PipeWire - never had that problem with just ALSA.
That's because systemd originated at RedHat. If it had been designed distribution agnostic it would have worked a lot better on other distros besides RH.
What are the non-distribution agnostic parts of systemd? Considering it runs as PID1 (usually) it kinda is the base of distros and not really built on top of any distro other than "the linux kernel".
"The trick is the same: use a popular linux distribution and don't fight the kinks."
I believe that you are genuinely being sincere here, thinking this is good advice.
But this is an absolutely terrible philosophy. This statement is ignorant as well as inconsiderate. (Again, I do believe you don't intend to be inconsiderate consciously; that is just the result.)
It's ignorant of history and inconsiderate of everyone else but yourself.
Go back a few years and this same logic says "The trick is, just use Windows and do whatever it wants and don't fight."
So why in the world are you even using Linux at all in the first place with that attitude? For dishonest reasons (when unpacked to show the double standard).
Since you are using Linux instead of Windows, you actually are fine with fighting the tide. You want the particular bits of control you want, and as long as you are lucky enough to get whatever you happen to care about without fighting too much, you have no sympathy for anyone else who cares about anything else.
You don't see yourself as fighting any tides because you are benefitting from being able to use a mainstream distro without customizing it. But the only reason you get to enjoy any such thing at all in the first place is because a lot of other people before you fought the tide to bring some mainstream distros into existence, and actually use them for ordinary activities enough despite all the difficulties, to force at least some companies and government agencies to acknowledge them. So now you can say things like "just use a mainstream distro as it comes and don't try to do what you actually want".
> Go back a few years and this same logic says "The trick is, just use Windows and do whatever it wants and don't fight."
This is basically exactly what I saw people saying in Windows subreddits. There's one post that particularly sticks out in my memory[0] that basically had everybody telling the OP to just not make any of the changes that they wanted to make. The advice seemed to revolve around adapting to the OS rather than adapting the OS to you, and it made me sad at the time.
> The people who had no issues with Pulseaudio; used a mainstream distribution. Those distributions did the heavy lifting of making sure stuff fit together in a cohesive way.
Incorrect. I used a mainstream distro, still had issues, and they only went away when I moved to PipeWire. Issues like it outright crashing, or emitting a burst of max-volume noise once every few months for no discernible reason.
PulseAudio also completely ignores the existence of people trying to make music on Linux; there is no real way to get good latency out of it.
> SystemD is very opinionated, so you'd assume it wouldn't have the same results, but it does.. if you use a popular distro then they've done a lot of the hard work that makes systemd function smooth.
Over the years, the "opinion" of SystemD seems to be "if it is not a problem on Lennart's laptop, it's not a real problem and it can be closed or ignored completely".
For example, systemd has no real method to tell it "turn off all apps after 5 minutes regardless of what silly package maintainers think". Now what happens if you have a server on a UPS that has, say, 5 minutes of battery, and one of the apps has some problem and doesn't want to close?
In SysV, it gets killed and the system gets remounted read-only. You face app crash recovery, but at least your filesystem is clean.
In systemd? No option to do that. You can set a default timeout, but it can be overridden in each service, so you'd have to audit every single package and tune it to achieve that. That was one bug that was closed.
The same problem also surfaced if you had, say, an app with a bug that prevented it from closing on SIGTERM and you wanted to reboot the machine. Completely stuck.
But wait, there is another method: systemd has an override, you can press (IIRC) ctrl+alt+delete 7 times within 2 seconds to force it to restart (which already confuses some people who expect it to just restart the machine clean(ish) regardless: https://github.com/systemd/systemd/issues/11285 ).
...which is also impossible if your only method of access is a software KVM where you need to navigate a menu to send ctrl+alt+del. So I made a ticket proposing to just make the CAD timeout configurable ( https://github.com/systemd/systemd/issues/29616 ), but the ticket wasn't even read completely; Mr. Poettering said "this is not actionable, give a proposal", so I pasted the thing he had decided to ignore in the original ticket, and got ignored. Not even a "pull requests welcome" (which I'd be fine with; I just wanted confirmation that a feature like that wouldn't be rejected if I started writing it).
There is also the issue of journald's disk format being an utter piece of garbage (having to go through the entire journal just to get an app's last few lines: bad; hundreds of disk reads on a simple systemctl status <appname>: bad) that is consistently ignored through many tickets from different people.
Or the issue that the resolvconf replacement in systemd will just roll the dice on DNS ordering, but hey, Mr. Lennart doesn't use openvpn so it's not a real issue ( https://github.com/systemd/systemd/issues/27543 )
I'm not writing this to shit on systemd and praise what came before; as a piece of software it's very useful for my job as a sysadmin (we literally took tens of thousands of lines of fixed init scripts out because all of the features could be achieved in unit files), and I mean it saved tons of time and spared more than a few daemons in some cases. But Mr. Poettering is showing the same ignorant "I know better" attitude he got scolded for by kernel maintainers.
> PulseAudio also completely ignores the existence of people trying to make music on Linux; there is no real way to get good latency out of it.
I don't care much about PA at this point tbh and don't know much about the inner workings; it always worked just fine for me. But from what I read from people more "in the know" at the time, I'd heard that a lot of the (very real) user-facing problems with PA were ultimately caused by driver and other low-level problems. Those were hacky, had poor assumptions, etc. PA ultimately exposed those failures, and largely got better over time because those problems got fixed upstream of PA.
My takeaway from what I read was basically that PA had to stumble and walk so that pipewire could run.
> For example, systemd has no real method to tell it "turn off all apps after 5 minutes regardless of what silly package maintainers think". Now what happens if you have a server on a UPS that has, say, 5 minutes of battery, and one of the apps has some problem and doesn't want to close?
Add a TimeoutStopSec= to /etc/systemd/system/service.d/my-killing-dropin.conf more or less, I think? These are documented in the systemd.service and systemd.unit manpages respectively.
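If I read the manpages right, a global cap would be a sketch like this (file name arbitrary); unit-specific drop-ins could still override it, but plain unit files can't:

    # /etc/systemd/system/service.d/50-stop-timeout.conf
    # top-level drop-in: applies to every .service unit on the system
    [Service]
    TimeoutStopSec=90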
> The same problem also surfaced if you had, say, an app with a bug that prevented it from closing on SIGTERM and you wanted to reboot the machine. Completely stuck.
See the --force option on the halt, poweroff, and reboot subcommands of systemctl. The kill subcommand if you want to target that specific service.
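Roughly (service name hypothetical):

    # SIGKILL a wedged unit directly
    systemctl kill --signal=SIGKILL stuck.service

    # reboot without waiting for units to stop; doubling the flag skips
    # even process termination and filesystem unmounting
    systemctl reboot --force
    systemctl reboot --force --force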
> so I pasted the thing he had decided to ignore in the original ticket, and got ignored. Not even a "pull requests welcome" (which I'd be fine with; I just wanted confirmation that a feature like that wouldn't be rejected if I started writing it).
I'm certainly sympathetic to this pain point. I'd take Lennart at his word that he's not opposed. Generally speaking, from following the systemd project somewhat, it's a very busy project and it's hard for all issues to get serviced. But they're very open to PRs, generally speaking.
> Or the issue that the resolvconf replacement in systemd will just roll the dice on DNS ordering, but hey, Mr. Lennart doesn't use openvpn so it's not a real issue ( https://github.com/systemd/systemd/issues/27543 )
Quickly taking a peek here (and speaking as a relatively superficial user of resolved myself), isn't the proposed solution to define interface ordering?
> it will ask on all links in parallel if there's no better routing info available. In your case there is none (i.e. no ~. listed among your network interfaces), hence it will be asked on all interfaces at the same time.
Poettering has a track record of recognizing good ideas from Apple, then implementing them poorly. He also has a track record of closing bug reports for plain and simple bugs in his software to protect his own ego, and this kind of mentality isn't a great basis for security-sensitive software.
Audio server for Linux: Great idea! PulseAudio: Genuinely a terrible implementation of it; PipeWire is a drop-in replacement that actually works.
Launchd but for Linux: Great idea! SystemD: generally works now at least, but packed with insane defaults, and every time this is brought up with the devs they say it's the distro packagers' job to wipe SystemD's ass and clean up the mess before users see it.
Security bug in SystemD when the user has a digit in their username: Lennart closes the bug and says that SystemD is perfect, the distros erred by permitting such usernames. Insane ego-driven response.
He really will just close a ticket because he disagrees with how Linux works. I read about systemd sysusers and thought they would be neat for running containerized services. But Poettering doesn't like the /etc/subuid files and refuses to work with them.
How do I have nsresourced work in a regular systemd service or quadlet so that I can have an ephemeral user run a container? I am trying to find information and am only seeing it as part of nspawn, which seems to require a container specifically built around a root filesystem.
I am not going to struggle with systemd if I have to build containers specifically for it. If I have to rearrange everything I am doing I would just learn to do it on a minimal Kubernetes install instead.
Entities other than me being able to control what runs on the device I physically possess is absolutely not acceptable in any way. Screw your clients, screw your shareholders and screw you.
Assuming you're using systemd, you already gave up control over your system. The road to hell was already paved. Now, you would have to go out of your way to retain control.
In the great scheme of things, this period where systemd was intentionally designed and developed and funded to hurt your autonomy but seemed temporarily innocuous will be a rounding error.
Nah man, you are FUDing. systemd might have some poor design choices and arrogant maintainers, but at least I can drop it at any time and my bank won't freak out about it. This one… it's a whole other level.
I don't think Mr. Poettering was brought in by accident; maybe his decade of contributions making sure systemd services can be manipulated by a supervisor (in the case of WSL and MS) is a valuable asset.
systemd doesn't even need to change much to become the devil itself; it just has to upstream-merge changes already consolidated in the past 5 years or so...
But logically it's safe, because for this to become a problem systemd would have to be adopted by the majority of distributions and its maintainers would have to concede to the pressure of big corps and such... oh, wait
My only experience with Linux secure boot so far.... I wasn't even aware that it was secure booted. And I needed to run something (I think it was the Displaylink driver) that needs to jam itself into the kernel. And the convoluted process to do it failed (it's packaged for Ubuntu but I was installing it on a slightly outdated Fedora system).
What, this part is only needed for secure boot? I'm not sec... oh. So go back to the UEFI settings, turn secure boot off, problem solved. I usually also turn off SELinux right after install.
So I'm an old greybeard who likes to have full control. Less secure. But at least I get the choice. Hopefully I continue to do so. The notion of not being able to access online banking services or other things that require account login, without running on a "fully attested" system does worry me.
Secure Boot only extends the chain of trust from your firmware down to the first UEFI binary it loads.
Currently SB is effectively useless because it will at best authenticate your kernel but the initrd and subsequent userspace (including programs that run as root) are unverified and can be replaced by malicious alternatives.
Secure Boot as it stands right now in the Linux world is effectively an annoyance that’s only there as a shortcut to get distros to boot on systems that trust Microsoft’s keys but otherwise offer no actual security.
It doesn't have to be this way, however, and I welcome efforts to make Linux just as secure as proprietary OSes that actually have full code signature verification all the way down to userspace.
here is some actual security: encrypted /boot, encrypted everything other than the boot loader (grub in this case)
sign grub with your own keys (some motherboards let you do so). don't let random things signed by microsoft boot (that defeats the whole point)
so you have grub in an efi partition; it passes secure boot, loads, and attempts to unlock a luks partition with the user-provided passphrase. if it passed secure boot, that should increase confidence that you are typing your password into the legit thing
so anyway, after unlocking luks, it locates the kernel and initrd inside it, and boots
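the signing step is roughly this, assuming your own db key is already enrolled (all paths are placeholders):

    # sign the grub EFI binary with your own secure boot key
    sbsign --key db.key --cert db.crt \
        --output /boot/efi/EFI/BOOT/BOOTX64.EFI grubx64.efi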
the reason I don't do it is.. my laptop is buggy. often when I enable secure boot, something periodically gets corrupted (often when the laptop powers off due to low power) and when it gets up, it doesn't verify anything. slightly insane tech
however, this is still better than, at failure, letting anything run
sophisticated attackers will defeat this, but they can also add a variety of attacks at hardware level
I'd much rather have tamper detection. Encryption is great should the device be stolen, but it feels like the wrong tool for defending against evil maids. All I'd want is that any time you open the case or touch the cold external ports (i.e. unbolted) you have to re-authenticate with a master password. I'm happy to use cabled peripherals to achieve this.
Chaining trust from POST to login feels like trying to make a theoretically perfect diamond and titanium bicycle that never wears down or falls apart when all I need is an automated system to tell me when to replace a part that’s about to fail.
Sorry, I wasn’t clear enough. We’re talking about three things here:
(1) Encryption: fast and fantastic, and a must-have for at-rest data protection.
It is vulnerable to password theft though. An attacker might insert evil code between power-on and disk-password-entry. With a locked down BIOS / UEFI, the only way to insert the code is to take the boot drive out of the device, modify it, put it back, and hope no one notices. “Noticing” in this case is done by either:
(2) Trust chaining: verify the signatures of the entire boot process to detect evil code.
(3) Tamper detection: verify the physical integrity of the device.
My point is that (1) is a given, and out of (2) or (3), I’d rather have the latter than deal with the shoddiness of the former
> the reason I don't do it is.. my laptop is buggy. often when I enable secure boot, something periodically gets corrupted (often when the laptop powers off due to low power) and when it gets up, it doesn't verify anything. slightly insane tech
Reminds me of my old Chromebook Pixel I wiped chromeos from. Every time it booted I had to press Ctrl-L (iirc) to continue the boot, any other keypress would reenable secure boot and the only way I knew to recover from that was to reinstall chromeos, which would wipe my linux partition and my files with it. Needless to say, that computer taught me good backup discipline...
Doing secure boot properly is kind of difficult. There are a bunch of TPM measurement registers for various bits and bobs (kernel, initramfs, cmdline, lots more). Using UKIs simplifies it a lot, but it’s not trivial to do right at the moment.
Same with remote attestation. Not all implementations are actually secure. But hopefully over time those security bugs can be ironed out and the cost of extracting a key made prohibitive.
There is the Integrity Measurement Architecture, but it isn't very mature in my opinion. Even secure boot and module signing are a manual setup by users; they aren't supported by default or by installers. You have to more or less manage your own certs and CA, although I did notice some laptops have Debian signing keys in UEFI by default? If only the Debian installer set up module signing.
But you miss a critical part: Secure Boot, as the name implies, is for boot, not OS runtime. Linux, I suppose, considers the part after initrd load post-boot, perhaps?
I think pid-1 hash verification from the kernel is not a huge ask, as part of secure boot, and leave it to the init system to implement or not implement user-space executable/script signature enforcement. I'm sure Mr. Poettering wouldn't mind.
It is not useless. I'm using a UKI, so the initrd is built into the kernel binary and signed. I'm not using a bootloader, so UEFI checks my kernel signature. My userspace is encrypted and the key is stored in the TPM, so the whole boot chain is verified.
Isn't the idea that the kernel will verify everything beneath it? Secure boot verifies the kernel, and then it's in the hands of the kernel to keep verifying or not.
Yes that's the case - my argument is that Linux currently doesn't have anything standardized to do that.
Your best bet for now is to use a read-only dm-verity-protected volume as the root partition, encode its hash in the initrd, combine kernel + initrd into a UKI and sign that.
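A rough sketch of that last step with systemd's ukify (every path and the root hash are placeholders):

    # build a UKI that pins the verity root hash, then sign it for Secure Boot
    ukify build \
        --linux=/boot/vmlinuz \
        --initrd=/boot/initrd.img \
        --cmdline="root=/dev/mapper/root roothash=<verity-root-hash>" \
        --secureboot-private-key=db.key \
        --secureboot-certificate=db.crt \
        --output=/boot/efi/EFI/Linux/myos.efi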
Standardizing that approach is one thing that the systemd project has been working on. They've built various components to help with that, including writing specifications (via the UAPI group) on how that should all fit together.
ParticleOS[0] gives a look at how this can all fit together, in case you want to see some of it in action.
Yes, you can. I really don't want to be in the business of building OSes. If these guys make it so that getting reasonable boot security is a simple toggle, I'd be grateful.
On arch it isn't particularly difficult to create UKIs other than changing like 2 lines in `mkinitcpio`'s config.
Then there is `ukify` from systemd, which can also create UKIs that can then be installed with `kernel-install`, but that is a bit more work to set up than `mkinitcpio`.
The main part is the signing, which I usually have `sbctl` handle.
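For reference, the sbctl flow is roughly this (UKI path assumed; enroll-keys expects the firmware to be in setup mode):

    sbctl create-keys                              # generate your own PK/KEK/db keys
    sbctl enroll-keys                              # enroll them into the firmware
    sbctl sign -s /efi/EFI/Linux/arch-linux.efi    # sign the UKI and remember it for re-signing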
> A basic setup to make use of secure boot is SB+TPM+LUKS. Unfortunately I don't know of any distro that offers this in a particularly robust way.
Have a look at Ubuntu Core 24 and later. It's not exactly a desktop system, though, but rather oriented towards embedded/appliances. Recent Ubuntu desktop (from 25.04 IIRC) started getting the same mechanism gradually integrated in each release. Upcoming Ubuntu 26.04 is expected to support TPM-backed FDE. Worth a try if you can set up a VM with a software TPM.
Keep in mind though, there's been plenty of issues with various EFI firmwares, especially on the appliances side. EFI specs are apparently treated as guidelines rather than actual specification by whoever ends up implementing the firmware.
Isn't it possible to force TPM measurements for stuff like the kernel command line or initramfs hash to match in order to decrypt the rootfs? Or make things simpler with UKIs?
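Something along these lines with systemd-cryptenroll, I assume (device path is a placeholder):

    # bind the LUKS key to TPM PCRs: 7 tracks Secure Boot state,
    # 11 tracks the UKI as measured by systemd-stub
    systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7+11 /dev/nvme0n1p2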
Most of the firmwares I've used lately seem to allow adding custom secureboot keys.
There is some level of misinformation in your post. Both Windows and Linux check driver signatures. Once you boot Linux with UEFI Secure Boot, you cannot use unsigned drivers, because the kernel can detect it and activate lockdown mode. You have to sign all of the drivers within the same PKI as your UEFI key.
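For an out-of-tree module, the signing step is roughly this (key and module names are placeholders):

    # sign a module with a key whose certificate the kernel trusts (e.g. an enrolled MOK)
    /usr/src/linux-headers-$(uname -r)/scripts/sign-file \
        sha256 MOK.priv MOK.der displaylink.ko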
> you cannot use unsigned drivers, because the kernel can detect it and activate lockdown mode
You don't need to load a driver; you can just replace a binary that's going to be executed as root as part of system boot. This is something a hypothetical code signature verification would detect and prevent.
Failing kernel-level code signature enforcement, the next best step is to have a dm-verity volume as your root partition, with the dm-verity hashes in the initrd within the UKI, and that UKI being signed with secure boot.
This would theoretically allow you to recover from even root-level compromise by just rebooting the machine (assuming the secure boot signing keys weren't on said machine itself).
It's interesting how quickly the OSS movement went from "No, no, we just want to include companies in the Free Software Movement" to "Oh, don't worry, it's ok if companies with shareholders that are not accountable to the community have a complete monopoly on OSS, and decide what direction it takes"
FOSS was imagined as a brotherhood of hackers, sharing code back and forth to build a utopian code commons that provided freedom to build anything. It stayed firmly in the realm of the imaginary because, in the real world, everybody wants somebody else to foot the bill or do the work. Corporations stepped up once they figured out how to profit off of FOSS and everyone else was content to free ride off of the output because it meant they didn't have to lift a finger. The people who actually do the work are naturally in the driver's seat.
This perspective is astonishingly historically ignorant, and ignores how "Open Source Software" was a deliberate political movement to simultaneously neuter the non-company-friendly goals of FOSS while simultaneously providing a competing (and politically distracting) movement that deliberately courted companies.
The Free Software movement was successful enough that by 1997 it was garnering a lot of international community support and manpower. Eric S. Raymond published CatB in response to these successes, partly with a goal of "celebrating its successes" — sendmail, gcc, perl, and Linux were all popular projects with a huge number of collaborators by this point — and partly with a goal of reframing the Free Software movement such that it effectively neuters the political basis (i.e. the four freedoms, etc.) in a company-friendly way. It's very easy to note when reading the book, how it consistently celebrates the successes of Free Software in a company friendly way, deliberately to make it appealing to companies. Often being very explicit about its goals, e.g. "Don't give your workers good bonuses, because research shows that the better a ''hacker'' the less they care about money!".
A year later, internal memos leaked from Microsoft showed that management were indeed scared shitless about Linux, a movement that they could neither completely Embrace, Extend, and Extinguish, nor practice Fear, Uncertainty, and Doubt on, because the community that built it was too strong, and too dedicated. Management foresaw that it was only a matter of time until Linux was a very strong competitor; even if that's taken 20 years, they were decently accurate in their fears, and, to be honest, part of why it's taken 30 years for Linux to catch up is deliberate actions by Microsoft wrt. introducing and adopting technologies that would stymie the Free Software movement's ability to adapt.
Remote attestation is another technology that is not inherently restrictive of software freedom. But here are some examples of technologies that have already restricted freedom due to oligopoly combined with network effects:
* smartphone device integrity checks (SafetyNet / Play Integrity / Apple DeviceCheck)
It very clearly is restrictive of software freedom. I've never suffered from an evil maid breaking into my house to access my computer, but I've _very_ frequently suffered from corporations trying to prevent me from doing what I wish with my own things. We need to push back on this notion that this sort of thing was _ever_ for the end-user's benefit, because it's not.
The better question then is why the actual f** can an OTA firmware update touch anything in the steering or powertrain of the car, or why do I even need a computer that's connected to anything, and one which does more than just make sure I get the right amount of fuel and spark, or why on earth do people tolerate this sort of insanity.
If a malicious update can be pushed because of some failure in the signature verification checks (which already exist), what makes you think the threat actor won’t have access to signing keys?
This is not what attestation is even seeking to solve.
It's interesting there's no remote attestation the other way around, making sure the server is not doing something to your data that you didn't approve of.
The authors clearly don't intend this to happen, but that doesn't matter. Someone else will do it. Maybe this can be stopped with licensing, as we tried to stop the SaaS loophole with the AGPL?
I am quite conflicted here. On one hand I understand the need for it (offsite colo servers is the best example). Basic level of evil maid resistance is also a nice to have on personal machines. On the other hand we have all the things you listed.
I personally don't think this product matters all that much for now. This type of tech is not oppressive by itself, only when it is demanded by an adversary. The ability of the adversary to demand it is a function of how widespread the capability is, and there aren't going to be enough Linux clients for this to start infringing on the rights of the general public just yet.
A bigger concern is all the efforts aimed at imposing integrity checks on platforms like the Web. That will eventually force users to make a choice between being denied essential services and accepting these demands.
I also think AI would substantially curtail the effect of many of these anti-user efforts. For example, a bot can be programmed to automate the use of a secure phone while being controlled from a user-controlled device, to cheat in games, etc.
> On one hand I understand the need for it (offsite colo servers is the best example).
Great example of proving something to your own organization. Mullvad is probably the most trusted VPN provider and they do this! But this is not a power that should be exposed to regular applications, or we end up with a dystopian future of you are not allowed to use your own computer.
Secure Boot allows you to enroll your own keys. This is part of the spec, and there are no shipped firmwares that prevent you from going through this process.
Android lets you put your own signed keys in on certain phones. For now.
The banking apps still won't trust them, though.
To add a quote from Lennart himself:
"The OS configuration and state (i.e. /etc/ and /var/) must be encrypted, and authenticated before they are used. The encryption key should be bound to the TPM device; i.e system data should be locked to a security concept belonging to the system, not the user."
Your system will not belong to you anymore. Just as it is with Android.
Banks do this because they have made their own requirement that the mobile device is a trust root that can authenticate the user. There are better, limited-purpose devices that can do this, but they are not popular/ubiquitous like smartphones, so here we are.
The oppressive part of this scheme is that Google's integrity check only passes for _their_ keys, which form a chain of trust through the TEE/TPM, through the bootloader and finally through the system image. Crucially, the only part banks should care about should just be the TEE and some secure storage, but Google provides an easy attestation scheme only for the entire hardware/software environment and not just the secure hardware bit that already lives in your phone and can't be phished.
It would be freaking cool if someone could turn your TPM into a Yubikey and have it be useful for you and your bank without having to verify the entire system firmware, bootloader and operating system.
> This is part of the spec, and there are no shipped firmwares that prevent you from going through this process.
Microsoft required that users be able to enroll their own keys on x86. On ARM, they used to mandate that users could not enroll their own keys. That they later changed this does not erase the past. Also, I've anecdotally heard claims of buggy implementations that do in fact prevent users from changing secure boot settings.
Note that the comment you replied to does not even mention phones. Locked down Secure Boot on UEFI is not uncommon on mobile platforms, such as x86-64 tablets.
I wish the myth of the spec would die at this point.
Many motherboards' secure boot implementations violate the supposed standard and do not allow you to invalidate the pre-loaded keys you don't approve of.
Remote attestation requires a great deal of trust... I know this comment is likely to be down-voted, but I can't think of a Lennart Poettering project that didn't try to extend, centralize, and conglomerate Linux, with disastrous results in the short term and less innovation, flexibility, and functionality in the long term. Trading the strength of Unix systems for the goal of making them more "Microsoft"-like.
Remote attestation requires a great deal of trust, and I simply don't have it when it comes to this leadership team.
systemd solved/improved a bunch of things for linux, but now the plan seems to be to replace package management with image-based whole-distro A/B swaps, and to have signed unified kernel images.
this will basically remove or significantly encumber user control over their system, such that any modification will make you lose your "signed" status and ... boom! goodbye accessing the internet without an id
poettering now works for Microsoft; they want to turn linux into an appliance just like windows, no longer a general purpose os. the transition is still far from over on windows, but look at android and how the google play services dependency/choke-hold is
im sure ill get many downvotes, but despite some hyperbole this is the trajectory
> the plan seems to be to replace package management with image-based whole-distro A/B swaps
The plan is probably to have that as an alternative for the niche uses where that is appropriate.
The majority of this thread seems to have slid down that slippery slope, jumping directly to the conclusion that the attestation mechanism will be mandatory on all Linux machines in the world and you won't be able to run anything without it. Even if that were Amutable's purpose as a company, it's unfeasible when there's such a breadth of distributions and non-corporate-affiliated developers out there who would need to cooperate for that to happen.
Nobody says that you will not have alternatives. What people are saying, is that if you're using those alternatives you won't be able to watch videos online, or access your bank account.
That's so far down the slippery slope and with so many other things that need to go wrong that I'm not worried and I'm willing to be the one to get "told you so" if it happens.
Linux is nowadays mostly sponsored by big corporations. They have different goals and different ways to do things. For probably the first 10 years Linux was driven by enthusiasts, and therefore it was a lean system. Something like systemd is typical corporate output. Due to its complexity it would have died long before finding adoption, but with enterprise money this is possible. Try to develop for the combo Linux Bluetooth/Audio/dbus: the complexity drives you crazy, because all this stuff was made for (and financed by) the corporate needs of the automotive industry. Simplicity is never a goal in these big companies.
But then Linux wouldn't be where it is without the business side paying for the developers. There is no such thing as a free lunch...
> this will basically remove or significantly encumber user control over their system, such that any modification will make you lose your "signed" status and ... boom! goodbye accessing the internet without an id
Yeah. I'm pretty sure it requires a very specific psychological profile to decide to work on such a user-hostile project while post-fact rationalizing that it's "for good".
All I can say is I'm not surprised that Poettering is involved in such a user-hostile attack on free computing.
P.S: I don't care about the downvotes, you shouldn't either.
Does this guy do anything that is user-friendly and in line with the open source ethos of freedom and user control? In all this shit-show of Microsoft shoving AI down the throats of its users, I was happy to be firmly in the Linux camp for many, many years. And along come these kinds of people to shit on that parade too.
P.S: Upvoted you. I don't care about downvotes either.
It sounds like you want to achieve system transparency, but I don't see any clear mention of reproducible builds or transparency logs anywhere.
I have followed systemd's efforts into Secure Boot and TPM use with great interest. It has become increasingly clear that you are heading in a very similar direction to these projects:
- Hal Finney's transparent server
- Keylime
- System Transparency
- Project Oak
- Apple Private Cloud Compute
- Moxie's Confer.to
I still remember Jason introducing me to Lennart at FOSDEM in 2020, and we had a short conversation about System Transparency.
Edit: Here we are six years later, and I'm pretty sure we'll eventually replace a lot of things we built with things that the systemd community has now built. On a related note, I think you should consider using Sigsum as your transparency log. :)
Edit2: For anyone interested, here's a recent lightning talk I did that explains the concept that all project above are striving towards, and likely Amutable as well: https://www.youtube.com/watch?v=Lo0gxBWwwQE
Our entire team will be at FOSDEM, and we'd be thrilled to meet more of the Mullvad team. Protecting systems like yours is core to us. We want to understand how we put the right roots of trust and observability into your hands.
Edit: I've reached out privately by email for next steps, as you requested.
Hi David. Great! I actually wasn't planning on going due to other things, but this is worth re-arranging my schedule a bit. See you later this week. Please email me your contact details.
As I mentioned above, we've followed systemd's development in recent years with great interest, as well as that of some other projects. When I started(*) the System Transparency project it was very much a research project.
Today, almost seven years later, I think there's a great opportunity for us to reduce our maintenance burden by re-architecting on top of systemd, and some other things. That way we can focus on other things. There's still a lot of work to do on standardizing transparency building blocks, the witness ecosystem(**), and building an authentication mechanism for system transparency that weaves it all together.
I'm more than happy to share my notes with you. Best case you build exactly what we want. Then we don't have to do it. :)
I'm super far from an expert on this, but it NEEDS reproducible builds, right? You need to start from a known good, trusted state - otherwise you cannot trust any new system states. You also need it for updates.
Well, it comes down to what trust assumptions you're OK with. Reproducible builds reduce the trust you must place in the build environment, but you still need to ensure authenticity of the source somehow. Verified boot, measured boot, repro builds, local/remote attestation, and transparency logging provide different things. Combined, they form the possibility of a sort of authentication mechanism between a server and client. However, all of the concepts are useful by themselves.
Why on earth would somebody make a company with one of the most reviled programmers on earth? Everyone knows that everything he touches turns to shit.
What is the endgame here? Obviously "heightened security" in some kind of sense, but to what end and what mechanisms? What is the scope of the work? Is this work meant to secure forges and upstream development processes via more rigid identity verification, or package manager and userspace-level runtime restrictions like code signing? Will there be a push to integrate this work into distributions, organizations, or the kernel itself? Is hardware within the scope of this work, and to what degree?
The website itself is rather vague in its stated goals and mechanisms.
Personally, this is interesting to me because there needs to be a way for a hardware token providing an identity to interact with a device-and-software combination that ensures no tampering between the user who owns the identity and the end result of the computation.
A concrete example of that is electronic ballots, which is a topic I often bump heads with the rest of HN about, where a hardware identity token (an electronic ID provided by the state) can be used to participate in official ballots, while both the citizen and the state can have some assurance that there was nothing interceding between them in a malicious way.
I suspect the endgame is confidential computing for distributed systems. If you are running high value workloads like LLMs in untrusted environments you need to verify integrity. Right now guaranteeing that the compute context hasn't been tampered with is still very hard to orchestrate.
No, the endgame is that a small handful of entities or a consortium will effectively "own" Linux because they'll be the only "trusted" systems. Welcome to locked-down "Linux".
You'll be free to run your own Linux, but don't expect it to work outside of niche uses.
Ah, good old remote attestation. Always works out brilliantly.
I have this fond memory of that Notary in Germany who did a remote attestation of me being with him in the same room, voting on a shareholder resolution.
While I was actually traveling on the other side of the planet.
This great concept that totally will not blow up the planet has been proudly brought to you by Ze Germans.
No matter what your intentions are: It WILL be abused and it WILL blow up. Stop this and do something useful.
[While systemd had been a nightmare for years, these days its actually pretty good, especially if you disable the "oh, and it can ALSO create perfect eggs benedict and make you a virgin again while booting up the system!" part of it. So, no bad feelings here. Also, I am German. Also: Insert list of history books here.]
I really hope this would be geared towards clients being able to verify the server state or just general server related usecases, instead of trying to replicate SafetyNet-style corporate dystopia on the desktop.
"The OS configuration and state (i.e. /etc/ and /var/) must be encrypted, and authenticated before they are used. The encryption key should be bound to the TPM device; i.e system data should be locked to a security concept belonging to the system, not the user."
See Android; or, where you no longer own your device, and if the company decides, you no longer own your data or access to it.
I mentioned it somewhere else in the thread, and btw, I'm not affiliated with the company, this is just my charitable interpretation of their intentions: this is not for requiring _every_ consumer linux device to have attestation, but for specific devices that are needed for niche purposes to have a method to use an open OS stack while being capable of attestation.
And if Linux$oft suddenly decides every user's system needs a backdoor, or that every system must automatically phone home with your entire browsing data, then, well, too bad, so sad, of course!
Probably obvious from the surnames, but this is the first time I've seen an EU company pop up on Hacker News that could be mistaken for a Californian company. Nice to see that ambition.
I understand systemd is controversial, that can be debated endlessly but the executive team and engineering team look very competitive. Will be interesting to see where this goes.
Lennart will be involved with at least three events at FOSDEM on the coming weekend. The talks seem unrelated at first glance but maybe there will be an opportunity to learn more about his new endeavor.
Your computer will come with a signed operating system. If you modify the operating system, your computer will not boot. If you try to install a different operating system, your computer will not boot.
What might you call a sort of Dunbar's number that counts not social links, but rather the number of things to which a person must actively refuse consent?
All vague hand waving at this point and not much to talk about. We'll have to wait and see what they deliver, how it works and the business model to judge how useful it will be.
I see the use case for servers targeted by malicious actors. A penetration test on a hardened system with secure boot and binary verification would be much harder.
For individuals, IMO the risk mostly come from software they want to run (install script or supply chain attack). So if the end user is in control of what gets signed, I don't see much benefit. Unless you force users to use an app store...
Are you guys hiring? I can emulate a grim smile and have no problem being diabolical if the pay is decent so maybe I am a good fit?
I can also pet goats
Immutability means you can't touch or change some parts of the system without great effort (e.g. macOS SIP).
Atomicity means you can track every change, and every change is so small that it affects only one thing and can be traced, replayed or rolled back. Like it's going from A to B and being able to return back to A (or going to B again) in a determinate manner.
Remote attestation only works because your CPU's secure enclave has a private key burned-in (fused) into it at the factory. It is then provisioned with a digital certificate for its public key by the manufacturer.
The immediate concern on seeing this: will the maintainers of systemd use their position to push this on everyone through systemd, like every other extended feature of it?
Whatever it is, I hope it doesn't go the usual path of a minimal support, optional support and then being virtually mandatory by means of tight coupling with other subsystems.
Daan here, founding engineer and systemd maintainer.
So we try to make every new feature that might be disruptive optional in systemd and opt-in. Of course we don't always succeed and there will always be differences in opinion.
Also, we're a team of people that started in open source and have done open source for most of our careers. We definitely don't intend to change that at all. Keeping systemd a healthy project will certainly always stay important for me.
Thanks for the answer. Let me ask you something close with a more blunt angle:
Considering most of the tech is already present and shipping in current systemd, what prevents our systems from becoming an immutable monolith like macOS or current Android at the flick of a switch?
Or a more grave scenario: What prevents Microsoft from mandating removal of enrollment permissions for user keychains and Secure Boot toggle, hence every Linux distribution has to go through Microsoft's blessing to be bootable?
So adding all of this technology will certainly make it easier to use for either good or bad. And it will certainly become possible to build an OS that is less hackable than your run-of-the-mill Linux distro.
But we will never enforce using any of these features in systemd itself. It will always be up to the distro to enable and configure the system to become an immutable monolith. And I certainly don't think distributions like Fedora or Debian will ever go in that direction.
We don't really have any control over what Microsoft decides to do with Secure Boot. If they decide at one point to make Secure Boot reject any Linux distribution and hardware vendors prevent enrolling user owned keys, we're in just as much trouble as everyone else running Linux will be.
I doubt that will actually happen in practice though.
I would be _shocked_ if, conditional on your project being successful, this _wasn't_ commonly used to lock down computing abilities commonly taken for granted today. And I think you know this.
> What prevents Microsoft from mandating removal of enrollment permissions for user keychains and Secure Boot toggle, hence every Linux distribution has to go through Microsoft's blessing to be bootable?
Why are you buying hardware that Microsoft controls if you're concerned about this?
> What prevents Microsoft from mandating removal of enrollment permissions for user keychains and Secure Boot toggle
Theoretically, nothing. But it's worth pointing out that so far they have actually done the opposite. They currently mandate that hardware vendors must allow you to enroll your own keys. There was a somewhat questionable move recently where they introduced a 'more secure by default' branding in which the 3rd-party CA (used e.g. to sign shim for Linux) is disabled by default, but again, they mandated there must be an easy toggle to enable it. I don't begrudge them too much for it, because there have been multiple instances of SB bypass via 3rd-party signed binaries.
All of this is to say: this is not a scenario I'm worried about today. Of course this may change down the line.
Also, I know. A few of my colleagues have run {open, free, dragonfly}BSD as their daily drivers for more than two decades. Also, we have BSD-based systems in a couple of places.
However, as a user of almost all mainstream OSes (at the same time, for different reasons), and planning to add OpenBSD to that roster (taking care of a fleet takes time), I'd love everyone to select the correct tool for their applications and not throw stones at people who don't act like them.
Please remember that we all sit in houses made of glass before throwing things at others.
Oh, also, please don't make assumptions about people you don't know.
You could describe Richard Stallman as someone who refuses to use proprietary software because he sees using it as becoming complicit--however indirectly--in a technology ecosystem that violates the values he’s committed to.
"Just don't use X" is in fact a very engaged and principled response. Try again.
If you were not a systemd maintainer and had started this project/company independently, targeting systemd, you would have to go through the same process as everyone else, and I would have expected the systemd maintainers to look at it objectively and review it with healthy skepticism before accepting it. But we cannot rely on those basic checks and balances anymore, and that's the most worrying part.
> that might be disruptive optional in systemd
> we don't always succeed and there will always be differences in opinion.
You (including the other maintainers) are still the final arbiters of what's disruptive. The differences of opinion in the past have mostly been settled as "deal with it", and that's the basis of the current skepticism.
Systemd upstream has reviewers and maintainers from a bunch of different companies, and some independent: Red Hat, Meta, Microsoft, etc. This isn't changing, we'll continue to work through consensus of maintainers regardless of which company we work at.
Drive encryption is only really securing your data at rest, not while the system is running. Ideally image based systems also use the kernels runtime integrity checking (e.g. dm-verity) to ensure that things are as they are expected to be.
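A minimal sketch of the dm-verity side (device names are placeholders; format prints the root hash you then have to pin somewhere trusted):

    # at image build time: create the hash tree over the read-only volume
    veritysetup format /dev/vg/root /dev/vg/root-hashes

    # at boot: open it; the kernel then verifies every block it reads
    veritysetup open /dev/vg/root verified-root /dev/vg/root-hashes <root-hash>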
“ensure that things are as they are expected to be” according to whom, and for whose benefit? Certainly not the person sitting in front of the computer.
The system owner. Usually that is the same entity that owns the secure boot keys, which can be the person that bought a device or another person if the buyer decides to delegate that responsibility (whether knowingly or unknowingly).
In my case I am talking about myself. I prefer to actually know what is running on my systems and ensure that they are as I expect them to be and not that they may have been modified unbeknownst to me.
I don't think this is right. Usually, the entity that owns secure boot keys is a large tech corporation which paid to install their keys on all new computers.
Unless that malware is able to activate the secure boot feature on a system where it is not enabled, in which case it permanently prevents me from removing the malware.
I think https://0pointer.net/blog/authenticated-boot-and-disk-encryp... is a much better explanation of the motivation behind this straight from the horse's mouth. It does a really good job of motivating the need for this in a way that explains why you as the end user would desire such features.
I just want more trustworthy systems. This particular concept of combining reproducible builds, remote attestation and transparency logs is something I came up with in 2018. My colleagues and I started working on it, took a detour into hardware (tillitis.se) and kind of got stuck on the transparency part (sigsum.org, transparency.dev, witness-network.org).
Then we discovered snapshot.debian.org wasn't feeling well, so that was another (important) detour.
Part of me wishes we had focused more on getting System Transparency in its entirety into production at Mullvad. On the other hand, I certainly don't regret us creating the Tillitis TKey, Sigsum, taking care of the Debian Snapshot service, and several other things.
Now, six years later, systemd and other projects have gotten a long way to building several of the things we need for ST. It doesn't make sense to do double work, so I want to seize the moment and make sure we coordinate.
I've always wondered how this works in practice for "real time" use cases. We've seen with secure boot + TPM that we can attest that the boot was genuine at some point in the past, but what about modifications that happen after that?
Frankly this disgusts me. While there are technically user-empowering ways this can be used, by far the most prevalent use will be to lock users/customers out of true ownership of their own devices.
Device attestation fails? No streaming video or audio for you (you obvious pirate!).
Device attestation fails? No online gaming for you (you obvious cheater!).
Device attestation fails? No banking for you (you obvious fraudster!).
Device attestation fails? No internet access for you (you obvious dissident!).
Sure, there are some good uses of this, and those good uses will happen, but this sort of tech will be overwhelmingly used for bad.
As per the announcement, we’ll be building this over the next months and sharing more information as this rolls out. Much of the fundamentals can be extracted from Lennart’s posts and the talks from All Systems Go! over the last years.
So much negativity in this thread. I actually think this could be useful, because tamper-proof computer systems are useful to prevent evil maid attacks. Especially in the age of Pegasus and other spyware, we should also take physical attack vectors into account.
I can relate to people being rather hostile to the idea of boot verification, because this is a process that is really low-level and also something that we as computer experts rarely interact with more deeply. The most challenging part of installing a Linux system is always installing the boot loader, potentially setting up a UEFI partition. These are things that I don't do every day and that I don't have deep knowledge of. And if things go wrong, it is extra hard to fix them. Secure boot makes it even harder to understand what is going on. There is a general lack of knowledge of what is happening behind the scenes, and it is really hard to learn about it. I feel that the people behind this project should really keep XKCD 2501 in mind when talking to their fellow computer experts.
this is very interesting... been watching the work around bootc coupling with composefs + dm_verity + signed UKI, I'm wondering if this will build upon that.
I can see like 100 ways this can make computing worse for 99% of people, and like 1-2 scenarios where it might actually be useful.
Like if the politicians pushing for chat control / on-device scanning of data come knocking again and it actually goes through (they can try infinitely many times), tech like this will really be "useful". Oops, your device cannot produce a valid attestation; no internet for you.
I thought it was how to plug the user freedom hole. Profits are leaking because users can leave the slop ecosystem and install something that respects their freedom. It's been solved on mobile devices and it needs to be solved for desktops.
It won't matter if you disable it. You simply won't be able to use your PC with any commercial services, in the same way that a rooted Android installation can't run banking apps without workarounds, and what they're working on here aims to make those workarounds impossible.
Hmph, AFAIK systemd has been struggling with TPM stuff for a while (much longer than I anticipated). It’s kinda understandable that the founder of systemd is joining this attestation business, because attestation ultimately requires far more than a stable OS platform plus an attestation module.
A reliably attestable system has to nail the entire boot chain: BIOS/firmware, bootloader, kernel/initramfs pairs, the `init` process, and the system configuration. Flip a single bit anywhere along that chain, and your equipment is now a brick.
Getting all of this right requires deep system knowledge, plus a lot of hair-pulling adjustment, assuming you still have hair left.
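To get a feel for what "nailing the chain" means, here's roughly how you'd peek at what has been measured so far (tpm2-tools; the PCR selection and meanings below follow common conventions, not a guarantee for your platform):

```
# Dump the PCR banks the firmware and boot chain have extended so far.
# By convention: PCR 0 = firmware, 4 = boot loader, 7 = Secure Boot
# state, 11 = UKI measurements; exact assignments vary by platform.
tpm2_pcrread sha256:0,4,7,11

# Recent systemd can show which PCRs its components know about/measure:
systemd-analyze pcrs
```

Change any measured component and the corresponding PCR value changes, which is exactly why a single flipped bit can brick a policy that seals secrets against those values.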
I think this part of Linux has been underrated. The TPM is a powerful platform that is universally available, and Linux is the perfect OS to fully utilize it. The need for trust in the digital realm will only increase. Who knows, it may even integrate with cryptocurrency or even social platforms. I wish them good luck.
Every day the world is becoming more polarized. Technology corporations gain ever more control over people's lives, telling people what they can do on their computers and phones, what they can talk about on social platforms, censoring what they please, wielding the threat of being cut off from their data and their social circles on a whim. All over the world, in dictatorships and also in democratic countries, governments turn more fascist and more violent. They demonstrate that they can use technology to oppress their population, to hunt dissent and to efficiently spread propaganda.
In that world, authoring technology that enables this even more is either completely mad or evil. To me Linux is not only a technological object, it is also a political statement. It is about choice, personal freedom, acceptance of risk. If you build software that actively intends to take this away from me, to put it into the hands of economic interests and political actors, then you deserve all the hate you can get.
> To me Linux is not only a technological object, it is also a political statement. It is about choice, personal freedom ...
I've used Linux since the Slackware days. Poettering is the worst thing that happened to the Linux ecosystem and, of course, he went on to work for Microsoft. Just to add a huge insult to the already painful injury.
This is not about security for the users. It's about control.
At least many in this thread are criticizing the project.
And, once again of course, it's from a private company.
Full of ex-Microsofties.
I don't know why anyone interested in hacking would cheer for this. But then maybe HN should be renamed "CN" (Corporate News) or "MN" (Microsoft News).
> Poettering is the worst thing that happened to the Linux ecosystem and, of course, he went on to work for Microsoft. Just to add a huge insult to the already painful injury.
agreed, and now he's planning on controlling what remains of your machine cryptographically!
I'm sure this company is more focused on the enterprise angle, but I wonder if the buildout of support for remote attestation could eventually resolve the Linux gaming vs. anti-cheat stalemate. At least for those willing to use a "blessed" kernel provided by Valve or whoever.
I might be behind on the latest counter-counter-counter-measures, but I know some of the leading AC solutions are already using the IOMMU to wedge a firewall between passive DMA sniffers and the game process's memory.
Just an assumption here, but the project appears to be about the methodology to verify the install. Who holds the keys is an entirely different matter.
You. The money quote about the current state of Linux security:
> In fact, right now, your data is probably more secure if stored on current ChromeOS, Android, Windows or MacOS devices, than it is on typical Linux distributions.
Say what you want about systemd the project, but they're the only ones moving foundational Linux security forward; no one else even has the ambition to try. The hardening tools they've brought to Linux are so far ahead of everything else it's not even funny.
This is basically propaganda for the war on general purpose computing. My user data is less safe on a Windows device, because Microsoft has full access to that device and they are extremely untrustworthy. On my Linux device, I choose the software to install.
What are you talking about? This has nothing to do with general purpose computing and everything to do with allowing you to authenticate the parts of the Linux boot process that must by necessity be left unencrypted in order to actually boot your computer. This is putting SecureBoot and the TPM to work for your benefit.
It's not propaganda in any sense; it's recognizing that Linux is behind the state of the art compared to Windows/macOS when it comes to preventing tampering with your OS install. It's not saying you should use Windows, it's saying we should improve the Linux boot process to be as tight, security-wise, as the Windows boot process, along with a long explanation of how we get there.
Secure boot is initialized by the first person who physically touches the computer and wants to initialize it. Guess who that is? Hint: it's not the final owner.
It's only secure from evil maker attacks if it can be wiped and reinitialised at any time.
You seem to be under the impression that you cannot reset your Secure Boot to setup mode. You can in the UEFI; doing so wipes any enrolled keys. This, of course, assumes you trust the UEFI (and hardware) vendors. But if you don't, you have much bigger problems anyway.
Is it possible someone will eventually build a system that doesn't allow this? Yes. Is this influenced in any way by features of Linux software? No.
It is certainly influenced by the features of Linux software. If Linux does not support this, that preserves a platform as an escape route where this is not possible, which substantially reduces the incentive to provide certain content and services (!) only when it is enabled.
Considering that (for example) your data on ChromeOS is automatically copied to a server run by Google, who are legally compelled to provide a copy to the government when subject to a FISA order, it is unclear what Poettering's threat model is here. Handwringing about secure boot is ludicrous when somebody already has a remote backdoor, which all of the cited operating systems do. Frankly, the assertion of such a naked counterfactual says a lot more about Poettering than it does about Linux security.
A rust-vmm-based environment that verifies/authenticates an image before running it? An immutable VM (no writable FS, root dropped after setting up the network, no devices or only a curated set), or a 'micro'-VM based on systemd? A VMM that captures the running kernel code/memory mapping before handing off to userland and periodically checks it hasn't changed? Anything else on the state of the art of immutable/integrity-checked VMs?
And once you remove the friction for requiring cryptographic verification of each component, all it takes is one well-resourced lobby to pass a law either banning user-controlled signing keys outright or relegating them to second-class status. All governments share broadly similar tendencies; the EU and UK govts have always coveted central control over user devices.
I don't like a few pieces, or Mr. Lennart's attitude to some bugs/obvious flaws, but it's by far better than old SysV or really any alternative we have.
Doing complex flows like "run an app to load keys from a remote server to unlock an encrypted partition" is far easier under systemd, and it has a dependency system robust enough to trigger that mount automatically if an app needing it starts.
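A minimal sketch of that kind of flow, for the curious (unit, path, and script names are all hypothetical; details vary by distro):

```
# /etc/systemd/system/fetch-data-key.service (hypothetical)
# Fetch a LUKS key from a remote server before the volume is unlocked;
# pairs with an /etc/crypttab line like:
#   data  /dev/sdb1  /run/keys/data.key  luks
[Unit]
DefaultDependencies=no
Wants=network-online.target
After=network-online.target
Before=systemd-cryptsetup@data.service

[Service]
Type=oneshot
# fetch-key is a stand-in for whatever retrieves the key material
ExecStart=/usr/local/bin/fetch-key /run/keys/data.key

[Install]
RequiredBy=systemd-cryptsetup@data.service
```

The dependency system then orders everything for you: starting anything that needs the mount pulls in the unlock, which pulls in the key fetch.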
There are genuine positive applications for remote attestation. E.g., if you maintain a set of servers, you can verify that they run the software they should be running (that the software is not compromised). Or if you are running something similar to Apple's Private Cloud Compute to run models, users can verify that it is running the privacy-preserving image it claims to be running.
There are also bad forms of remote attestation (like Google's variant that helps them let banks block you if you are running an alt-os). Those suck and should be rejected.
> Trusted boot is literally a form of DRM. A different one than remote attestation.
No, it's not. (And for that matter, neither is remote attestation)
You're conflating the technology with the use.
I believe that you have only thought about these technologies as they pertain to DRM, now I'm here to tell you there are other valid use cases.
Or maybe your definition of "DRM" is so broad that it includes me setting up my own trusted boot chain on my own hardware? I don't really think that's a productive definition.
There are no other things. The entire point of remote attestation is to manage (i.e. take away) the rights of the user who runs it, unless you own the entire chain, which you do not on any consumer device.
> Interesting. So what did the attestation say once I (random Internet user) updated the firmware to something I wrote or compiled from another source?
So your device had no user freedom. You're not doing much to refute the notion that these technologies are only useful to severely restrict user freedom for money.
> So your device had no user freedom. You're not doing much to refute the notion that these technologies are only useful to severely restrict user freedom for money.
Would love to hear more of your thoughts on how the users of the device I worked on had their freedom restricted!
I guess my company, the user of the device that I worked on, was being harmed by my company, the creator of the device that I worked on. It's too bad that my company chose to restrict the user's freedom in this way.
Who cares if the application of the device was an industrial control scenario where errors are practically guaranteed to result in the loss of human life, and as a result are incredibly high value targets ala Stuxnet.
No, the user's right to run any code trumps everything! Commercial device or not, ever sold outside of the company or not, terrorist firmware update or not - this right shall not be infringed.
I now recognize I have committed a great sin, and hope you will forgive me.
Hacker News has recently been dominated by conspiracy theorists who believe that all applications of cryptography are evil attempts by shadowy corporate overlords to dominate their use of computing.
Buddy, if I want encryption of my own I've got secure boot, LUKS, GPG, etc. With all of those, why would I need or even want remote attestation? The purpose of that is to assure corporations that their code is running on my computer without me being able to modify it. It's for DRM.
I am fairly confident that this company is going to assure corporations that their own code is running on their own computers (i.e. to secure datacenter workloads), to allow _you_ (or auditors) to verify that only _your_ asserted code is running on their rented computers (to secure cloud workloads), or to assure that the code running on _their_ computers is what they say it is, which is actually pretty cool since it lets you use Somebody Else's Computer with some assurance that they aren't spying on you (see: Apple Private Cloud Compute). Maybe they will also try to use this to attest "deep" embedded devices which already lock the user out, although even that seems less likely given that these devices frequently already have such systems in place.
IMO it's pretty clear that this is a server play because the only place where Linux has enough of a foothold to make client / end-user attestation financially interesting is Android, where it already exists. And to me the server play actually gives me more capabilities than I had: it lets me run my code on cloud provided machines and/or use cloud services with some level of assurance that the provider hasn't backdoored me and my systems haven't been compromised.
How can you be "pretty sure" they're going to develop precisely the technology needed to implement DRM but also will never use or allow it to be used by anybody but the lawful owners of the hardware? You can't.
It's like designing new kinds of nerve gas, "quite sure" that it will only ever be in the hands of good guys who aren't going to hurt people with it. That's powerful naïveté. Once you make it, you can't control who has it and what they use it for. There's no take-backsies, that's why it should never be created in the first place.
The technology needed to implement DRM has been there for 20+ years and has already evolved in the space where it makes sense from an "evil" standpoint (if you're on that particular side of the fence - Android client attestation), so someone implementing the flip side that might actually be useful doesn't particularly bother me. I remember the 1990s "cryptography is the weapon of evil" arguments too - it's funny how the tables have turned, but I still believe that in general these useful technologies can help people overall.
"The technology already exists," and also "there is unmet industrial market demand for the technology." Incoherent. If it already exists, as you say, then Lennart should fuck off and find something else to make.
> It's like designing new kinds of nerve gas, "quite sure" that it will only ever be in the hands of good guys who aren't going to hurt people with it. That's powerful naïveté. Once you make it, you can't control who has it and what they use it for. There's no take-backsies, that's why it should never be created in the first place.
Interesting choice of analogy, comparing something whose singular purpose is to destroy living things with a computing technology that enforces what code is run.
Can you not see there might be positive, non-destructive applications of the latter? Are you the type of person that argues cars shouldn't exist due to their negative impacts while ignoring all the positives?
I'm not seeing any big problems with the portraits.
Having said that, should this company not be successful, Mr Zbyszek Jędrzejewski-Szmek potentially has a glowing career ahead as an artist's model. Think Rembrandt sketches.
I look forward to something like ChromeOS that you can just install on any old refurbished laptop. But I think the money is in servers.
One of the most grating pain points of the early versions of systemd was a general lack of humility, some would say rank arrogance, displayed by the project lead and his orbiters. Today systemd is in a state of "not great, not terrible", but it was (and in some circles still is) notorious for breaking people's Linux installs and their workflows, and generally causing a lot of headaches. The systemd project leads responded mostly with Apple-style "you're holding it wrong" sneers.
It's not immediately clear to me what exactly Amutable will be implementing, but it smells a lot like some sort of DRM, and my immediate reaction is that this is something that Big Tech wants but that users don't.
My question is this: has Lennart's attitude changed, or can Linux users expect more of the same paternalism as some new technology is pushed on us whether we like it or not?
You won't believe how many hours we have lost troubleshooting SysV init and Upstart issues. systemd is so much better in every way, reliable parallel init with dependencies, proper handling of double forking, much easier to secure services (systemd-analyze security), proper timer handling (yay, no more cron), proper temporary file/directory handling, centralized logs, etc.
It improves on just about every level compared to what came before. And no, nothing is perfect, and you sometimes have to troubleshoot it.
About ten years ago I took a three day cross-country Amtrak trip where I wanted to work on some data analysis that used mysql for its backend. It was a great venue for that sort of work because the lack of train-internet was wonderful to keep me focused. The data I was working with was about 20GB of parking ticket data. The data took a while to process over SQL which gave me the chance to check out the world unfolding outside of the train.
At some point, mysql (well, mariadb) got into a weird state after an unclean shutdown that put it into recovery mode, where upon startup it had to do some disk-intensive cleanup. Thing is - systemd has a default setting (that's not readily documented, nor sufficiently described in its logs when the behavior happens) that kills the service startup after 30 seconds and tries again. On loop.
My troubleshooting attempts were unsuccessful. And since I had deleted the original CSV files to save disk space, I wasn't able to even poke at the data through Python or whatnot.
So instead of doing the analysis I wanted to do on the train, I had to wait until I got to the end of the line to fix it. Sure enough, it was some default 30s timeout that's not explicitly mentioned in the unit file, nor commented out the way many services do.
So, saying that things are "much better in every way" rings hollow, and is reminiscent of the systemd devs' dismissive/arrogant behavior that many folks are frustrated about.
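For anyone who hits the same trap: the knob is per-service and can be overridden with a drop-in. A sketch (the exact unit name varies by distro):

```
sudo systemctl edit mariadb.service
# then, in the drop-in that opens:
[Service]
# Let crash recovery take as long as it needs instead of being
# killed and restarted at the default start timeout
TimeoutStartSec=infinity
```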
I had a situation like that with undocumented behavior in systemd-tmpfiles. I wanted it to clean up a directory in /var/tmp/ occasionally. The automation using that directory kept breaking, however, because instead of finding either a whole intact git repo to update or a deleted repo, it found only a scattering of files that were root-owned with read-only permissions. There was yet another undocumented feature in systemd-tmpfiles where it would ignore root-owned, read-only files regardless of explicit configuration telling it to clean up the contents of those directories. Eventually this feature was quietly removed:
That was far from the only time that the systemd developers decided to just break norms or do weird things because they felt like it, and then poorly communicate that change. Change itself is fine, it's how we progress. But part of that arrogance that you mentioned was always framing people who didn't like capricious or poorly communicated changes as being against progress, and that's always been the most annoying part of the whole thing.
Speaking of systemd-tmpfiles, wasn't there an issue where asking it to clean all temp files would also rm -rf /home and this was closed as wontfix, intended behavior?
How can I cancel a systemd startup task that blocks the login prompt? And how is forcing me to wait for DHCP on a network interface that isn't even plugged in a better experience?
Your distribution has configured your GDM or Getty to have some dependency on something that ultimately waits on dhcpcd/network-online.target.
It’s not really the fault of systemd; it just enables new possibilities that were previously difficult/impossible and now the usage of said possibilities is surfacing problems.
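If you want to find and break that chain on your own machine, something like this usually does it (a sketch; which wait-online service you have depends on your network manager):

```
# See what actually pulls in the wait:
systemctl list-dependencies --reverse network-online.target

# Then either drop the wait entirely...
sudo systemctl disable systemd-networkd-wait-online.service   # networkd
sudo systemctl disable NetworkManager-wait-online.service     # NetworkManager

# ...or, with networkd, wait for any one interface instead of all of them
# (drop-in for systemd-networkd-wait-online.service):
#   [Service]
#   ExecStart=
#   ExecStart=/usr/lib/systemd/systemd-networkd-wait-online --any
```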
It is the fault of systemd that there's no interactive control.
On other inits, I can hit ctrl-C to break out of a poorly configured setup. Yes, it's more difficult when there's potentially parallelism. But systemd is not uniformly better than everything else when it lacks interactivity.
And it might not be better than everything else if common distributions set it up wrong because it's difficult to set it up right. If we're willing to discount problems related to one init system because the distribution is holding it wrong, then why don't we blame problems with other init systems on distributions or applications, too? There's no need to restart crashing applications if applications don't crash, etc.
There’s a reason why Devuan (a non systemd Debian) exists. Don’t want to get into a massive argument, but there are legitimate reasons for some to go in a different direction.
Gentoo doesn't "exist" because it is necessary to have an alternative to systemd. Gentoo is simply about choice and works with both openrc and systemd. It supported other inits to some degree as well im the past.
After over a decade of Debian, when I upgraded my PC, I tried every big systemd-based distro, including openSUSE, which I wholly loathed. I finally decided on Void and feel at home like I did 20+ years ago when I began.
There are serious problems with the systemd paradigm, most of which I couldn't argue for or against. But at least in Void, I can remove NetworkManager altogether, use cron as I always have, and generally remain free to do as I please - until eventually every package there is has systemd dependencies, which seems frightfully plausible at this pace.
Void is as good as I could have wanted. If that ever goes, I guess it's either BSD or a cave somewhere.
I'm glad to see the terse questions here. They're well warranted.
systemd (via systemd-cron, on some distros) parses your crontab and runs the jobs inside on its own terms.
Of course you can run cron as well and run all your jobs twice in two different ways, but that's only pedantically possible, as it's a completely useless way to do things.
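For reference, the timer-unit equivalent of a single cron line is two files (names made up):

```
# cron:  */15 * * * *  /usr/local/bin/sync-stuff

# sync-stuff.service
[Service]
Type=oneshot
ExecStart=/usr/local/bin/sync-stuff

# sync-stuff.timer
[Timer]
OnCalendar=*:0/15
Persistent=true

[Install]
WantedBy=timers.target
```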
Not stopping. Just clashing with that and a hundred other things that I never wanted managed by one guy: systemd. systemd.timer, systemd.service - yes, trivial, but I don't catalog everything that bothers me about systemd; I just stay away from it. There are plenty of better examples. So wherever I wrote 'stop', it should read 'hinder'.
> Void is as good as I could have wanted. If that ever goes, I guess it's either BSD or a cave somewhere.
If systemd-less Linux ever goes, there are indeed still the BSDs. But I thought long and hard about this and already did some testing: I used to run Xen back in the early hardware-virt days, and nowadays I run Proxmox (still, sadly, systemd-based).
A hypervisor with a VM and GPU passthrough to the VM is at least something too: it's going to be a long, long while before the people who want to take away our ability to control our machines can prevent us from running a minimal hypervisor and then the "real" OS in a VM controlled by the hypervisor.
I did GPU passthrough tests and everything works just fine, be it Linux guests (which I use) or Windows guests (which I don't use).
My path to dodge the cave you're talking about is going to involve a hypervisor (at the moment I'm looking at FreeBSD's bhyve) and then a VM running systemd-less Linux.
And seeing that, today, we can run just about every old system under the sun in a VM, I take it we'll all be long dead before evil people manage to prevent us from running the Linux we want, the way we want.
You're not alone. And we're not alone.
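In case it helps anyone plotting the same escape route: the bhyve side looks roughly like this. This is from memory and untested, and the PCI address is made up, so treat it as a sketch only:

```
# /boot/loader.conf on the FreeBSD host:
#   vmm_load="YES"
#   pptdevs="2/0/0"     # PCI device to hand to the guest

bhyve -c 4 -m 8G -S -H \
  -s 0,hostbridge -s 31,lpc \
  -s 2,passthru,2/0/0 \
  -s 3,virtio-blk,/vm/linux.img \
  -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
  -l com1,stdio \
  linuxvm
```

-S wires guest memory, which PCI passthrough requires.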
I simply cannot stand the insufferable arrogance of Agent Poettering. Especially given the kitchen sink that systemd is (systemd isn't exactly a home run, and many are realizing that fact now).
Here are a few examples of problems systemd has caused me (workarounds I eventually found for some of them are sketched after the list):
System shutdown/reboot is now unreliable. Sometimes it will be just as quick as it was before systemd arrived, but other times, systemd will decide that something isn't to its liking, and block shutdown for somewhere between 30 seconds and 10 minutes, waiting for something that will never happen. The thing in question might be different from one session to the next, and from one systemd version to the next; I can spend hours or days tracking down the process/mount/service in question and finding a workaround, only to have systemd hang on something else the next day. It offers no manual skip option, so unless I happen to be working on a host with systemd's timeouts reconfigured to reduce this problem, I'm stuck with either forcing a power-off or having my time wasted.
Something about systemd's meddling with cgroups broke the lxc control commands a few years back. To work around the problem, I have to replace every such command I use with something like `systemd-run --quiet --user --scope --property=Delegate=yes <command>`. That's a PITA that I'm unlikely to ever remember (or want to type) so I effectively cannot manage containers interactively without helper scripts any more. It's also a new systemd dependency, so those helper scripts now also need checks for cgroup version and systemd presence, and a different code path depending on the result. Making matters worse, that systemd-run command occasionally fails even when I do everything "right". What was once simple and easy is now complex and unreliable.
At some point, Lennart unilaterally decided that all machines accessed over a network must have a domain name. Subsequently, every machine running a distro that had migrated to systemd-resolved was suddenly unable to resolve its hostname-only peers on the LAN, despite the DNS server handling them just fine. Finding the problem, figuring out the cause, and reconfiguring around it wasn't the end of the world, but it did waste more of my time. Repeating that experience once or twice more when systemd behavior changed again and again eventually drove me to a policy of ripping out systemd-resolved entirely on any new installation. (Which, of course, takes more time.) I think this behavior may have been rolled back by now, but sadly, I'll never get my time back.
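For anyone who hits the same walls, here are the knobs and checks I eventually ended up with for the three stories above. These are sketches; option names may vary across systemd versions:

```
# 1. Cap how long systemd will block at shutdown
#    (/etc/systemd/system.conf, [Manager] section):
#      DefaultTimeoutStopSec=15s

# 2. Helper-script logic for lxc commands, only engaging systemd-run
#    when both systemd and cgroup v2 are actually present:
if [ -d /run/systemd/system ] && \
   [ "$(stat -fc %T /sys/fs/cgroup)" = "cgroup2fs" ]; then
    exec systemd-run --quiet --user --scope --property=Delegate=yes "$@"
else
    exec "$@"
fi

# 3. Let systemd-resolved resolve bare single-label hostnames via DNS
#    (/etc/systemd/resolved.conf, [Resolve] section):
#      ResolveUnicastSingleLabel=yes
```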
There are more examples, but I'm tired of re-living them and don't really want to write a book. I hope these few are enough to convey my point:
Systemd has been a net negative in my experience. It has made my life markedly worse, without bringing anything I needed. Based on conversations, comments, and bug reports I've seen over the years, I get the impression that many others have had a similar experience, but don't bother speaking up about it any more, because they're tired of being dismissed, ignored, or shouted down, just as I am.
I would welcome a reliable, minimal, non-invasive, dependency-based init. Systemd is not it.
The problem is not systemd vs SysV et al, the problem is systemd spreading like a cancer throughout the entire operating system.
Also trying to use systemd with podman is frustrating as hell. You just cannot run a system service using podman as a non-root user and have it work correctly.
Quadlet actually solves this. It's the newer way to define containers for systemd and handles the rootless user case properly. I migrated my services to it recently and it's much more robust than the old generate scripts.
Quadlets are great, but running podman via systemd as a non-root user worked perfectly well before quadlets, and I have no idea what your parent is talking about (I'm currently in the process of converting my home services from rootless podman under systemd to quadlets).
Fair, it worked, but podman generate systemd is deprecated now. I found the generated unit files pretty brittle to maintain compared to just having a declarative config that handles the lifecycle.
I agree 100%. I was stuck without quadlets on the previous Debian stable, so I had to work with `podman generate systemd`, but quadlets are undoubtedly better. I was looking forward to upgrading Debian just for that, and now that I have, I'm really happy to migrate. Custom container image management especially is so much smoother.
Could you give an example system-level quadlet that accepts connections on a low port, like 80, but runs the actual container as a non-root user (and plays nice with systemd, no force kill after timeout to stop, no reporting as failed for a successful stop)?
My understanding is quadlet does not solve this, and my options are calling "systemctl --user" or "--userns auto". I would love to be wrong here.
As an alternative solution to the sibling comment, I run everything rootless in systemd --user, so my services don't have access to privileged ports, and use firewall rules to redirect the external interface's low ports to local high ports (that sounds annoying, but in practice I only redirect a single port - 443 - to traefik, and then use it to route to the right container service depending on the domain).
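Concretely, the redirect is a couple of nft rules (the interface name and ports here are from my setup; adjust to taste):

```
nft add table ip nat
nft 'add chain ip nat prerouting { type nat hook prerouting priority dstnat; }'
nft add rule ip nat prerouting iifname "eth0" tcp dport 443 redirect to :8443
```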
I solved the port 80 issue by adding AmbientCapabilities=CAP_NET_BIND_SERVICE to the Service section of the unit file. That lets you bind privileged ports while still defining a User= line to run non-root. The lifecycle management seems solid in my experience, no force kills required.
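Roughly what that looks like as a quadlet, in case it helps (untested sketch; the file name, image, and user are made up):

```
# /etc/containers/systemd/web.container
[Unit]
Description=Web container on port 80, run as a non-root user

[Container]
Image=docker.io/library/caddy:latest
PublishPort=80:80

[Service]
# Run podman as an unprivileged user, keeping only the capability
# needed to bind the low port
User=svc-web
AmbientCapabilities=CAP_NET_BIND_SERVICE

[Install]
WantedBy=multi-user.target
```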
Trusted computing and remote attestation is like two people who want to have sex requiring clean STD tests first. Either party can refuse and thus no sex will happen. A bank trusting a random rooted smartphone is like having sex with a prostitute with no condom. The anti-attestation position is essentially "I have a right to connect to your service with an unverified system, and refusing me is oppression." Translate that to the STD context and it sounds absurd - "I have a right to have sex with you without testing, and requiring tests violates my bodily autonomy."
You're free to root your phone. You're free to run whatever you want. You're just not entitled to have third parties trust that device with their systems and money. Same as you're free to decline STD testing - you just don't get to then demand unprotected sex from partners who require it.
You are trying to portray it as an exchange between equal parties, which it isn't. I am totally entitled not to have to use a third-party-controlled device to access government services. Or my bank account.
Remote attestation is just fancy digital signatures with hardware-protected secret keys. Are you freaking out about digital signatures used anywhere else?
It absolutely does. Emphasis on use. The last thing I need is my bank requiring me to use a Poettering-certified distribution because anything else is "insecure".
You are acting very entitled thinking you can dictate the conditions under which you can connect to other people's computers. This is a "it takes two to tango" situation. I'm sure YOU would refuse to connect to any bank that refuses to use TLS.
So both consent to sex and now one thinks they're entitled to marriage. That's where this inevitably leads, user/customer lock-in and control.
While the bank use case makes a compelling argument, device attestation won't be used for just banks. It's going to be every god damned thing on the internet. Why? Because why the hell not: it further pushes the costs of doing business for banks/MSPs/email providers/cloud services onto the customer, and assigns them more of the liability.
It will also further the digital divide as there will be zero support for devices that fail attestation at any service requiring it. I used to think that the friction against this technology was overblown, but over the last eighteen months I've come to the conclusion that it is going to be a horrible privacy sucking nightmare wrapped in the gold foil of security.
I've been involved in tech a long, long time. The first thing I'm going to do when I retire is start chucking devices. I'm checking-out, none of this is proving to be worth the financial and privacy costs.
"It's going to be every god damned thing on the internet. Why? Because why the hell not"
This is not a persuasive argument.
You are also ignoring the fact that YOU can use remote attestation to verify remote computers are running what they say they are.
"I've been involved in tech a long, long time. The first thing I'm going to do when I retire is start chucking devices. I'm checking-out, none of this is proving to be worth the financial and privacy costs."
You actually sound like you are having a nervous breakdown. Perhaps you should take a vacation.
Can it set terms on my religious and political views? I'm not speaking about race and sex, which you cannot choose (OK, sex you can in some jurisdictions, and there is a difference between sex and gender; please don't be nitpicky here), but about things I can choose, just as I can choose the hardware and software I run.
If there were a real, effective market (which there is not in any country on Earth, especially for banks), you could say: vote with your money, choose a bank which suits you. But that is impossible even with a bakery, let alone with banks in a market which is strictly regulated (in part as a result of lobbying by the established institutions, to protect themselves!).
So, on one hand, I must use banks (I cannot pay for many things in cash; here where I live most bars and many shops don't accept cash, for example, and that is a result of government policy and regulation), and on the other hand banks are not treated as being as essential as access to air and water, so they can dictate any terms they want.
You DO understand you can own more than one phone, right? Just use one that isn't rooted as a dedicated banking device and the rooted phone for whatever else you need. You are making life far too hard.
The typical HN rage-posting about DRM aside, there's no reason that remote attestation can't be used in the opposite direction: to assert that a server is running only the exact code stack it claims to be, avoiding backdoors. This can even be used with fully open-source software, creating an opportunity for OSS cloud-hosted services which can guarantee that the OSS and the build running on the server match. This is a really cool opportunity for privacy advocates if leveraged correctly - the idea could be used to build something like Apple's Private Cloud Compute but even more open.
Like evil maid attacks, this is a vanishingly rare scenario brought out to try to justify technology that will overwhelmingly be used to restrict computing freedom.
In addition, the benefit is a bit ridiculous, like that of DRM itself. Even if it worked, your "trusted software" is literally going to be running in an office full of the most advanced crackers money can buy, with every incentive to exploit your scheme and not publish the fact that they did. The attack surface of the entire thing is so large it boggles the mind that there are people who believe in the "secure computing cloud" scenario.
WHAT is the usage and benefit for private users? This is always neglected.
As a private person, avoiding backdoors is something you can only solve by having the hardware at your place, because hardware ALWAYS can have backdoors, and hardware vendors do not fix their shit.
From my point of view it ONLY gives control and possibilities to large organizations like governments and companies, which in turn use it to control citizens.
> but considering Windows requirements drive the PC spec, this capability can be used to force Linux distributions in bad ways
What do you mean by this?
Is the concern that systemd is suddenly going to require that users enable some kind of attestation functionality? That making attestation possible or easier is going to cause third parties to start requiring it for client machines running Linux? This doesn't even really seem to be a goal; there's not really money to be made there.
As far as I can tell the sales pitch here is literally "we make it so you can assure the machines running in your datacenter are doing what they say they are," which seems pretty nice to me, and the perversions of this to erode user rights are either just as likely as they ever were or incredibly strange edge cases.
Microsoft has a "minimum set of requirements" document about "Designed for Windows" PCs. You can't sell a machine with Windows or tell it's Windows compatible without complying with that checklist.
So, every PC sold to consumers is sanctioned by Microsoft. This list contains Secure Boot and TPM based requirements, too.
If Microsoft decides to eliminate enrollment of user keys and Secure Boot toggle, they can revoke current signing keys for "shims" and force Linux distributions to go full immutable to "sign" their bootloaders so they can boot.
As said above, it's not something Amutable can control, but enable by proxy and by accident.
Look, I work in a datacenter, with a sizeable fleet. Being able to verify that fleet is desirable for some kinds of operations, I understand that. On the other hand, like every double-edged sword, this can cut both ways.
I don't see how this relates in any way to Amutable and it has been a "concern" for 20+ years (which has never come to pass). How do you think this relates at all?
Before this point in time, Linux never supported being an immutable image. Neither the filesystems nor the mechanism to lock it down were there. The best you could do was TiVoization, but that would be too obvious and wouldn't fly.
Now we have immutable distributions (SuSE, Fedora, NixOS). We have the infrastructure for attestation (systemd's UKI, image-based boot, and other immutability features), TPMs, and, controversially, uutils (which is MIT-licensed and has the stated goal of replacing the entire GNU userspace).
You can build an immutable and adversarial userspace where you don't have to share the source, and require every boot and application call to attest. The theoretical thickness of the wall is much greater, and that theoretical state is much easier to achieve.
20 years ago the only barrier was booting. After that everything was free. Now it's possible to boot into a prison where your every ls and cd command can be attested.
> Before this point in time, Linux never supported being an immutable image.
What? As just one example, dm-verity was merged into the mainline kernel 13 years ago. I built immutable, verified Linux systems at least ten years ago, and it was considered old hat by the time I got there.
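For reference, the basic dm-verity flow has been two commands for ages (paths here are illustrative):

```
# Build the Merkle hash tree over the image; prints the root hash,
# which is the only thing you need to protect/sign
veritysetup format rootfs.img rootfs.hashtree

# Map a verified, read-only device from the image
veritysetup open rootfs.img vroot rootfs.hashtree <root-hash>
# reads from /dev/mapper/vroot fail if any block was tampered with
```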
> The best you could do was, TiVoization, but that would be too obvious and won't fly.
What does this even mean? "TiVoization" is the slang for "you get a device that runs Linux, you get the GPL sources, but you can't flash your own image on the device because you don't own the keys." That's the exact same problem now as it was then, and just as "obvious".
I understand the fears that come from client attestation (certainly, the way it has been used on Android has been majorly detrimental to non-Google ROMs), but, to the Android point, the groundwork has always been there.
I'd be very annoyed if someone showed up and said "we're making a Linux-based browser attestation system that your bank is going to partner on," but nobody has even gone this direction on Windows yet.
> Oh, Rust is memory safe. Good luck finding holes.
I break secure boot systems for a living and I'd say _maybe_ half of the bugs I find relate to memory safety in a way Rust would fix. A lot of systems already use tools which provide very similar safety guarantees to Rust for single threaded code. Systems are definitely getting more secure and I do worry about impenetrable fortresses appearing in the near future, but making this argument kind of undermines credibility in this space IMO.
Yes, I reference Android client attestation in my comments in this thread frequently. I actually see this company as likely to be the flip side of the “bad” client attestation coin; server attestation actually provides a lot of nice properties to end users and providers who wish to provide secure solutions with very limited user downside.
It won't remain "server" attestation. It will become "client" attestation, with the end result that you won't own your own machine anymore, you'll just be paying for a client device upon which you won't control the hardware or software anymore. See any mobile phone at all, anymore.
I don’t see anyone investing in this for general purpose desktop Linux in the state desktop Linux exists today; the harbinger of the Desktop Linux Apocalypse would be web-based Windows attestation (just as Android attestation is eroding alt-OSes) which feels like a viable “threat” but thankfully doesn’t seem to be happening just yet.
I do think this approach might get used for client attestation in gaming, which I actually don't mind; renting/non-owning a client that lets me play against non-cheaters is a pretty good gaming outcome, and needing a secure configuration to run games seems like a fine trade to me (vs., for example, needing a secure desktop configuration to access my bank, which would be irksome).
The idea is that by protecting the boot path you build a platform from which you can attest the content of the application. The goal here is usually that a cloud provider can say “this cryptographic material confirms that we are running the application you sent us and nothing else” or “the cloud application you logged in to matched the one that was audited, 1:1 on disk.”
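In TPM terms, that usually boils down to a signed quote over the PCRs plus a fresh nonce from the verifier. With tpm2-tools it looks roughly like this (key handles and file names are made up):

```
# Prover: sign the current PCR values together with the verifier's nonce
tpm2_quote -c ak.ctx -l sha256:0,4,7,11 -q "$NONCE" \
           -m quote.msg -s quote.sig -o quote.pcrs

# Verifier: check signature, nonce freshness, and the PCR digest
tpm2_checkquote -u ak.pub -m quote.msg -s quote.sig \
                -f quote.pcrs -q "$NONCE" -g sha256
```

If the PCR digest matches the values expected for the audited image, the verifier knows (up to trust in the TPM and the measured boot chain) what is actually running.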
"We are confident we have a very robust path to revenue."
I take it that you are not at this stage able to provide details of the nature of the path to revenue. On what kind of timescale do you envisage being able to disclose your revenue stream/subscribers/investors?
As I understand it, the main customers for this sort of thing are companies making Tivo-style products - where they want to use Linux in their product, but they want to lock it down so it can't be modified by the device owner.
This can be pretty profitable: once your customers have rolled out a fleet of hardware locked down to only run kernels you've signed, they're tied to you for every future update.
Ever seen a default Ubuntu splash screen/wallpaper on a train, coffee machine, airport terminal kiosk, bus, or other big piece of slow-moving, appliance-y thing?
That is why Ubuntu Core (and similar) exist. More secure, better update strategy, lower net cost. I don't agree with the licensing or pricing model, but there are perfectly good technical reasons to use it.
Not if the end user is an operator of safety-critical equipment, such as rail or pro audio or any of a number of industries where stability and reproducibility are essential to the product.
Appreciate the clarification, but this actually raises more questions than it answers.
A "robust path to revenue" plus a Linux-based OS and a strong emphasis on EU / German positioning immediately triggers some concern. We've seen this pattern before: wrap a commercially motivated control layer in the language of sovereignty, security, or European tech independence, and hope that policymakers, enterprises, and users don't look too closely at the tradeoffs.
Europe absolutely needs stronger participation in foundational tech, but that shouldn't mean recreating the same centralized trust and control models that already failed elsewhere, just with an EU flag on top. 'European sovereignty' is not inherently better if it still results in third-party gatekeepers deciding what hardware, kernels, or systems are "trusted."
Given Europe's history with regulation-heavy, vendor-driven solutions, it's fair to ask:
Who ultimately controls the trust roots?
Who decides policy when commercial or political pressure appears?
What happens when user interests diverge from business or state interests?
Linux succeeded precisely because it avoided these dynamics. Attestation mechanisms that are tightly coupled to revenue models and geopolitical branding risk undermining that success, regardless of whether the company is based in Silicon Valley or Berlin.
Hopefully this is genuinely about user-verifiable security and not another marketing-driven attempt to position control as sovereignty. Healthy skepticism seems warranted until the governance and trust model are made very explicit.
That's a proxy metric for what we really care about: acceptance of differences, tolerance of others, diversity of perspectives, etc. In principle, you can achieve these goals with a team whose members are all one ethnicity and gender; it's just that a fair selection process won't produce such a team often. And, as it turns out, optimising for the "people who look different" proxy metric doesn't do a terrible job of optimising for the true metric, provided the "cultural fit"-type selection effects are weak enough.
The systemd crowd are perhaps worse than GNOME, as regards "my way or the highway", and designing systems that are fundamentally inadequate for the general use-case. I don't think ethnicity or gender diversity quotas would substantially improve their decision-making: all it would really achieve is to make it harder to spot the homogeneity in a photograph. A truly diverse team wouldn't make the decisions they make.
This is relevant. Every project he's worked on has been a dumpster fire. systemd sucks. PulseAudio sucks. GNOME sucks. Must the GP list out all the ways in which they suck to make it a more objective attack?
This is not about the person being attacked, it's about what this kind of thing does to us as a community. It's not what the site is for, and destroys what it is for.
My comment was not a personal attack. But I can rephrase it if you want it more in the spirit of the guidelines. Here we go:
I'm interested in what Amutable is building, but I'm personally uneasy about Lennart Poettering being involved. This isn't about denying his technical ability or past impact. My concern is more about the social/maintenance dynamics that have repeatedly shown up around some of the projects he's led in the Linux ecosystem - highly centralizing designs, big changes quickly landing in core technology, and the kind of communication/governance style that at times left downstream maintainers and parts of the community feeling steamrolled rather than brought along. I've watched enough of those cycles to be wary when the same leadership style shows up again, especially in something that might become infrastructure people depend on.
To keep this constructive: for folks who've followed his work more closely than I have, do you think those past community frictions were mostly a function of the environment (big distro politics, legacy constraints, etc), or are they intrinsic to how he approaches projects? And for people evaluating Amutable today, what signals would you look for to distinguish "strong technical leadership" from "future maintenance and ecosystem headaches" ?
If anyone from the company is reading, I'd be genuinely reassured by specifics like:
- a clear governance/decision process (who can say "no", how major changes are reviewed)
- a commitment to compatibility and migration paths (not just "it's better, switch")
- transparent security and disclosure practices
- a plan for collaboration with downstream parties and competitors (standards, APIs, interop)
I realize this is partly subjective. I’m posting because I expect I'm not the only one weighing "technical upside" against "community cost," and I'd like to hear how others are thinking about it.
If you don't think that's a community opinion, it's at least an AI's opinion, since all I prompted it with was "rewrite my comment to follow the HN guidelines"
Will it be backdoorable, the way systemd-enabled distros nearly got a backdoored SSH? Non-systemd distros weren't affected.
Why should we trust microsofties to produce something secure and non-backdoored?
And, lastly, why should Linux's security be tied to a private company? Oooh, but it's of course not about security: it's about things like DRM.
I hope Linus doesn't get blinded here: systemd managed to get PID 1 on many distros, but thankfully they haven't managed, yet, to control the kernel. I hope this project ain't the wedge that finally lets them meddle with the kernel.
Currently I'm doing:
Proxmox / systemd-less VMs / containers
But Proxmox is Debian-based, and Debian really drank too much of the systemd kool-aid.
So my plan is:
FreeBSD / bhyve hypervisor / systemd-less Linux VMs / containers
And then I'll be, at long last, systemd-free again.
This project is an attack on general-purpose computing.
People demonize attestation. They should keep in mind that far from enslaving users, attestation actually enables some interesting, user-beneficial software shapes that wouldn't be possible otherwise. Hear me out.
Imagine you're using a program hosted on some cloud service S. You send packets over the network; gears churn; you get some results back. What are the problems with such a service? You have no idea what S is doing with your data. You incur latency, transmission time, and complexity costs using S remotely. You pay, one way or another, for the infrastructure running S. You can't use S offline.
Now imagine instead of S running on somebody else's computer over a network, you run S on your computer instead. Now, you can interact with S with zero latency, don't have to pay for S's infrastructure, and you can supervise S's interaction with the outside world.
But why would the author of S agree to let you run it? S might contain secrets. S might enforce business rules S's author is afraid you'll break. Ordinarily, S's authors wouldn't consider shipping you S instead of S's outputs.
However --- if S's author could run S on your computer in such a way that he could prove you haven't tampered with S or haven't observed its secrets, he can let you run S on your computer without giving up control over S. Attestation, secure enclaves, and other technologies create ways to distribute software that otherwise wouldn't exist. How many things are in the cloud solely to enforce access control? What if they didn't have to be?
Sure, in this deployment model, just like in the cloud world, you wouldn't be able to run a custom S: but so what? You don't get to run your custom S either way, and this way, relative to cloud deployment, you get better performance and even a little bit more control.
Also, the same thing works in reverse. You get to run your code remotely in such a way that you can trust its remote execution just as much as you trust that code executing on your own machine. There are tons of applications for this capability that we're not even imagining because, since the dawn of time, we've equated locality with trust, and we can now, in principle, decouple the two.
Yes, bad actors can use attestation technology to do all sorts of user-hostile things. You can wield any sufficiently useful tool in a harmful way: it's the utility itself that creates the potential for harm. This potential shouldn't prevent our inventing new kinds of tool.
> People demonize attestation. They should keep in mind that far from enslaving users, attestation actually enables some interesting, user-beneficial software shapes that wouldn't be possible otherwise. Hear me out.
But it won't be used like that. It will be used to take user freedoms out.
> But why would the author of S agree to let you run it? S might contain secrets. S might enforce business rules S's author is afraid you'll break. Ordinarily, S's authors wouldn't consider shipping you S instead of S's outputs.
That use case you're describing is already there and is currently being handled with DRM, either in the browser or in the app itself.
You are right that it will make this easier for the app author to do, and in theory it is still a better option for video games than kernel anti-cheat. But it still limits user freedoms.
> Yes, bad actors can use attestation technology to do all sorts of user-hostile things. You can wield any sufficiently useful tool in a harmful way: it's the utility itself that creates the potential for harm. This potential shouldn't prevent our inventing new kinds of tool.
The majority of uses will be user-hostile, because those are the only cases where someone will decide to fund it.
> Attestation, secure enclaves, and other technologies create ways to distribute software that otherwise wouldn't exist. How many things are in the cloud solely to enforce access control? What if they didn't have to be?
To be honest, mainly companies need that; personal users do not. And additionally, companies are NOT restrained by governments from exploiting customers as much as possible.
So... I also see it as enslaving users. And tell me: where does this actually give PRIVATE persons, NOT companies, a net benefit?
> This potential shouldn't prevent our inventing new kinds of tool.
Why do I see someone who wants to build an atomic bomb for shits and giggles using this argument, too? As hyperbolic as my example is, the argument given here is not good either.
The immutable Linux people build tools without building good tools that actually make it easier for private people at home to adapt an immutable Linux to THEIR liking.
The atomic bomb is a good example of what I'm talking about. The reason we haven't had a world war in 80 years is the atomic bomb. Far from being an instrument of misery, it's given us an age of unprecedented peace and prosperity. Plus, all the anti-nuclear activism in the world hasn't come one step closer to banishing nuclear weapons from the earth.
In my personal philosophy, it is never bad to develop a new technology.
I will put some trust in these people if, at the minimum, they make this a pure nonprofit organization, building IN measures to ensure that this will not be pushed for the most obvious use case, which is to fight user freedom. This shouldn't be some afterthought.
"Trust us" is never a good idea with profit seeking founders. Especially ones who come from a culture that generally hates the hacker spirit and general computing.
You basically wrote a whole narrative of things that could be. But the team is not even willing to make promises as big as yours. Their answers were essentially just "trust us, we're cool guys" and "don't worry, money will work out", wrapped in average PR speak.
"Someone fell for a scam and drained their bank account" isn't a valid reason to start locking down everyone's devices.
If the bank can't be bothered to either implement support for U2F or else clearly articulate why U2F isn't sufficient then they don't have a valid position. Anything else they say on the matter should be disregarded.
In this way, the banks are asserting control over your device. It's beyond authentication; they are saying, "If you have full control over your device, you cannot access our services."
I'll agree with you that they don't have a valid position, because I can just as easily open up a web browser on said rooted device and access everything just fine via the web. But how long until services move away from web interfaces in favor of apps, to assert more control?
Keep in mind that the businesses pushing this stuff still don't support U2F by and large. When I can go down in person to enroll a hardware token I might maybe consider listening to what they have to say on the subject. Maybe. (But probably not.)
Citation needed. The fact that the infosec industry just keeps growing YoY kinda suggests that there are in fact issues that are more expensive than paying the security companies.
This entire shit storm is 100% driven by the music, film, and tv industries, who are desperate to eke out a few more millions in profit from the latest Marvel snoozefest (or whatever), and who tried to argue with a straight face that they were owed more than triple the entire global GDP [0].
These people are the enemy. They do not care about computing freedom. They don't care about you or me at all. They only care about increasing profits, and they're using the threat of locking people out of Netflix via HDCP and TPM in order to force remote attestation on everyone.
I don't know what the average age on HN is, but I came up in the 90s when "fuck corporations" and "information wants to be free" still formed a large part of the zeitgeist, and it's absolutely infuriating to see people like TFfounders actively building systems that will measurably make things worse for everyone except the C-suite class. So much for "hacker spirit".
[0] https://globalnews.ca/news/11026906/music-industry-limewire-...
The integrity of a system being verified/verifiable doesn't imply that the owner of the system doesn't get to control it.
This sort of e2e attestation seems really useful for enterprise or public infrastructure. Like, it'd be great to know that the ATMs or transit systems in my city had this level of system integrity.
Your argument correctly points out that attestation tech can be used to restrict software freedom, but it also assumes that this company is actively pursuing those use cases. I don't think that is a given.
At the end of the day, as long as the owner of the hardware gets to control the keys, this seems like fantastic tech.
Every time you perform an attestation the public key (and certificate) is divulged which makes it a unique identifier, and one that can be traced to the point of sale - and when buying a used device, a point of resale as the new owner can be linked to the old one.
They make an effort to increase privacy by using intermediaries to convert the identifier to an ephemeral one, and use the ephemeral identifier as the attestation key.
This does not change the fact that if the party you are attesting to gets together with the intermediary they will unmask you. If they log the attestations and the EK->AIK conversions, the database can be used to unmask you in the future.
Also note that nothing can prevent you from forging attestations if you source a private-public key pair and a valid certificate, either by extracting them from a compromised device or with help from an insider at the factory. DRM systems tend to be separate from the remote attestation ones but the principles are virtually identical. Some pirate content producers do their deeds with compromised DRM private keys.
People dislike cash for some strange reason, then complain about tracking. People also hand out their mobile number like candy. Same issue.
Which does exactly what I said. Full zero knowledge attestation isn't practical as a single compromised key would give rise to a service that would serve everyone.
AFAIK no one uses blind signatures. It would enable the formation of commercial attestation farms. https://educatedguesswork.org/posts/private-access-tokens/
It seems Apple has a service, with an easily rotated key and an agreement with providers. If the key _Apple_ uses is compromised, they can rotate it.
BUT, Apple knows _EXACTLY_ who I am. I attest to them using my hardware, so they know _EXACTLY_ which hardware I'm using. They can ban me or my hardware. Then their centralized service gives me a blind token. But Apple may still know exactly who owns which blind tokens.
However, I cannot generate blind tokens on my own. I _MUST_ talk to some centralized service that can identify me. If that is not the case, then any single compromised device can generate infinite blind tokens, rendering all the tokens useless.
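To make the shape of that concrete, here is a toy sketch (Python, textbook RSA with deliberately tiny parameters, not secure for anything) of the blind-token flow: the issuer signs a value it cannot see, the client unblinds it into an ordinary signature, and the whole scheme collapses if the signing key leaks, which is exactly why issuance has to run through one identity-checking service.

    import hashlib, secrets
    from math import gcd

    # Toy issuer key (insecure toy primes; real deployments use proper RSA).
    p, q, e = 1000003, 1000033, 65537
    n, phi = p * q, (p - 1) * (q - 1)
    d = pow(e, -1, phi)  # the secret that must never leak

    m = int.from_bytes(hashlib.sha256(b"one anonymous token").digest(), "big") % n

    # Client blinds the message so the issuer never sees what it signs.
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            break
    blinded = (m * pow(r, e, n)) % n

    # Issuer signs blindly -- this is the step where it can demand device identity.
    blind_sig = pow(blinded, d, n)

    # Client unblinds; the result verifies as a normal RSA signature on m.
    sig = (blind_sig * pow(r, -1, n)) % n
    assert pow(sig, e, n) == m

The issuer learns who asked for a token but not which token they got; anyone holding d, though, can mint unlimited valid tokens, which is the attestation-farm problem described above.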
It would be sufficient to be able to freely choose who you trust as proxy for your attestations *and* the ability to modify that choice at any point later (i.e. there should be some interoperability). That can be your Google/Apple/Samsung ecosystem, your local government, a company operating in whatever jurisdiction you are comfortable with, etc.
I.e. from when they buy from a trusted source and init the device.
Or why buy used devices if this is a risk?
Hardware fingerprinting in general is a difficult thing to protect against - and in an active probing scenario where two apps try to determine if they are on the same device it's all but impossible. But having a tattletale chip in your CPU an API call away doesn't make the problem easier. Especially when it squawks manufacturer-traceable serials.
Remote attestation requires collusion with an intermediary at least, DRM such as Widevine has no intermediaries. You expose your HWID (Widevine public key & cert) directly to the license server of which there are many and under the control of various entities (Google does need to authorize them with certificates). And this is done via API, so any app in collusion with any license server can start acquiring traceable smartphone serials.
Using Widevine for this purpose breaks Google's ToS but you would need to catch an app doing it (and also intercept the license server's certificate) and then prove it which may be all but impossible as an app doing it could just have a remote code execution "vulnerability" and request Widevine license requests in a targeted or infrequent fashion. Note that any RCE exploit in any app would also allow this with no privilege escalation.
Remote attestation shifts trust from user-controlled software to manufacturer‑controlled hardware identity.
It's a gun with a serial number. The Fast and Furious scandal of the Obama years was traced and proven with exactly this kind of thing.
There's not really an equivalent here for a computer owned by an individual because it's totally normal for someone to sell or dispose of a computer, and no one expects someone to be responsible for who else might get their hands on it at that point. If you prove a criminal owns a computer that I owned before, then what? Prosecution for failing to protect my computer from thieves, or for reselling it, or gifting it to a neighbor or family friend? Shifting the trust doesn't matter if what gets exposed isn't actually damaging in any way, and that's what the parent comment is asking about.
The first two examples you give seem to be about an unscrupulous government punishing someone for owning a computer that they consider tainted, but it honestly doesn't seem that believable that a government who would do that would require a burden of proof so high as to require cryptographic attestation to decide on something like that. I don't have a rebuttal for "an organization trying to avoid cross-tenant linkage" though because I'm not sure I even understand what it means: an example would probably be helpful.
Whereas the state of the system as a whole immediately after it boots can be attested with secure boot and a TPM sealed secret. No manufacturer keys involved (at least AFAIK).
I'm not actually clear which this is. Are they doing something special for runtime integrity? How are you even supposed to confirm that a system hasn't been compromised? I thought the only realistic way to have any confidence was to reboot it.
To anyone thinking this is not possible: we already switched inits to systemd. And, being persnickety, we saw MariaDB replace MySQL everywhere, LibreOffice replace OpenOffice, and so on.
All the recent pushiness by a certain zealous Italian Debian maintainer only helps this case. Trying to degrade Debian into a clone of Red Hat is uncouth.
This misunderstands why systemd succeeded. It included several design decisions aimed at easing distribution maintainers' burdens, thus making adoption attractive to the same people that would approve this adoption.
If a systemd fork differentiates on not having attestation and getting rid of an unspecified set of "all the silly parts", how would they entice distro maintainers to adopt it? Elaborating what is meant by "silly parts" would be needed to answer that question.
Like John Deere. Read about how they use that sort of thing.
Look at all the kernel patch submissions. 90% are not users but big tech drones. Look at the Linux foundation board. It's the who's who of big tech.
This is why I moved to the BSDs. Linux started as a grassroots project but turned commercial, the BSDs started commercial but are hardly still used as such and are mostly user driven now (yes there's a few exceptions like netflix, netgate, ix etc but nothing on the scale of huawei, Amazon etc)
Open source operating systems are not a zero sum game. Yes there is a certain gravitational pull from all the work contributed by the big companies. If you aren't contributing "for-hire", then you choose what you want to work on, and what you want to use.
A lot more programs are available for linux, drivers and subsystems have gotten better, more features that benefit everyone (such as eBPF) and more
Thanks, this may be the key takeaway from this discussion for me
This «Linux has a finger in every pie» attitude is very harmful for the industry, IMHO.
You already trust third parties, but there is no reason why that third party can't be the very same entity publishing the distribution. The role corporations play in attestation for the devices you speak of can be displaced by an open source developer, it doesn't need to require a paid certificate, just a trusted one. Furthermore, attestation should be optional at the hardware level, allowing you to build distros that don't use it, however distros by default should use it, as they see fit of course.
I think what people are frustrated with is the heavy-handedness of the approach, the lack of opt-out and the corporate-centric feel of it all. My suggestion would be not to take the systemd approach. There is no reason why attestation related features can't be turned on or off at install time, much like disk encryption. I find it unfortunate that even something like secure boot isn't configurable at install time, with custom certs, distro certs, or certs generated at install time.
Being against a feature that benefits regular users is not good, it is more constructive to talk about what the FOSS way of implementing a feature might be. Just because Google and Apple did it a certain way, it doesn't mean that's the only way of doing it.
I would love to use that technology to do reverse attestation, and require the server that handles my personal data to behave a certain way, like obeying the privacy policy terms of the EULA and not using my data to train LLMs if I so opted out. Something tells me that's not going to happen...
Centralized trust: Hardware attestation run by third parties creates a single point of trust (and failure). If one vendor controls what’s “trusted,” Linux loses one of its core properties: decentralization. This is a fundamental shift in the threat model.
Misaligned incentives: These companies don’t just care about security. They have financial, legal, and political incentives. Over time, that usually means monetization, compliance pressure, and policy enforcement creeping into what started as a “security feature.”
Black boxes: Most attestation systems are opaque. Users can’t easily audit what’s being measured, what data is emitted, or how decisions are made. This runs counter to the open, inspectable nature of Linux security today.
Expanded attack surface: Adding external hardware, firmware, and vendor services increases complexity and creates new supply-chain and implementation risks. If the attestation authority is compromised, the blast radius is massive.
Loss of user control: Once attestation becomes required (or “strongly encouraged”), users lose the ability to fully control their own systems. Custom kernels, experimental builds, or unconventional setups risk being treated as “untrusted” by default.
Vendor lock-in: Proprietary attestation stacks make switching vendors difficult. If a company disappears, changes terms, or decides your setup is unsupported, you’re stuck. Fragmentation across vendors also becomes likely.
Privacy and tracking: Remote attestation often involves sending unique or semi-unique device signals to external services. Even if not intended for tracking, the capability is there, and history shows it eventually gets used.
Potential for abuse: Attestation enables blacklisting. Whether for business, legal, or political reasons, third parties gain the power to decide what software or hardware is acceptable. That’s a dangerous lever to hand over.
Harder incident response: If something goes wrong inside a proprietary attestation system, users and distro maintainers may have little visibility or ability to respond independently.
Then the user can put their own key there (if say corporate policies demand it), but there is no 3rd party that can decide what the device can do.
But having a 3rd party (and a US one too!) as the root of all trust is a massive problem.
The giveaway is that LLMs love bulleted lists with a bolded attention-grabbing phrase to start each line. Copy-pasting directly to HN has stripped the bold formatting and bullets from the list, so the attention-grabbing phrase is fused into the next sentence, e.g. “Potential for abuse Attestation enables blacklisting”
It's also because content companies and banks want other people in suits to trust.
I've been a FOSS guy my entire adult life, I wouldn't put my name to something that would enable the kinds of issues you describe.
History is pretty consistent here:
WhatsApp: privacy-first, founders with principles, both left once monetization and policy pressure kicked in.
Google: 'Don’t be evil' didn’t disappear by accident — it became incompatible with scale, revenue, and government relationships.
Facebook/Meta: years of apologies and "we'll do better," yet incentives never changed.
Mobile OS attestation (iOS / Android): sold as security, later became enforcement and gatekeeping.
Ruby on Rails ecosystem: strong opinions, benevolent control, then repeated governance, security, and dependency chaos once it became critical infrastructure. Good intentions didn't prevent fragility, lock-in, or downstream breakage.
Common failure modes:
Enterprise customers demand guarantees - policy creeps in.
Governments demand compliance - exceptions appear.
Liability enters the picture - defaults shift to "safe for the company."
Revenue depends on trust decisions - neutrality erodes.
Core maintainers lose leverage - architecture hardens around control.
Even if keys are user-controlled today, the key question is architectural: Can this system resist those pressures long-term, or does it merely promise to?
Most systems that can become centralized eventually do, not because engineers change, but because incentives do. That’s why skepticism here isn't personal — it's based on pattern recognition.
I genuinely hope this breaks the cycle. History just suggests it's much harder than it looks.
I can understand the corporate use case - the person with access to the machine is not its owner, and the corporation may want to ensure their property works the way they expect it to. Not something I care about, personally.
But when it’s a person using their own property, I don’t quite get the practical value of attestation. It’s not a security mechanism anymore (protecting a person from themselves is an odd goal), and it has significant abuse potential. That happened to mobile, and the outcome was that users were “protected” from themselves, that is - in less politically correct words - denied effective control over their personal property, as larger entities exercised their power and gated access to what became de-facto commonplace commodities by forcing users to surrender any rights. Paired with the awareness gap, the effects were disastrous, and not just for personal compute.
So, what’s the point and what’s the value?
Of course you can already do the above with secure boot coupled with a CPU that implements an fTPM. So I can't speak to the value of this project specifically, only build and boot integrity in general. For example I have no idea what they mean by the bullet "runtime integrity".
This is for example dm-verity (e.g. `/usr/` is an erofs partition with matching dm-verity). Lennart always talks about either having files be RW (backed by encryption) or RX (backed by kernel signature verification).
And that’s why I said “not a security mechanism”. Attestation is for protecting against actors with local hardware access. I have FDE and door locks for that already.
Given secure boot and a TPM you can remotely attest, using your own keys, that the system booted up to a known good state. What exactly that means though depends entirely on what you configured the image to contain.
> it won’t protect from malicious updates to configuration files
It will if you include the verified correct state of the relevant config file in a Merkle tree.
> It won’t let me run arbitrary binaries (putting a nail to any local development), or if it will - it would be a temporary security theater (as attackers would reuse the same processes to sign their malware).
Shouldn't it permit running arbitrary binaries that you have signed? That places the root of trust with the build environment.
Now if you attempt to compile binaries and then sign them on the production system yeah that would open you up to attack (if we assume a process has been compromised at runtime). But wasn't that already the case? Ideally the production system should never be used to sign anything. (Some combination of SGX, TPM, and SEV might be an exception to that but I don't know enough to say.)
> Attestation is for protecting against actors with local hardware access. I have FDE and door locks for that already.
If you remotely boot a box sitting in a rack on the other side of the world how can you be sure it hasn't been compromised? However you go about confirming it, isn't that what attestation is?
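For what it's worth, the arithmetic underneath such a quote is just a hash chain. A minimal sketch of the PCR-extend idea in Python, with a made-up event log (real logs carry digests plus event metadata, and the quote itself is signed by a TPM-resident key):

    import hashlib

    def extend(pcr: bytes, event_data: bytes) -> bytes:
        # Measure-and-extend: new = H(old || H(event_data)); order matters,
        # and there is no operation that resets or rewrites a PCR mid-boot.
        return hashlib.sha256(pcr + hashlib.sha256(event_data).digest()).digest()

    # Hypothetical event log: what the machine claims was loaded at boot.
    event_log = [b"bootloader image", b"kernel+initrd UKI", b"sshd_config contents"]

    pcr = bytes(32)  # PCRs start zeroed
    for event in event_log:
        pcr = extend(pcr, event)

    # The verifier recomputes this from the log and compares it against the
    # PCR value in the signed quote; any tampered component changes the hash.
    print(pcr.hex())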
Production servers are a whole different story - it's usually not my hardware to begin with. But given how things are mostly immutable those days (shipped as images rather than installed the old-fashioned sysadmin way), I'm not really sure what to think of it...
https://doc.qubes-os.org/en/latest/user/security-in-qubes/an... For laptops, it helps make tampering obvious. (a different attestation scheme with smaller scope however)
This might not be useful to you personally, however.
Anyway, "full control over your keys" isn't the issue, it's the way that normalization of this kind of attestation will enable corporations and governments to infringe on traditional freedoms and privacy. People in an autocratic state "have full control over" their identity papers, too.
Until you get acquired, receive a golden parachute and use it when realizing that the new direction does not align with your views anymore.
But, granted, if all you do is FOSS then you will anyway have a hard time keeping evil actors from using your tech for evil things. Might as well get some money out of it, if they actually dump money on you.
A rational and intelligent engineer cannot possibly believe that he'll be able to control what a technology is used for after he creates it, unless his salary depends on him not understanding it.
Argument should be technical.
Yes. You correctly stated the important point.
Attestation of what to whom for which purpose? Which freedom does it allow users to control their keys, how does it square with remote attestation and the wishes of enterprise users?
Better security is good in theory, as long as the user maintains control and the security is on the user end. The last thing we need is required ID linked attestation for accessing websites or something similar.
it will be railroaded through in the same way that systemD was railroaded onto us.
The road to hell is paved with good intentions.
If user control of keys becomes the linchpin for retaining full control over one's own computer, doesn't it become easy for a lobby or government to exert control by banning user-controlled keys? Today, such interest groups would need to ban Linux altogether to achieve such a result.
It would be a lot more reassuring if we knew what the business model actually was, or indeed anything else at all about this. I remain somewhat confused as to the purpose of this announcement when no actual information seems to be forthcoming. The negative reactions seen here were quite predictable, given the sensitive topic and the little information we do have.
PE or EIT?
Suppose you wanted to identify potential agitators by scanning all communication for indications. In a fascist state, one could require this technology in all trusted environments and require such an environment to bank, connect to an ISP, or use Netflix.
One could even imagine a completely benign usage which only identified actual wrongdoing alongside another which profiled based almost entirely on anti-regime sentiment or reasonable discontent.
The good users would argue that the only problem with the technology is its misuse but without the underlying technology such misuse is impossible.
One can imagine two entirely different parallel universes, one in which a few great powers went the wrong way, in part enabled by trusted computing and the pervasive surveillance enabled by the capability of AI to do the massive and boring task of analyzing a glut of ordinary behaviour and communication, plus the tech and law to ensure said surveillance is carried out.
Even those not misusing the tech may find themselves worse off in such a world.
Why again should we trust this technology just because you are a good person?
- your bank won't let you log in from an "insecure" device.
- you won't be able to play videos on an "insecure" device.
- you won't be able to play video games on an "insecure" device.
And so on, and so forth.
The attestation portion of those systems is happening on locked down devices, and if you gain ownership of the devices they no longer attest themselves.
This is the curse of the duopoly of iOS and Android.
BankID in Sweden will only run with one of these devices, they used to offer a card system but getting one seems to be impossible these days. So you're really stuck with a mobile device as your primary means of identification for banking and such.
There's a reason that general purpose computers are locked to 720p on Netflix and Disney+, yet AppleTVs are not.
Now, if you want to use your phone as a debit/credit card substitute, that is different (Google Pay cares, and thus I don't use it).
Anyway, why should banking apps care? It is not like they care when I use the bank from Firefox on my Linux laptop.
"Any bank"? Although the bank I use locks NFC payments behind such checks (which is not a big loss since a physical debit card offers the same functionality), anything else still works otherwise. Most of the things are available through the website (which fits well on mobile too), and mobile BLIK payments can be done from the Android app which works inside Waydroid with microG.
There's no reason other banks can't work the same way, and it's outrageous when they don't. Look around for a better bank.
The way you defeat things like that is through political maneuvering and guile rather than submission to their artificial narrative. Publish your own papers and documentation that recommends apps not support any device with that feature or require it to be off because it allows malware to use the feature to evade malware scans, etc. Or point out that it prevents devices with known vulnerabilities from being updated to third party firmware with the patch because the OEM stopped issuing patches but the more secure third party firmware can't sign an attestation, i.e. the device that can do the attestation is vulnerable and the device that can't is patched.
The way you break the duopoly is by getting open platforms that refuse to support it to have enough market share that they can't ignore it. And you have to solve that problem before they would bother supporting your system even if you did implement the treachery. Meanwhile implementing it makes your network effect smaller because then it only applies to the devices and configurations authorized to support it instead of every device that would permissionlessly and independently support ordinary open protocols with published specifications and no gatekeepers.
Another point is that (often) the apps banks make are developed by third-party outsourcing (even if within the same developed country). If someone uses some MitM or logcat to see some traffic and publishes it, the bank gets bad publicity. So to prevent this, the banks and devs declare anything that is not normal (i.e. a non-stock ROM) to be bad.
FOSS is also something many app-based software devs don't like on their products. While people in cloud and infra like it, app devs like these tools while developing or building a company, but not in the end-resulting apps.
It doesn't really provide any security.
On top of that, there are tons of devices that can pass attestation that have known vulnerabilities, so the attacker could just use one of those (or extract the keys from it) if they had any reason to. But in the mobile banking threat model they don't actually need to.
I think every computing professional needs to spend at least a year trying to secure a random company's Windows network to appreciate how impossible this actually is without hardware-based roots of trust like TPMs and HSMs.
It's not. Mobile applications just don't have unrestricted access to everything in your user directory; attestation has nothing to do with it.
Even if you stopped supporting desktops, then they would just reverse engineer the mobile app instead of the web app and extract the attestation keys from any unpatched model of phone and still run their code on a server, and then it would show up as "mobile fraud" because they're pretending to be a phone instead of a desktop, when in reality it was always a server rather than a phone or a desktop.
And even if attestation actually worked (which it doesn't), that still wouldn't prevent fraud, because it only tries to prove that the person requesting the transfer is using a commercial device. If the user's device is compromised then it doesn't matter if it can pass attestation because the attacker is only running the fake, credential stealing "bank app" on the user's device, not the real bank app. Then they can run the official bank app on an official device and use the stolen credentials to transfer the money. The attestation buys you nothing.
Also IIRC, the Linux Foundation etc. are not interested in doing such standardisations.
However the problem is that A LOT of things only work with the mobile app.
Of course it's all nonsense make-believe; the "trust root" is literally a Microsoft-signed stub. For this dummy implementation you can't modify your own kernel anymore.
Finances, just pay everything by cheque or physical pennies. Fight back. Starve the tyrants to death where you can, force the tyrants to incur additional costs and inefficiencies where you can't.
the anti-user attestation will at least be full of security holes, and likely won't work at all
It took us nearly a decade and a half to unfuck the pulseaudio situation and finally arrive at a simple solution (pipewire).
SystemD has a lot more people refining it down but a clean (under the hood) implementation probably won't be witnessed in my lifetime.
don't get me wrong, i use pipewire all day every day, and wrote one of the APIs (JACK) that it implements (pretty well, too!).
but pipewire is an order of magnitude more complex than pulseaudio.
My 0.02 bits.
for systemd, I don't think I have a single linux system that boots/reboots reliably 100% of the time these days
What set systemd apart is the collection of tightly integrated utilities such as a dns resolver, sntp client, core dump handler, rpc-like api linking to complex libraries in the hot path and so on and so forth that has been a constant stream of security exploits for over a decade now.
This is a case where the critics were proven to be right. Complexity increases the cognitive burden.
I think he will succeed and we will be worse off, collectively.
For example, the part of systemd that fills in DNS servers will put them in random order (like actually random, not "code happened to dump it in map order").
The previous system, while very much NOT perfect, put the DNSes in the order of the latest interface, which had the useful side-feature that if your VPN had a different set of DNSes, it got added in front.
The systemd one just randomizes it ( https://github.com/systemd/systemd/issues/27543 ), which means the standard openvpn wrapper script needs to be rerun, sometimes a few times, to "roll" the right address; I pretty much have to do that half of the time I connect to the company's VPN.
The OTHER problem is pervasive NIH in the codebase.
Like, they decided to use a binary log format. Okay, I can see the advantages; it could be indexed or sharded for faster access to an app's entries...
oh wait it isn't, if you want to get last few lines of a service the worst case is "mmap every single journal file for hundreds of MBs of reads"
It could be optimized so some long but constant fields like bootid are not repeated...
oh wait, it doesn't do that either; it's massively verbose. I guess I could understand that if it at least made it more crash-proof...
oh wait no, after a crash it just spams logs that the previous log file is corrupted and won't be used.
So we have a log format that only systemd tools can read, takes a few times as much space per line as a text or even JSON version would, and still craps out on unclean shutdown
They could've just integrated SQLite. Hell I literally made a lil prototype that took journalctl logs and wrote it to indexed SQLite file and it was not only faster but smaller (as there is no need to write bootid with each line, and log lines can be sharded or indexed so lookup is faster). But nah, Mr. Poettering always wanted to make a binary log format so he did.
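For the curious, the kind of prototype described is only a couple dozen lines; a rough sketch (assuming journalctl's JSON export, field names as documented):

    import json, sqlite3, subprocess

    db = sqlite3.connect("journal.db")
    db.execute("CREATE TABLE IF NOT EXISTS logs (ts INTEGER, boot_id TEXT, unit TEXT, message TEXT)")
    db.execute("CREATE INDEX IF NOT EXISTS idx_unit_ts ON logs (unit, ts)")

    proc = subprocess.Popen(["journalctl", "-o", "json", "--no-pager"],
                            stdout=subprocess.PIPE, text=True)
    rows = []
    for line in proc.stdout:
        entry = json.loads(line)
        msg = entry.get("MESSAGE")
        if isinstance(msg, str):  # journald can also carry non-text payloads; skip those here
            rows.append((int(entry.get("__REALTIME_TIMESTAMP", 0)),
                         entry.get("_BOOT_ID"), entry.get("_SYSTEMD_UNIT"), msg))
    db.executemany("INSERT INTO logs VALUES (?,?,?,?)", rows)
    db.commit()

    # "Last few lines of a service" becomes one indexed lookup instead of a full scan:
    for (message,) in db.execute(
            "SELECT message FROM logs WHERE unit=? ORDER BY ts DESC LIMIT 10",
            ("sshd.service",)):
        print(message)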
The people who had no issues with PulseAudio used a mainstream distribution. Those distributions did the heavy lifting of making sure stuff fit together in a cohesive way.
SystemD is very opinionated, so you'd assume it wouldn't have the same results, but it does... if you use a popular distro then they've done a lot of the hard work that makes systemd function smoothly.
I was today years old when I realised this is true for both bits of poetter-ware. Weird.
pulseaudio I had to fight every single day, with my "exotic" setup of one set of speakers and a headset
with pipewire, I've never had to even touch it
systemd: yesterday I had a network service on one machine not start up because the IP it was trying to bind to wasn't available yet
the dependencies for the .service file didn't/can't express the networking semantics correctly
this isn't some hacked up .service file I made, it's that from an extremely popular package from a very popular distro
(yeah I know, use a socket-activated service... more tight coupling to the garbage software)
the day before that I had a service fail to start because the wall clock was shifted by systemd-timesyncd during startup, and then the startup timeout fired because the clock advanced more than the timeout
then the week before that I had a load of stuff start before the time was synced, because chrony has some weird interactions with time-sync.target
it's literally a new random problem every other boot because of this non-deterministic startup, which was never a problem with traditional init or /etc/rc
for what? to save maybe a second of boot time
if the distro maintainers don't understand the systemd dependency model after a decade then it's unfit for purpose
This gave me a good chuckle. Systemd literally was created to solve the awful race conditions and non-determinism in other init systems. And it has done a tremendous job at it. Hence the litany of options to ensure correct order and execution: https://www.freedesktop.org/software/systemd/man/latest/syst...
And outside of esoteric setups I haven't ever encountered the problems you mentioned with service files.
runit's approach is to just keep trying to start the shell script every 2 seconds until it works. One of those worse-is-better ideas: it's really dumb, and effective. You can check for arbitrary conditions and error-exit, and it will keep trying. If you need the time synced you can just make your script fail if the time is not synced.
traditional inittab is older than that and there's not any reason to use it when you could be using runit, really.
like "at least one real IP address is available" or "time has been synced"
and it's not esoteric; even ListenAddress with sshd doesn't work reliably
the ONLY piece of systemd I've not had problems with is systemd-boot, and then it turned out they didn't write that
"network-online.target is a target that actively waits until the network is “up”, where the definition of “up” is defined by the network management software. Usually it indicates a configured, routable IP address of some kind. Its primary purpose is to actively delay activation of services until the network has been set up."
For time sync checks, I assume one of the targets available will effectively mean a time sync has happened. Or you can do something with ExecStartPre. You could run a shell command that checks for the most recent time sync or forces one.
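One thing worth spelling out, since it bites people constantly: After=network-online.target on its own is ordering-only and does not pull the target into the boot transaction, so nothing actually waits unless the unit also lists it in Wants= (or Requires=). A drop-in sketch, with a hypothetical foo.service standing in for the affected unit:

    # /etc/systemd/system/foo.service.d/wait-online.conf  (hypothetical path)
    [Unit]
    Wants=network-online.target
    After=network-online.target

With only After=, systemd is free to start the unit immediately whenever nothing else happens to activate network-online.target.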
this service (untouched by me) had:
After=local-fs.target network-online.target remote-fs.target time-sync.target
but it was still started without an IP address, and then failed to bind
just like this sort of problem: https://github.com/systemd/systemd/issues/4880#issuecomment-...
the entire thing is unreliable and doesn't act like you'd expect
> Or you can do something with ExecStartPre. You could run a shell command that checks for the most recent time sync or forces one.
at that point I might as well go back to init=/etc/rc
You could also try targeting NetworkManager or networkd's "wait-online" services. Or if that doesn't work, something is telling systemd that you have an IP when you don't. NetworkManager has "ipv4.may-fail" and "ipv6.may-fail" that might be erroneously true.
> at that point I might as well go back to init=/etc/rc
The difference is that systemd is much better at ensuring correctness. If you write the invoked shell command properly, it'll communicate failure or success correctly, and systemd will then communicate that state to the unit. It's still a lot more robust than before.
the problem is systemd
> The difference is that systemd is much better at ensuring correctness.
yeah, whatever mate
There is so much granularity and flexibility in what you can do it seems rather unlikely you cannot make it happen correctly. And if it is truly a bug... open an issue? They're rather responsive to it. And it isn't like the legacy init systems were bug free from inception (well, lord knows they were still chock full of bugs even when they were replaced).
Edit: sitting here with a grin .. HN downvoting the advice of checking logs, debugging and opening an issue. I wish the companies y'all work at good luck.. they'll need it.
I'm not a systemd hater or anything, but I continue to read stuff from Poettering which to me is deeply disturbing given the programs he works on.
Saying it's not a bug that a service is launched despite a stated required prerequisite dependency having failed... WTF?
Sure, I agree with him that most computers should probably boot despite NTP being unable to sync. But proposing that the solution to that is breaking Requires is just wild to me.
The question in that issue is around the semantics of time-sync.target. Targets are synchronization points for the system and don't (afaik) generally make promises about the units that are ordered before them (in this case chrony-wait.service).
Does that answer your specific objection of "proposing that the solution to that is breaking Requires is just wild to me"? Basically, what is proposed in that issue is not breaking Requires=. The proposition is that the user add their own, specific Requires= as a drop-in configuration since that's not a generally-applicable default.
Targets[1]: Target units do not offer any additional functionality on top of the generic functionality provided by units. They merely group units, allowing a single target name to be used in Wants= and Requires= settings to establish a dependency on a set of units defined by the target, and in Before= and After= settings to establish ordering.
boot-complete.target[2]: Order units that shall only run when the boot process is considered successful after the target unit and pull in the target from it, also with Requires=.
Note use of "only run" with a reference to Requires=.
time-sync.target[3]: This target provides stricter clock accuracy guarantees than time-set.target (see above), but likely requires network communication and thus introduces unpredictable delays. Services that require clock accuracy and where network communication delays are acceptable should use this target.
Especially note the last sentence there.
The documentation clearly indicates that targets can fail, and that services that needs the target to be successful, should use Requires= to specify that.
If the above is not true, the Requires= and Targets documentation should be rewritten to explicitly say that targets might fulfill Requires= regardless of state. Also, the documentation for time-sync.target should explicitly remove the last sentence and instead state there is no functional difference between Requires=time-sync.target and Wants=time-sync.target, it is best-effort only.
[1]: https://www.freedesktop.org/software/systemd/man/latest/syst...
[2]: https://www.freedesktop.org/software/systemd/man/latest/syst...
[3]: https://www.freedesktop.org/software/systemd/man/latest/syst...
It is possible for a specification to be so abstract that it's useless.
well, systemd's got them beat there!
… and the warrant canary we publish every Monday morning.
Ubuntu boxes: usually ok as long as you stay away from anything python related in the core system.
Doubtful the motivation was /etc/rc being too slow
daemontools, runit, s6 solve that problem
IIRC before PulseAudio we had to mess around with ALSA directly (memory hazy, it was a while ago). It could be a bit of a pain.
ALSA was kind of OK after mixing was enabled by default and if you didn't need to switch outputs of a running application between anything but internal speakers and headphones (which worked basically in hardware). With any additional devices that you could add and remove, ALSA became a more serious limitation, depending. You could usually choose your audio devices (including microphones) at least at the beginning of a video conference / playing a movie etc, but it was janky (unreliable, list of 20 devices for one multi-channel sound card) and needed explicit support from all applications. Not sure if it ever worked with Bluetooth.
It does, with the help of BlueALSA[0].
[0] https://github.com/arkq/bluez-alsa
I get ALSA followed the Unix philosophy of doing one thing but I want my audio mixer to play multiple sounds at once.
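For the record, plain ALSA did eventually grow software mixing via the dmix plugin; the classic ~/.asoundrc for routing the default device through it looked roughly like this (from memory, treat as a sketch):

    # ~/.asoundrc -- send the default PCM through the dmix software mixer
    pcm.!default {
        type plug
        slave.pcm "dmix"
    }

On a lot of cards dmix later became the default anyway, which is why "play multiple sounds at once" mostly stopped being an ALSA-level complaint.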
I am not seeing these kind of systemd issues with Fedora / RHEL.
It just works
I believe that you are genuinely being sincere here, thinking this is good advice.
But this is an absolutely terrible philosophy. This statement is ignorant as well as inconsiderate. (Again, I do believe you don't intend to be inconsiderate consciously; that is just the result.)
It's ignorant of history and inconsiderate of everyone else but yourself.
Go back a few years and this same logic says "The trick is, just use Windows and do whatever it wants and don't fight."
So why in the world are you even using Linux at all in the first place with that attitude? For dishonest reasons (when unpacked to show the double standard).
Since you are using Linux instead of Windows, you actually are fine with fighting the tide. You want the particular bits of control you want, and as long as you are lucky enough to get whatever you happen to care about without fighting too much, you have no sympathy for anyone else who cares about anything else.
You don't see yourself as fighting any tides because you are benefitting from being able to use a mainstream distro without customizing it. But the only reason you get to enjoy any such thing at all in the first place is because a lot of other people before you fought the tide to bring some mainstream distros into existence, and actually use them for ordinary activities enough despite all the difficulties, to force at least some companies and government agencies to acknowledge them. So now you can say things like "just use a mainstream distro as it comes and don't try to do what you actually want".
This is basically exactly what I saw people saying in Windows subreddits. There's one post that particularly sticks out in my memory[0] that basically had everybody telling the OP to just not make any of the changes that they wanted to make. The advice seemed to revolve around adapting to the OS rather than adapting the OS to you, and it made me sad at the time.
[0] https://www.reddit.com/r/Windows10/comments/hehrqe/what_are_...
Incorrect. I used a mainstream distro and still had issues, which just solved themselves by moving to PipeWire. Issues like it literally crashing, or emitting a spurt of max-volume noise once every few months for no discernible reason.
Pulseaudio also completely denies the existence of people trying to do music on Linux; there is no real way to get good latency out of it.
> SystemD is very opinionated, so you'd assume it wouldn't have the same results, but it does... if you use a popular distro then they've done a lot of the hard work that makes systemd function smoothly.
Over the years of using it, the "opinion" of SystemD seems to be "if it is not a problem on Lennart's laptop, it's not a real problem and it can be closed or ignored completely".
For example, systemd has no real method to tell it "turn off all apps after 5 minutes regardless of what silly package maintainers think". Now what happens if you have a server on a UPS that has, say, 5 minutes of battery, and one of the apps has some problem and doesn't want to close?
In SysV, it gets killed, and the system gets remounted read-only. You have an app crash to recover from, but at least your filesystem is clean. In systemd? No option to do that. You can set a default timeout, but it can be overridden in each service, so you'd have to audit every single package and tune it to achieve that. That was one bug that was closed.
The same problem also surfaced if you had, say, an app with a bug that prevented it from closing on SIGTERM and you wanted to reboot a machine. Completely stuck.
But wait, there is another method, systemd have an override, you can press (IIRC) ctrl+alt+delete 7 times within 2 seconds to force it to restart ( which already confuses some people that expect it to just restart machine clean(ish) regardless https://github.com/systemd/systemd/issues/11285 ).
...which is also impossible if your only method of access is software KVM where you need to navigate to menu to send ctrl+alt+del. So I made ticket with proposal to just make it configurable timeout for the CAD ( https://github.com/systemd/systemd/issues/29616 ), the ticket wasn't even read completely because Mr. Poettering said "this is not actionable, give a proposal", so I pasted the thing he decided to ignore in original ticket, and got ignored. Not even "pull requests welcome" (which I'd be fine with, I just wanted confirmation that the feature like that won't be rejected if I start writing it).
There is also the issue of the journald disk format being an utter piece of garbage ("go through the entire journal just to get an app's last few lines" bad; hundreds of disk reads on a simple systemctl status <appname> bad) that is consistently ignored through many tickets from different people.
Or the issue that the resolvconf replacement in systemd will just roll the dice on DNS ordering, but hey, Mr. Lennart doesn't use openvpn so it's not a real issue ( https://github.com/systemd/systemd/issues/27543 ).
I'm not writing this to shit on systemd and praise what came before; as a piece of software it's very useful for my job as a sysadmin (we literally took tens of thousands of lines of fixed init scripts out because all of the features could be achieved in unit files), and I mean "saved tons of time and a few daemons running" in some cases. But Mr. Poettering is showing the same ignorant "I know better" attitude he got scolded for by kernel maintainers.
I don't care much about PA at this point tbh and don't know much about the inner workings; it always worked just fine for me. But from what I read from people more "in the know" at the time, I'd heard that a lot of the (very real) user-facing problems with PA were ultimately caused by driver and other low-level problems. Those were hacky, had poor assumptions, etc. PA ultimately exposed those failures, and largely got better over time because those problems got fixed upstream of PA.
My takeaway from what I read was basically that PA had to stumble and walk so that pipewire could run.
> For example, systemd has no real method to tell it "turn off all apps after 5 minutes regardless of what silly package maintainers think". Now what happens if you have a server on a UPS that has, say, 5 minutes of battery, and one of the apps has some problem and doesn't want to close?
Add a TimeoutStopSec= to /etc/systemd/system/service.d/my-killing-dropin.conf more or less, I think? These are documented in the systemd.service and systemd.unit manpages respectively.
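Something like the following, if I have the precedence rules right (file name made up; a type-level drop-in in /etc should win over what individual unit files set, though a unit's own drop-in can still override it):

    # /etc/systemd/system/service.d/50-stop-cap.conf  (hypothetical name)
    [Service]
    TimeoutStopSec=30s

There's also DefaultTimeoutStopSec= in /etc/systemd/system.conf, though that one only sets the default and loses to any per-unit TimeoutStopSec=, which is exactly the complaint above.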
> The same problem also surfaced if you had, say, an app with a bug that prevented it from closing on SIGTERM and you wanted to reboot a machine. Completely stuck.
See the --force option on the halt, poweroff, and reboot subcommands of systemctl. The kill subcommand if you want to target that specific service.
> so I pasted the thing he decided to ignore in original ticket, and got ignored. Not even "pull requests welcome" (which I'd be fine with, I just wanted confirmation that the feature like that won't be rejected if I start writing it).
I'm certainly sympathetic to this pain point. I'd take Lennart at his word that he's not opposed. Generally speaking, from following the systemd project somewhat, it's a very busy project and it's hard for all issues to get serviced. But they're very open to PRs, generally speaking.
> Or the issue that resolvconf replacement in systemd will just roll a dice on DNS ordering, but hey, Mr. Lennart doesn't use openvpn so it's not real issue ( https://github.com/systemd/systemd/issues/27543 )
Quickly taking a peek here (and speaking as a relatively superficial user of resolved myself), isn't the proposed solution to define interface ordering?
> it will ask on all links in parallel if there's no better routing info available. In your case there is none (i.e. no ~. listed among your network interfaces), hence it will be asked on all interfaces at the same time.
Audio server for Linux: great idea! PulseAudio: genuinely a terrible implementation of it. PipeWire is a drop-in replacement that actually works.
Launchd but for Linux: great idea! SystemD: generally works now at least, but packed with insane defaults, and every time this is brought up with the devs they say it's the distro packagers' job to wipe SystemD's ass and clean up the mess before users see it.
Security bug in SystemD when the user has a digit in their username: Lennart closes the bug and says that SystemD is perfect, the distros erred by permitting such usernames. Insane ego-driven response.
I am not going to struggle with systemd if I have to build containers specifically for it. If I have to rearrange everything I am doing I would just learn to do it on a minimal Kubernetes install instead.
In the great scheme of things, this period where systemd was intentionally designed and developed and funded to hurt your autonomy but seemed temporarily innocuous will be a rounding error.
What, this part is only needed for secure boot? I'm not sec... oh. So go back to the UEFI settings, turn secure boot off, problem solved. I usually also turn off SELinux right after install.
So I'm an old greybeard who likes to have full control. Less secure. But at least I get the choice. Hopefully I continue to do so. The notion of not being able to access online banking services or other things that require account login, without running on a "fully attested" system does worry me.
Currently SB is effectively useless because it will at best authenticate your kernel but the initrd and subsequent userspace (including programs that run as root) are unverified and can be replaced by malicious alternatives.
Secure Boot as it stands right now in the Linux world is effectively an annoyance that’s only there as a shortcut to get distros to boot on systems that trust Microsoft’s keys but otherwise offer no actual security.
It however doesn’t have to be this way, and I welcome efforts to make Linux just as secure as the proprietary OSes that actually have full code signature verification all the way down to userspace.
sign grub with your own keys (some motherboards let you do so). don't let random things signed by microsoft boot (that defeats the whole point)
so you have grub in an efi partition: it passes secure boot, loads, and attempts to unlock a luks partition with the user-provided passphrase. if it passed secure boot, that should increase confidence that you are typing your password into the legit thing
so anyway, after unlocking luks, it locates the kernel and initrd inside it, and boots
https://wiki.archlinux.org/title/GRUB#Encrypted_/boot
the reason I don't do it is.. my laptop is buggy. often when I enable secure boot, something periodically gets corrupted (often when the laptop powers off due to low power) and when it gets up, it doesn't verify anything. slightly insane tech
however, this is still better than, at failure, letting anything run
sophisticated attackers will defeat this, but they can also add a variety of attacks at hardware level
Chaining trust from POST to login feels like trying to make a theoretically perfect diamond and titanium bicycle that never wears down or falls apart when all I need is an automated system to tell me when to replace a part that’s about to fail.
You can have both full disk encryption AND tamper protection!
(1) Encryption: fast and fantastic, and a must-have for at-rest data protection.
It is vulnerable to password theft though. An attacker might insert evil code between power-on and disk-password-entry. With a locked down BIOS / UEFI, the only way to insert the code is to take the boot drive out of the device, modify it, put it back, and hope no one notices. “Noticing” in this case is done by either:
(2) Trust chaining: verify the signatures of the entire boot process to detect evil code.
(3) Tamper detection: verify the physical integrity of the device.
My point is that (1) is a given, and out of (2) or (3), I’d rather have the latter than deal with the shoddiness of the former
Reminds me of my old Chromebook Pixel I wiped chromeos from. Every time it booted I had to press Ctrl-L (iirc) to continue the boot, any other keypress would reenable secure boot and the only way I knew to recover from that was to reinstall chromeos, which would wipe my linux partition and my files with it. Needless to say, that computer taught me good backup discipline...
I think you might want to go re-read the last ~6 months of IT news in regards of "secure proprietary OSes".
But you miss a critical part - Secure Boot, as the name implies, is for boot, not OS runtime. Linux, I suppose, considers the part after initrd load post-boot, perhaps?
I think pid-1 hash verification from the kernel is not a huge ask, as part of secure boot, and leave it to the init system to implement or not implement user-space executable/script signature enforcement. I'm sure Mr. Poettering wouldn't mind.
Yes that's the case - my argument is that Linux currently doesn't have anything standardized to do that.
Your best bet for now is to use a read-only dm-verity-protected volume as the root partition, encode its hash in the initrd, combine kernel + initrd into a UKI and sign that.
I would welcome a standardized approach.
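As a sketch of why a single hash can stand in for a whole partition: dm-verity hashes every block into a tree, and only the root needs to be trusted. A much-simplified Python illustration (real dm-verity uses salted multi-level trees and on-disk metadata):

    import hashlib

    BLOCK = 4096

    def root_hash(image: bytes) -> bytes:
        # Leaves are per-block hashes; parents hash pairs of children.
        level = [hashlib.sha256(image[i:i + BLOCK]).digest()
                 for i in range(0, len(image), BLOCK)]
        while len(level) > 1:
            level = [hashlib.sha256(b"".join(level[i:i + 2])).digest()
                     for i in range(0, len(level), 2)]
        return level[0]

    # The kernel checks each block read against the tree, so a signed UKI
    # carrying this 32-byte root authenticates the entire root partition.
    print(root_hash(b"\x00" * BLOCK * 8).hex())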
ParticleOS[0] gives a look at how this can all fit together, in case you want to see some of it in action.
[0] https://github.com/systemd/particleos
add luks root, then it's not that bad
Then there is also `ukify` by systemd, which can also create UKIs that can then be installed with `kernel-install`, but that is a bit more work to set up than with `mkinitcpio`.
The main part is the signing, which I usually have `sbctl` handle.
Code signature verification is an interesting idea, but I'm not sure how it could be achieved. Have distro maintainers sign the code?
Have a look at Ubuntu Core 24 and later. It's not exactly a desktop system, but rather oriented towards embedded/appliances. Recent Ubuntu desktop (from 25.04 IIRC) started getting the same mechanism gradually integrated in each release. Upcoming Ubuntu 26.04 is expected to support TPM-backed FDE. Worth a try if you can set up a VM with a software TPM.
Keep in mind though, there's been plenty of issues with various EFI firmwares, especially on the appliances side. EFI specs are apparently treated as guidelines rather than actual specification by whoever ends up implementing the firmware.
Most of the firmwares I've used lately seem to allow adding custom secureboot keys.
You don't need to load a driver; you can just replace a binary that's going to be executed as root as part of system boot. This is something a hypothetical code signature verification would detect and prevent.
Failing kernel-level code signature enforcement, the next best step is to have a dm-verity volume as your root partition, with the dm-verity hashes in the initrd within the UKI, and that UKI being signed with secure boot.
This would theoretically allow you to recover from even root-level compromise by just rebooting the machine (assuming the secure boot signing keys weren't on said machine itself).
Attestation, the thing we're going to be spending the next forever trying to get out of phones, now in your kernel.
The Free Software movement was successful enough that by 1997 it was garnering a lot of international community support and manpower. Eric S. Raymond published CatB in response to these successes, partly with a goal of "celebrating its successes" — sendmail, gcc, perl, and Linux were all popular projects with a huge number of collaborators by this point — and partly with a goal of reframing the Free Software movement such that it effectively neuters the political basis (i.e. the four freedoms, etc.) in a company-friendly way. It's very easy to note when reading the book, how it consistently celebrates the successes of Free Software in a company friendly way, deliberately to make it appealing to companies. Often being very explicit about its goals, e.g. "Don't give your workers good bonuses, because research shows that the better a ''hacker'' the less they care about money!".
A year later, internal memos from Microsoft leaked that showed that management were indeed scared shitless about Linux, a movement that they could neither completely Embrace, Extend, and Extinguish, nor practice Fear, Uncertainty, and Doubt on, because the community that built it was too strong, and too dedicated. Management foresaw that it was only a matter of time until Linux was a very strong competitor — even if that's taken 20 years, they were decently accurate in their fears, and, to be honest, part of why it's taken 30 years for Linux to catch up are deliberate actions by Microsoft wrt. introducing and adopting technologies that would stymie the Free Software movement from being able to adapt.
* smartphone device integrity checks (SafetyNet / Play Integrity / Apple DeviceCheck)
* HDMI/HDCP
* streaming DRM (Widevine / FairPlay)
* Secure Boot (vendor-keyed deployments)
* printers w/ signed/chipped cartridges (consumables auth)
* proprietary file formats + network effects (office docs, messaging)
However, I agree that the risks to individuals and their freedoms stemming from these technologies outweigh the benefits in most cases.
This is not what attestation is even seeking to solve.
For another example, IntegriCloud: https://secure.integricloud.com/
I personally don't think this product matters all that much for now. This type of tech is not oppressive by itself, only when it is demanded by an adversary. The ability of the adversary to demand it is a function of how widespread the capability is, and there aren't going to be enough Linux clients for this to start infringing on the rights of the general public just yet.
A bigger concern is all the efforts aimed at imposing integrity checks on platforms like the Web. That will eventually force users to make a choice between being denied essential services and accepting these demands.
I also think AI will substantially curtail the effect of many of these anti-user efforts. For example, a bot can be programmed to operate an approved, locked-down phone on the user's behalf while being controlled from a user-controlled device, to cheat in games, etc.
Great example of proving something to your own organization. Mullvad is probably the most trusted VPN provider and they do this! But this is not a power that should be exposed to regular applications, or we end up with a dystopian future where you are not allowed to use your own computer.
I wish this myth would die at this point.
Secure Boot allows you to enroll your own keys. This is part of the spec, and there is no shipped firmware that prevents you from going through this process.
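On machines that expose Setup Mode, the flow is short these days. A minimal sketch using sbctl (one tool among several; mokutil or KeyTool work too), with an illustrative UKI path:

    sbctl create-keys          # generate your own PK/KEK/db keys
    sbctl enroll-keys -m       # enroll them; -m also keeps Microsoft's certs
    sbctl sign -s /boot/EFI/Linux/linux.efi   # sign (and track) what you boot
    sbctl verify               # confirm everything bootable is signed

The path to your bootloader/UKI will differ per distro.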
The banking apps still won't trust them, though.
To add a quote from Lennart himself:
"The OS configuration and state (i.e. /etc/ and /var/) must be encrypted, and authenticated before they are used. The encryption key should be bound to the TPM device; i.e system data should be locked to a security concept belonging to the system, not the user."
Your system will not belong to you anymore. Just as it is with Android.
The oppressive part of this scheme is that Google's integrity check only passes for _their_ keys, which form a chain of trust through the TEE/TPM, through the bootloader and finally through the system image. Crucially, the only part banks should care about should just be the TEE and some secure storage, but Google provides an easy attestation scheme only for the entire hardware/software environment and not just the secure hardware bit that already lives in your phone and can't be phished.
It would be freaking cool if someone could turn your TPM into a Yubikey and have it be useful for you and your bank without having to verify the entire system firmware, bootloader and operating system.
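Something in that spirit exists already, at least for SSH: the tpm2-pkcs11 project exposes TPM-resident keys through PKCS#11, with no firmware or OS attestation involved. A hedged sketch from memory of its docs (PINs and the library path are placeholders; the path varies by distro):

    tpm2_ptool init
    tpm2_ptool addtoken --pid=1 --label=ssh --userpin=XXXX --sopin=YYYY
    tpm2_ptool addkey --label=ssh --userpin=XXXX --algorithm=ecc256

    # Export the public half for authorized_keys, then log in via the TPM:
    ssh-keygen -D /usr/lib/x86_64-linux-gnu/libtpm2_pkcs11.so
    ssh -I /usr/lib/x86_64-linux-gnu/libtpm2_pkcs11.so user@host

What's missing is exactly what you say: a standard way for a bank to trust just that key, without attesting the whole software stack around it.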
Microsoft required that users be able to enroll their own keys on x86. On ARM, they used to mandate that users could not enroll their own keys. That they later changed this does not erase the past. Also, I've anecdotally heard claims of buggy implementations that do in fact prevent users from changing secure boot settings.
UEFI secure boot on PCs, yes for the most part. A lot of mobile platforms just never supported this. It's not a myth.
Note that the comment you replied to does not even mention phones. Locked down Secure Boot on UEFI is not uncommon on mobile platforms, such as x86-64 tablets.
Many motherboards' Secure Boot implementations violate the supposed standard and do not allow you to invalidate the pre-loaded keys you don't approve of.
Remote attestation requires a great deal of trust, and I simply don't have it when it comes to this leadership team.
This will basically remove, or significantly encumber, user control over their system: any modification will make you lose your "signed" status and... boom! Goodbye accessing the internet without an ID.
Poettering now works for Microsoft; they want to turn Linux into an appliance just like Windows, no longer a general-purpose OS. The transition is still far from over on Windows, but look at Android and how tight the Google Play Services dependency/choke-hold is.
I'm sure I'll get many downvotes, but despite some hyperbole this is the trajectory.
The plan is probably to have that as an alternative for the niche uses where that is appropriate.
The majority of this thread seems to have slid down that slippery slope, jumping directly to the conclusion that the attestation mechanism will be mandatory on all Linux machines in the world and you won't be able to run anything without it. Even if that were a goal for Amutable as a company, it's infeasible when there's such a breadth of distributions and non-corporate-affiliated developers out there that would need to cooperate for that to happen.
Eventually you will not be able to block ads.
Maybe you want to reread through this thread.
> Eventually you will not be able to block ads.
That's so far down the slippery slope and with so many other things that need to go wrong that I'm not worried and I'm willing to be the one to get "told you so" if it happens.
But then Linux wouldn't be where it is without the business side paying for the developers. There is no such thing as a free lunch...
Yeah. I'm pretty sure it requires a very specific psychological profile to decide to work on such a user-hostile project while post-fact rationalizing that it's "for good".
All I can say is I'm not surprised that Poettering is involved in such a user-hostile attack on free computing.
P.S: I don't care about the downvotes, you shouldn't either.
P.S: Upvoted you. I don't care about downvotes either.
It sounds like you want to achieve system transparency, but I don't see any clear mention of reproducible builds or transparency logs anywhere.
I have followed systemd's efforts into Secure Boot and TPM use with great interest. It has become increasingly clear that you are heading in a very similar direction to these projects:
- Hal Finney's transparent server
- Keylime
- System Transparency
- Project Oak
- Apple Private Cloud Compute
- Moxie's Confer.to
I still remember Jason introducing me to Lennart at FOSDEM in 2020, and we had a short conversation about System Transparency.
I'd love to meet up at FOSDEM. Email me at [email protected].
Edit: Here we are six years later, and I'm pretty sure we'll eventually replace a lot of things we built with things that the systemd community has now built. On a related note, I think you should consider using Sigsum as your transparency log. :)
Edit2: For anyone interested, here's a recent lightning talk I did that explains the concept that all the projects above are striving towards, and likely Amutable as well: https://www.youtube.com/watch?v=Lo0gxBWwwQE
Our entire team will be at FOSDEM, and we'd be thrilled to meet more of the Mullvad team. Protecting systems like yours is core to us. We want to understand how we put the right roots of trust and observability into your hands.
Edit: I've reached out privately by email for next steps, as you requested.
As I mentioned above, we've followed systemd's development in recent years with great interest, as well as that of some other projects. When I started(*) the System Transparency project it was very much a research project.
Today, almost seven years later, I think there's a great opportunity for us to reduce our maintenance burden by re-architecting on top of systemd and related projects, so we can focus on other things. There's still a lot of work to do on standardizing transparency building blocks, the witness ecosystem(**), and building an authentication mechanism for system transparency that weaves it all together.
I'm more than happy to share my notes with you. Best case you build exactly what we want. Then we don't have to do it. :)
*: https://mullvad.net/en/blog/system-transparency-future
**: https://witness-network.org
The website itself is rather vague in its stated goals and mechanisms.
A concrete example of that is electronic ballots, which is a topic I often bump heads with the rest of HN about, where a hardware identity token (an electronic ID provided by the state) can be used to participate in official ballots, while both the citizen and the state can have some assurance that there was nothing interceding between them in a malicious way.
Does that make sense?
https://news.ycombinator.com/item?id=45743756
https://arstechnica.com/security/2025/09/intel-and-amd-trust...
You'll be free to run your own Linux, but don't expect it to work outside of niche uses.
I have this fond memory of that Notary in Germany who did a remote attestation of me being with him in the same room, voting on a shareholder resolution.
While I was currently traveling on the other side of the planet.
This great concept that totally will not blow up the planet has been proudly brought to you by Ze Germans.
No matter what your intentions are: It WILL be abused and it WILL blow up. Stop this and do something useful.
[While systemd had been a nightmare for years, these days its actually pretty good, especially if you disable the "oh, and it can ALSO create perfect eggs benedict and make you a virgin again while booting up the system!" part of it. So, no bad feelings here. Also, I am German. Also: Insert list of history books here.]
One bit of good news is that maybe LP will get less involved in systemd.
See Android; or, where you no longer own your device, and if the company decides, you no longer own your data or access to it.
Yes, system data should be locked to the system with a TPM. That way your system can refuse to boot if it's been modified to steal your user secrets.
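And you can already do this today entirely under your own control. A sketch for an existing LUKS2 volume (device path illustrative; keep a regular passphrase or recovery key enrolled so the machine still answers to you and not just to the TPM):

    # Bind unlocking to TPM PCR 7 (the Secure Boot policy/state):
    systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p2

    # /etc/crypttab entry so it unlocks automatically at boot:
    # root  /dev/nvme0n1p2  -  tpm2-device=auto

If someone swaps out the boot chain, PCR 7 changes and the TPM refuses to release the key.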
Preventing this was the reason we had free software in the first place.
Jesus.
Probably obvious from the surnames but this is the first time I've seen a EU company pop up on Hacker News that could be mistaken for a Californian company. Nice to see that ambition.
I understand systemd is controversial; that can be debated endlessly. But the executive team and engineering team look very capable. It will be interesting to see where this goes.
I am glad to see these efforts are now under an independent firm rather than being directed by Microsoft.
What is the ownership structure like? Where/who have you received funding from, and what is the plan for ongoing monetization of your work?
Would you ever sell the company to Microsoft, Google, or Amazon?
Thanks.
No matter what the founders say, the answer to this question is always yes.
I don't think you will ever get a response to that
I'm not asking for a client list, to be clear.
https://fosdem.org/2026/schedule/speaker/lennart_poettering/
What does this mean? Why would anyone want this? Can you explain this to me like I'm five years old?
[1] https://en.wikipedia.org/wiki/Confused_deputy_problem
systemd kept him away from pulseaudio and whoever is/was maintaining that after him was doing a good job of fixing it.
See: “it’s just an init system” where it’s now also a resolver, log system, etc.
I can buy good intentions, but this opens up too much possibility for not-so-good-intended consequences. Deliberate or emergent.
it's a buggy-as-hell resolver, buggy-as-hell log system, buggy-as-hell ntp client, buggy-as-hell network manager, ...
For individuals, IMO the risk mostly comes from software they want to run (install scripts or supply-chain attacks). So if the end user is in control of what gets signed, I don't see much benefit. Unless you force users to use an app store...
Atomicity means you can track every change, and every change is so small that it affects only one thing and can be traced, replayed, or rolled back. It's like going from A to B and being able to return to A (or go to B again) in a deterministic manner.
See the "features" list from systemd 257/258 [0].
[0]: https://0pointer.net/blog/
- it looks like they want to build a ChromeOS without Google.
Whatever it is, I hope it doesn't go the usual path of a minimal support, optional support and then being virtually mandatory by means of tight coupling with other subsystems.
So we try to make every new feature that might be disruptive optional in systemd and opt-in. Of course we don't always succeed and there will always be differences in opinion.
Also, we're a team of people that started in open source and have done open source for most of our careers. We definitely don't intend to change that at all. Keeping systemd a healthy project will certainly always stay important for me.
Thanks for the answer. Let me ask you something close with a more blunt angle:
Considering most of the tech is already present and shipping in current systemd, what prevents our systems from becoming an immutable monolith like macOS or current Android at the flick of a switch?
Or a graver scenario: what prevents Microsoft from mandating the removal of user key enrollment and the Secure Boot toggle, so that every Linux distribution has to go through Microsoft's blessing to be bootable?
But we will never enforce using any of these features in systemd itself. It will always be up to the distro to enable and configure the system to become an immutable monolith. And I certainly don't think distributions like Fedora or Debian will ever go in that direction.
We don't really have any control over what Microsoft decides to do with Secure Boot. If they decide at one point to make Secure Boot reject any Linux distribution and hardware vendors prevent enrolling user owned keys, we're in just as much trouble as everyone else running Linux will be.
I doubt that will actually happen in practice though.
Then maybe you shouldn't be doing it?
Why are you buying hardware that Microsoft controls if you're concerned about this?
Theoretically, nothing. But it's worth pointing out that so far they have actually done the opposite: they currently mandate that hardware vendors must allow you to enroll your own keys. There was a somewhat questionable move recently where they introduced a 'more secure by default' branding in which the 3rd-party CA (used e.g. to sign shim for Linux) is disabled by default, but again, they mandated there must be an easy toggle to enable it. I don't begrudge them too much for it, because there have been multiple instances of SB bypass via 3rd-party signed binaries.
All of this is to say: this is not a scenario I'm worried about today. Of course this may change down the line.
But I'm losing hope with those.
Plus, it's an avoidant and reductionist take.
Note: I have nothing against BSDs, but again, this is not the answer.
Stop trying to make everyone act like you act.
Also, I know. A few of my colleagues have run {open, free, dragonfly}BSD as their daily drivers for more than two decades. Also, we have BSD-based systems in a couple of places.
However, as a user of almost all mainstream OSes (at the same time, for different reasons), and planning to add OpenBSD to that roster (taking care of a fleet takes time), I'd love for everyone to select the correct tool for their applications and not throw stones at people who don't act like them.
Please remember that we all sit in houses made of glass before throwing things at others.
Oh, also please don't make assumptions about people you don't know.
Yeah! Telling people what to do is rude!
> Anyone still using Linux on the desktop in 2026 should switch
Oh.
"Just don't use X" is in fact a very engaged and principled response. Try again.
If you were not a systemd maintainer and had started this project/company independently, targeting systemd, you would have to go through the same process as everyone else, and I would have expected the systemd maintainers to look at it objectively and review it with healthy skepticism before accepting it. But we cannot rely on those basic checks and balances anymore, and that's the most worrying part.
> that might be disruptive optional in systemd
> we don't always succeed and there will always be differences in opinion.
You (including the other maintainers) are still the final arbiter of what's disruptive. The differences of opinion in the past have mostly been settled as "deal with it", and that's the basis of the current skepticism.
What problem does this solve for Linux or people who use Linux? Why is this different from me simply enabling encryption on the drive?
In my case I am talking about myself. I prefer to actually know what is running on my systems and ensure that they are as I expect them to be and not that they may have been modified unbeknownst to me.
2. Are you looking for pilot customers?
Are these some problems you've personally been dealing with?
Then we discovered snapshot.debian.org wasn't feeling well, so that was another (important) detour.
Part of me wishes we had focused more on getting System Transparency in its entirety into production at Mullvad. On the other hand, I certainly don't regret us creating Tillitis TKey, Sigsum, taking care of the Debian snapshot service, and several other things.
Now, six years later, systemd and other projects have gotten a long way to building several of the things we need for ST. It doesn't make sense to do double work, so I want to seize the moment and make sure we coordinate.
Device attestation fails? No streaming video or audio for you (you obvious pirate!).
Device attestation fails? No online gaming for you (you obvious cheater!).
Device attestation fails? No banking for you (you obvious fraudster!).
Device attestation fails? No internet access for you (you obvious dissident!).
Sure, there are some good uses of this, and those good uses will happen, but this sort of tech will be overwhelmingly used for bad.
As per the announcement, we’ll be building a favorite color over the next months and sharing more information as it rolls out.
I can relate to people being rather hostile to the idea of boot verification, because this is a process that is really low-level and also something that we as computer experts rarely interact with deeply. The most challenging part of installing a Linux system is always installing the boot loader, potentially setting up a UEFI partition. These are things that I don't do every day and don't have deep knowledge of. And if things go wrong, it is extra hard to fix them. Secure Boot makes it even harder to understand what is going on. There is a general lack of knowledge about what happens behind the scenes, and it is really hard to learn about it. I feel that the people behind this project should keep XKCD 2501 in mind when talking to their fellow computer experts.
I mean, in theory, the idea is great. But it WILL be misused by greedy fucks.
I can see like a hundred ways this can make computing worse for 99% of people, and like 1-2 scenarios where it might actually be useful.
Like if the politicians pushing for chat control/on-device scanning of data come knocking again and actually get it through (they can try infinitely), tech like this will really be "useful". Oops, your device cannot produce a valid attestation, no internet for you.
Who is this for / what problem does it solve?
I guess security? Or maybe reproducability?
If there’s a path to profitability, great for them, and for me too; because it means it won’t be available at no charge.
A reliably attestable system has to nail the entire boot chain: BIOS/firmware, bootloader, kernel/initramfs pairs, the `init` process, and the system configuration. Flip a single bit anywhere along the process, and your equipment is now a brick.
Getting all of this right requires deep system knowledge, plus a lot of hair-pulling adjustment, assuming you have any hair left.
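You can at least watch the chain being measured, which takes some of the mystery out of it. With tpm2-tools:

    tpm2_pcrread sha256:0,4,7
    # PCR 0: core firmware code     PCR 4: boot manager / kernel
    # PCR 7: Secure Boot policy and keys
    # (conventional assignments per the TCG PC Client spec;
    #  exact usage varies by firmware)

Flip that single bit anywhere and the corresponding PCR value changes, which is the whole trick, and also why the brick scenario demands carefully enrolled fallback policies.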
I think this part of Linux has been underrated. The TPM is a powerful platform that is universally available, and Linux is the perfect OS to fully utilize it. The need for trust in the digital realm will only increase. Who knows, it may even integrate with cryptocurrency or social platforms. I wish them good luck.
I wish you great success
In that world, authoring technology that enables this even more is either completely mad or evil. To me Linux is not a technological object, it is also a political statement. It is about choice, personal freedom, acceptance of risk. If you build software that actively intends to take this away from me to put it into the hands of economic interests and political actors then you deserve all the hate you can get.
I have used Linux since the Slackware days. Poettering is the worst thing that has happened to the Linux ecosystem and, of course, he went on to work for Microsoft. Just to add a huge insult to the already painful injury.
This is not about security for the users. It's about control.
At least many in this thread are criticizing the project.
And, once again of course, it's from a private company.
Full of ex-Microsofties.
I don't know why anyone interested in hacking would cheer for this. But then maybe HN should be renamed "CN" (Corporate News) or "MN" (Microsoft News).
agreed, and now he's planning on controlling what remains of your machine cryptographically!
>We are building cryptographically verifiable integrity into Linux systems
I wonder what that means? It could be a good thing, but I tend to think it could be a privacy nightmare, depending on who controls the keys.
Somebody will use it, and eventually force it, if it exists, and I don't think gaming, especially games requiring anti-cheat, is worth that risk.
If that means Linux will not be able to overtake Windows' market share, that's OK. At least the year-of-Linux memes will still be funny.
e.g. https://support.faceit.com/hc/en-us/articles/19590307650588-...
> IOMMU is a powerful hardware security feature, which is used to protect your machine from malicious software
The ring-0 anticheat IS that fucking malicious software
(London. On some of my relatives.)
But I'm sure in this case when they achieve some kind of dominant position and Microsoft offers to re-absorb them they will do the honorable thing.
You. The money quote about the current state of Linux security:
> In fact, right now, your data is probably more secure if stored on current ChromeOS, Android, Windows or MacOS devices, than it is on typical Linux distributions.
Say what you want about systemd the project but they're the only ones moving foundational Linux security forward, no one else even has the ambition to try. The hardening tools they've brought to Linux are so far ahead of everything else it's not even funny.
It's not propaganda in any sense, it's recognizing that Linux is behind the state of the art compared to Windows/macOS when it comes to preventing tampering with your OS install. It's not saying you should use Windows, it's saying we should improve the Linux boot process to be a tight security-wise as the Windows boot process along with a long explanation of how we get there.
It's only secure from evil maid attacks if it can be wiped and reinitialised at any time.
Is it possible someone will eventually build a system that doesn't allow this? Yes. Is this influenced in any way by features of Linux software? No.
the guys that copy your bitlocker keys in the clear
As said above, it's about who controls the keys. It's either building your own castle or having to live with the Ultimate TiVo.
We'll see.
I have my reservations, ideas, and what it's supposed to do, but this is not a place to make speculations and to break spirits.
I'll put my criticism out politely when it's time.
Look, I hate systemd just as much as the next guy - but how are you getting "DRM" out of this?
Doing complex flows like "run an app to load keys from a remote server to unlock an encrypted partition" is far easier under systemd, and it has a dependency system robust enough to trigger that mount automatically if an app needing it starts.
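A sketch of what that wiring looks like, with invented unit names and paths, just to illustrate the dependency chain:

    # /etc/systemd/system/fetch-unlock-key.service
    [Unit]
    Description=Fetch volume key from remote key server and unlock
    Wants=network-online.target
    After=network-online.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/usr/local/bin/fetch-key --out /run/volume.key
    ExecStart=/usr/bin/cryptsetup open --key-file /run/volume.key /dev/sdb1 data

    # /etc/systemd/system/srv-data.mount
    [Unit]
    Requires=fetch-unlock-key.service
    After=fetch-unlock-key.service

    [Mount]
    What=/dev/mapper/data
    Where=/srv/data
    Type=ext4

Start any unit with RequiresMountsFor=/srv/data and the whole chain (network, key fetch, unlock, mount) comes up in order.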
There are also bad forms of remote attestation (like Google's variant that helps them let banks block you if you are running an alt-os). Those suck and should be rejected.
Edit: bri3d described what I mean better here: https://news.ycombinator.com/item?id=46785123
Let's say I accept this statement.
What makes you think trusted boot == remote attestation?
No, it's not. (And for that matter, neither is remote attestation)
You're conflating the technology with the use.
I believe that you have only thought about these technologies as they pertain to DRM, now I'm here to tell you there are other valid use cases.
Or maybe your definition of "DRM" is so broad that it includes me setting up my own trusted boot chain on my own hardware? I don't really think that's a productive definition.
This company is explicitly all about implementing remote attestation (which is a form of DRM):
https://amutable.com/events
> Remote Attestation of Imutable Operating Systems built on systemd
> Lennart Poettering
Is there a HN full moon out?
Again, this is wrong.
DRM is a policy.
Remote attestation is a technology.
You can use remote attestation to implement DRM.
You can also use remote attestation to implement other things.
They literally don't.
For a decade, I worked on secure boot & attestation for a device that was both:
- firmware updatable
- had zero concept or hardware that connected it to anything that could remotely be called a network
The update is predicated on a valid signature.
Would love to hear more of your thoughts on how the users of the device I worked on had their freedom restricted!
I guess my company, the user of the device that I worked on, was being harmed by my company, the creator of the device that I worked on. It's too bad that my company chose to restrict the user's freedom in this way.
Who cares if the application of the device was an industrial control scenario where errors are practically guaranteed to result in the loss of human life, and as a result are incredibly high value targets ala Stuxnet.
No, the user's right to run any code trumps everything! Commercial device or not, ever sold outside of the company or not, terrorist firmware update or not - this right shall not be infringed.
I now recognize I have committed a great sin, and hope you will forgive me.
IMO it's pretty clear that this is a server play because the only place where Linux has enough of a foothold to make client / end-user attestation financially interesting is Android, where it already exists. And to me the server play actually gives me more capabilities than I had: it lets me run my code on cloud provided machines and/or use cloud services with some level of assurance that the provider hasn't backdoored me and my systems haven't been compromised.
It's like designing new kinds of nerve gas, "quite sure" that it will only ever be in the hands of good guys who aren't going to hurt people with it. That's powerful naïveté. Once you make it, you can't control who has it and what they use it for. There's no take-backsies, that's why it should never be created in the first place.
The "bad" version, client attestation, is already implemented on Android, and could be implemented elsewhere but is only a parallel concept.
There is unmet industrial market demand for the (IMO) "not so bad / maybe even good" version, server attestation.
Interesting choice of analogy, comparing something whose singular purpose is to destroy living things with a computing technology that enforces what code is run.
Can you not see there might be positive, non-destructive applications of the latter? Are you the type of person that argues cars shouldn't exist due to their negative impacts while ignoring all the positives?
Why not adopt seL4 like everybody else who is not outright delusional[0][1]?
0. https://sel4.systems/Foundation/Membership/
1. https://sel4.systems/use.html
---
Making secure boot 100 times simpler would be a deffo plus.
Having said that, should this company not be successful, Mr Zbyszek Jędrzejewski-Szmek has potentially a glowing career as an artists' model. Think Rembrandt sketches.
I look forward to something like ChromeOS that you can just install on any old refurbished laptop. But I think the money is in servers.
One of the most grating pain points of the early versions of systemd was a general lack of humility, some would say rank arrogance, displayed by the project lead and his orbiters. Today systemd is in a state of "not great, not terrible" but it was (and in some circles still is) notorious for breaking peoples' linux installs, their workflows, and generally just causing a lot of headaches. The systemd project leads responded mostly with Apple-style "you're holding it wrong" sneers.
It's not immediately clear to me what exactly Amutable will be implementing, but it smells a lot like some sort of DRM, and my immediate reaction is that this is something that Big Tech wants but that users don't.
My question is this: Has Lennart's attitude changed, or can linux users expect more of the same paternalism as some new technology is pushed on us whether we like it or not?
It improves on about every level compared to what came before. And no, nothing is perfect and you sometimes have to troubleshoot it.
About ten years ago I took a three day cross-country Amtrak trip where I wanted to work on some data analysis that used mysql for its backend. It was a great venue for that sort of work because the lack of train-internet was wonderful to keep me focused. The data I was working with was about 20GB of parking ticket data. The data took a while to process over SQL which gave me the chance to check out the world unfolding outside of the train.
At some point, mysql (well, mariadb) got into a weird state after an unclean shutdown that put itself into recovery mode where upon startup it had to do some disk-intensive cleanup. Thing is -- systemd has a default setting (that's not readily documented, nor sufficiently described in its logs when the behavior happens) that halts the service startup after 30 seconds to try again. On loop.
My troubleshooting attempts were unsuccessful. And since I deleted the original csv files to save disk space, I wasn't able to even poke at the CSV files through python or whatnot.
So instead of doing the analysis I wanted to do on the train, I had to wait until I got to the end of the line to fix it. Sure enough, it was some default 30s timeout that's not explicitly mentioned nor commented out like many services do.
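(For anyone who hits this today: the fix I eventually found was a per-unit drop-in, which survives package upgrades:

    systemctl edit mariadb.service
    # then in the editor:
    #   [Service]
    #   TimeoutStartSec=infinity    # or a bounded value like 30min

The exact default, and whether the unit ships its own override, varies by distro, which is part of why it's so hard to discover on a train.)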
So, saying that things are "much better in every way" really falls on deaf ears and is reminiscent of the systemd devs' dismissive/arrogant behavior that many folk are frustrated about.
https://bugzilla.redhat.com/show_bug.cgi?id=1780979
https://github.com/systemd/systemd/commit/a083b4875e8dec5ce5...
That was far from the only time that the systemd developers decided to just break norms or do weird things because they felt like it, and then poorly communicate that change. Change itself is fine, it's how we progress. But part of that arrogance that you mentioned was always framing people who didn't like capricious or poorly communicated changes as being against progress, and that's always been the most annoying part of the whole thing.
https://linuxiac.com/systemd-tmpfiles-issue/
How can I cancel a systemd startup task that blocks the login prompt? How is forcing me to wait for DHCP on a network interface that isn't even plugged in a better experience?
It’s not really the fault of systemd; it just enables new possibilities that were previously difficult/impossible and now the usage of said possibilities is surfacing problems.
On other inits, I can hit ctrl-C to break out of a poorly configured setup. Yes, it's more difficult when there's potentially parallelism. But systemd is not uniformly better than everything else when it lacks interactivity.
And it might not be better than everything else if common distributions set it up wrong because it's difficult to set it up right. If we're willing to discount problems related to one init system because the distribution is holding it wrong, then why don't we blame problems with other init systems on distributions or applications, too? There's no need to restart crashing applications if applications don't crash, etc.
There are serious problems with the systemd paradigm, most of which I couldn't argue for or against. But at least in Void, I can remove NetworkManager altogether, use cron as I always have, and generally remain free to do as I please, until eventually every package there is has systemd dependencies, which seems frightfully plausible at this pace.
Void is as good as I could have wanted. If that ever goes, I guess it's either BSD or a cave somewhere.
I'm glad to see the terse questions here. They're well warranted.
Of course you can run cron as well and have all your jobs run twice in two different ways, but that's only pedantically possible, as it's a completely useless way to do things.
systemd itself only has 2 references to "crontab" in its entire codebase and both of those are in man-pages.
My educated guess is that some other package is installing a generator to generate systemd units out of the crontab (e.g. https://github.com/systemd-cron/systemd-cron)
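Roughly, such a generator turns each crontab line into a timer/service pair at boot; the generated unit names below are illustrative, not exact:

    # crontab:  */5 * * * *  /usr/local/bin/sync.sh
    # becomes something like:
    #   cron-sync-0.timer    with  OnCalendar=*:0/5
    #   cron-sync-0.service  with  ExecStart=/usr/local/bin/sync.sh

So the jobs run once, via timers; the crontab is just the source format.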
If systemd-less Linux ever goes, there are indeed still the BSDs. But I thought long and hard about this and already did some testing: I used to run Xen back in the early hardware-virt days and nowadays I run Proxmox (still, sadly, systemd-based).
A hypervisor with a VM and GPU passthrough to the VM is at least something too: it's going to be a long, long while before the people who want to take away our ability to control our machines can prevent us from running a minimal hypervisor and then the "real" OS in a VM controlled by the hypervisor.
I did GPU passthrough tests and everything works just fine: be it Linux guests (which I use) or Windows guests (which I don't use).
My "path" to dodge the cave you're talking about is going to involve a hypervisor (at the moment I'm looking at FreeBSD's bhyve) and then a VM running systemd-less Linux.
And seeing that, today, we can run just about every old system under the sun in a VM, I figure we'll all be long dead before evil people manage to prevent us from running the Linux we want, the way we want.
You're not alone. And we're not alone.
I simply cannot stand the insufferable arrogance of Agent Poettering. Especially given the kitchen sink that systemd is (systemd isn't exactly a home run, and many are realizing that fact now).
System shutdown/reboot is now unreliable. Sometimes it will be just as quick as it was before systemd arrived, but other times, systemd will decide that something isn't to its liking, and block shutdown for somewhere between 30 seconds and 10 minutes, waiting for something that will never happen. The thing in question might be different from one session to the next, and from one systemd version to the next; I can spend hours or days tracking down the process/mount/service in question and finding a workaround, only to have systemd hang on something else the next day. It offers no manual skip option, so unless I happen to be working on a host with systemd's timeouts reconfigured to reduce this problem, I'm stuck with either forcing a power-off or having my time wasted.
Something about systemd's meddling with cgroups broke the lxc control commands a few years back. To work around the problem, I have to replace every such command I use with something like `systemd-run --quiet --user --scope --property=Delegate=yes <command>`. That's a PITA that I'm unlikely to ever remember (or want to type) so I effectively cannot manage containers interactively without helper scripts any more. It's also a new systemd dependency, so those helper scripts now also need checks for cgroup version and systemd presence, and a different code path depending on the result. Making matters worse, that systemd-run command occasionally fails even when I do everything "right". What was once simple and easy is now complex and unreliable.
At some point, Lennart unilaterally decided that all machines accessed over a network must have a domain name. Subsequently, every machine running a distro that had migrated to systemd-resolved was suddenly unable to resolve its hostname-only peers on the LAN, despite the DNS server handling them just fine. Finding the problem, figuring out the cause, and reconfiguring around it wasn't the end of the world, but it did waste more of my time. Repeating that experience once or twice more when systemd behavior changed again and again eventually drove me to a policy of ripping out systemd-resolved entirely on any new installation. (Which, of course, takes more time.) I think this behavior may have been rolled back by now, but sadly, I'll never get my time back.
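(If memory serves, the opt-out that restores the old behavior is a one-liner on systemd 246 or newer; worth double-checking against your version's man page:

    # /etc/systemd/resolved.conf
    [Resolve]
    ResolveUnicastSingleLabel=yes   # forward single-label names to DNS again

    # then: systemctl restart systemd-resolved

Cold comfort for the hours already lost, but it beats ripping resolved out on boxes where you can't.)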
There are more examples, but I'm tired of re-living them and don't really want to write a book. I hope these few are enough to convey my point:
Systemd has been a net negative in my experience. It has made my life markedly worse, without bringing anything I needed. Based on conversations, comments, and bug reports I've seen over the years, I get the impression that many others have had a similar experience, but don't bother speaking up about it any more, because they're tired of being dismissed, ignored, or shouted down, just as I am.
I would welcome a reliable, minimal, non-invasive, dependency-based init. Systemd is not it.
Also trying to use systemd with podman is frustrating as hell. You just cannot run a system service using podman as a non-root user and have it work correctly.
My understanding is quadlet does not solve this, and my options are calling "systemctl --user" or "--userns auto". I would love to be wrong here.
Err... You just need to run `podman-compose systemd`?
I have my entire self-hosted stack running with systemd-controlled Podman, in regular user accounts.
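Quadlet (podman >= 4.4) is the piece that made this click for me. A minimal sketch, rootless, in a plain user account; the unit name and image are illustrative:

    # ~/.config/containers/systemd/web.container
    [Unit]
    Description=nginx via quadlet

    [Container]
    Image=docker.io/library/nginx:latest
    PublishPort=8080:80

    [Install]
    WantedBy=default.target

    # then:
    systemctl --user daemon-reload
    systemctl --user start web.service
    loginctl enable-linger "$USER"   # start at boot, not merely at login

Rootless caveats still apply (ports below 1024, user namespaces), but "system service as a non-root user" is exactly what this targets.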
You're free to root your phone. You're free to run whatever you want. You're just not entitled to have third parties trust that device with their systems and money. Same as you're free to decline STD testing - you just don't get to then demand unprotected sex from partners who require it.
While the bank use case makes a compelling argument, device attestation won't be used for just banks. It's going to be every goddamned thing on the internet. Why? Because why the hell not: it further pushes the costs of doing business for banks/MSPs/email providers/cloud services onto the customer and assigns them more of the liability.
It will also further the digital divide as there will be zero support for devices that fail attestation at any service requiring it. I used to think that the friction against this technology was overblown, but over the last eighteen months I've come to the conclusion that it is going to be a horrible privacy sucking nightmare wrapped in the gold foil of security.
I've been involved in tech a long, long time. The first thing I'm going to do when I retire is start chucking devices. I'm checking-out, none of this is proving to be worth the financial and privacy costs.
This is not a persuasive argument.
You are also ignoring the fact that YOU can use remote attestation to verify remote computers are running what they say they are.
"I've been involved in tech a long, long time. The first thing I'm going to do when I retire is start chucking devices. I'm checking-out, none of this is proving to be worth the financial and privacy costs."
You actually sound like you are having a nervous breakdown. Perhaps you should take a vacation.
But it's a bank, right? It's my money.
If there were a real, effective market (which there is not in any country on Earth, especially for banks), you could say: vote with your money, choose a bank that suits you. But that is impossible even with a bakery, let alone with banks, in a market that is strictly regulated (partly as a result of lobbying by established institutions, to protect themselves!).
So, on one hand, I must use banks (I cannot pay for many things in cash; here, where I live, most bars and many shops don't accept cash, for example, and that is the result of government policy and regulation), and on the other hand, banks are not treated as essential the way access to air and water is, so they can dictate any terms they want.
I see this situation completely screwed.
And we are discussing this movement here. You know, give him an inch and he'll take a yard.
"Those who give up freedom for security deserve neither."
Avoiding backdoors, as a private person, is something you can only solve by having the hardware at your place, because hardware can ALWAYS have backdoors, since hardware vendors do not fix their shit.
From my point of view it ONLY gives control and possibilities to large organizations like governments and companies, which in turn use them to control citizens.
So, some of the people doing "typical HN rage-posting about DRM" are also absolutely right.
The capabilities locking down macOS and iOS and related hardware also can be used for good, but they are not used for that.
What do you mean by this?
Is the concern that systemd is suddenly going to require that users enable some kind of attestation functionality? That making attestation possible or easier is going to cause third parties to start requiring it for client machines running Linux? This doesn't even really seem to be a goal; there's not really money to be made there.
As far as I can tell the sales pitch here is literally "we make it so you can assure the machines running in your datacenter are doing what they say they are," which seems pretty nice to me, and the perversions of this to erode user rights are either just as likely as they ever were or incredibly strange edge cases.
So, every PC sold to consumers is sanctioned by Microsoft, and that requirements list contains Secure Boot and TPM-based requirements, too.
If Microsoft decides to eliminate enrollment of user keys and Secure Boot toggle, they can revoke current signing keys for "shims" and force Linux distributions to go full immutable to "sign" their bootloaders so they can boot. As said above, it's not something Amutable can control, but enable by proxy and by accident.
Look, I work in a datacenter, with a sizeable fleet. Being able to verify that fleet is desirable for some kinds of operations, I understand that. On the other hand, like every double edged sword, this can cut in both ways.
I just want to highlight that, that's all.
Now we have immutable distributions (SUSE, Fedora, NixOS). We have the infrastructure for attestation (systemd's UKI, image-based boot, and other immutability features), TPMs, and, controversially, uutils (which is MIT-licensed and has the stated goal of replacing all GNU userspace).
You can build an immutable and adversarial userspace where you don't have to share the source, and require every boot and application call to be attested. The theoretical thickness of the wall is much greater, and this theoretical state is much easier to achieve.
20 years ago the only barrier was booting. After that everything was free. Now it's possible to boot into a prison where your every ls and cd command can be attested.
Oh, Rust is memory safe. Good luck finding holes.
What? As just one example, dm-verity was merged into the mainline kernel 13 years ago. I built immutable, verified Linux systems at least ten years ago, and it was considered old hat by the time I got there.
> The best you could do was, TiVoization, but that would be too obvious and won't fly.
What does this even mean? "TiVoization" is the slang for "you get a device that runs Linux, you get the GPL sources, but you can't flash your own image on the device because you don't own the keys." This was the exact same problem then as it is now, and just as "obvious"?
I understand the fears that come from client attestation (certainly, the way it has been used on Android has been majorly detrimental to non-Google ROMs), but, to the Android point, the groundwork has always been there.
I'd be very annoyed if someone showed up and said "we're making a Linux-based browser attestation system that your bank is going to partner on," but nobody has even gone this direction on Windows yet.
> Oh, Rust is memory safe. Good luck finding holes.
I break secure boot systems for a living and I'd say _maybe_ half of the bugs I find relate to memory safety in a way Rust would fix. A lot of systems already use tools which provide very similar safety guarantees to Rust for single threaded code. Systems are definitely getting more secure and I do worry about impenetrable fortresses appearing in the near future, but making this argument kind of undermines credibility in this space IMO.
I do think this approach might get used for client attestation in gaming, which I actually don’t mind; renting/non-owning a client that lets me play against non cheaters is a pretty good gaming outcome, and needing a secure configuration to run games seems like a fine trade for me (vs for example needing a secure desktop configuration to access my bank, which would be irksome).
and each time the doors have been blasted wide off by huge security vulnerabilities
the attack surface is simply too large when people can execute their own code nearby
You will get 10000 zero days before you get a single direct attack at hardware
1. How will the company make money? (You have probably been asked that a million times :).)
2. Similar to the sibling: what are the first bits that you are going to work on.
At any rate, super cool and very nice that you are based in EU/Germany/Berlin!
2. Given the team, it should be quite obvious there will be a Linux-based OS involved.
Our aims are global but we certainly look forward to playing an important role in the European tech landscape.
I take it that you are not at this stage able to provide details of the nature of the path to revenue. On what kind of timescale do you envisage being able to disclose your revenue stream/subscribers/investors?
As I understand it, the main customers for this sort of thing are companies making Tivo-style products - where they want to use Linux in their product, but they want to lock it down so it can't be modified by the device owner.
This can be pretty profitable: once your customers have rolled out a fleet of hardware locked down to only run kernels you've signed, they're effectively locked into your signing service.
[1] https://ubuntu.com/core
That is why Ubuntu Core (and similar) exist. More secure, better update strategy, lower net cost. I don't agree with the licensing or pricing model, but there are perfectly good technical reasons to use it.
A "robust path to revenue" plus a Linux-based OS and a strong emphasis on EU / German positioning immediately triggers some concern. We've seen this pattern before: wrap a commercially motivated control layer in the language of sovereignty, security, or European tech independence, and hope that policymakers, enterprises, and users don't look too closely at the tradeoffs.
Europe absolutely needs stronger participation in foundational tech, but that shouldn't mean recreating the same centralized trust and control models that already failed elsewhere, just with an EU flag on top. 'European sovereignty' is not inherently better if it still results in third-party gatekeepers deciding what hardware, kernels, or systems are "trusted."
Given Europe's history with regulation-heavy, vendor-driven solutions, it's fair to ask:
Who ultimately controls the trust roots?
Who decides policy when commercial or political pressure appears?
What happens when user interests diverge from business or state interests?
Linux succeeded precisely because it avoided these dynamics. Attestation mechanisms that are tightly coupled to revenue models and geopolitical branding risk undermining that success, regardless of whether the company is based in Silicon Valley or Berlin.
Hopefully this is genuinely about user-verifiable security and not another marketing-driven attempt to position control as sovereignty. Healthy skepticism seems warranted until the governance and trust model are made very explicit.
I want to know if they raised VC money or not.
Either way at least it isn't anything about AI and has something to do with hard cryptography.
The systemd crowd are perhaps worse than GNOME, as regards "my way or the highway", and designing systems that are fundamentally inadequate for the general use-case. I don't think ethnicity or gender diversity quotas would substantially improve their decision-making: all it would really achieve is to make it harder to spot the homogeneity in a photograph. A truly diverse team wouldn't make the decisions they make.
https://news.ycombinator.com/newsguidelines.html
Why should we trust microsofties to produce something secure and non-backdoored?
And, lastly, why should Linux's security be tied to a private company? Oooh, but it's of course not about security: it's about things like DRM.
I hope Linus doesn't get blinded here: systemd managed to get PID 1 on many distros, but thankfully they haven't yet managed to control the kernel. I hope this project isn't the final straw that lets them finally meddle with the kernel.
Currently I'm doing:
But Proxmox is Debian-based, and Debian really drank too much of the systemd kool-aid. So my plan is:
And then I'll be, at long last, systemd-free again.

This project is an attack on general-purpose computing.
Imagine you're using a program hosted on some cloud service S. You send packets over the network; gears churn; you get some results back. What are the problems with such a service? You have no idea what S is doing with your data. You incur latency, transmission time, and complexity costs using S remotely. You pay, one way or another, for the infrastructure running S. You can't use S offline.
Now imagine instead of S running on somebody else's computer over a network, you run S on your computer instead. Now, you can interact with S with zero latency, don't have to pay for S's infrastructure, and you can supervise S's interaction with the outside world.
But why would the author of S agree to let you run it? S might contain secrets. S might enforce business rules S's author is afraid you'll break. Ordinarily, S's authors wouldn't consider shipping you S instead of S's outputs.
However --- if S's author could run S on your computer in such a way that he could prove you haven't tampered with S or haven't observed its secrets, he can let you run S on your computer without giving up control over S. Attestation, secure enclaves, and other technologies create ways to distribute software that otherwise wouldn't exist. How many things are in the cloud solely to enforce access control? What if they didn't have to be?
Sure, in this deployment model, just like in the cloud world, you wouldn't be able to run a custom S: but so what? You don't get to run your custom S either way, and this way, relative to cloud deployment, you get better performance and even a little bit more control.
Also, the same thing works in reverse. You get to run your code remotely in a such a way that you can trust its remote execution just as much as you can trust that code executing on your own machine. There are tons of applications for this capability that we're not even imagining because, since the dawn of time, we've equated locality with trust and can now, in principle, decouple the two.
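For the curious, the primitive underneath is small. A hand-wavy sketch with tpm2-tools (real deployments add an enrollment step proving the AK lives in a genuine TPM, plus an event log to interpret the PCRs; NONCE_HEX is a verifier-chosen challenge):

    # Prover: create an attestation key and quote the boot measurements
    tpm2_createek -c ek.ctx -G rsa -u ek.pub
    tpm2_createak -C ek.ctx -c ak.ctx -u ak.pub -n ak.name
    tpm2_quote -c ak.ctx -l sha256:0,4,7 -q "$NONCE_HEX" \
        -m quote.msg -s quote.sig -o quote.pcrs

    # Verifier: check signature, freshness (nonce), and PCR contents
    tpm2_checkquote -u ak.pub -m quote.msg -s quote.sig \
        -f quote.pcrs -q "$NONCE_HEX"

Nothing in that exchange dictates policy; policy is whatever the verifier decides to accept, which is exactly where the politics live.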
Yes, bad actors can use attestation technology to do all sorts of user-hostile things. You can wield any sufficiently useful tool in a harmful way: it's the utility itself that creates the potential for harm. This potential shouldn't prevent our inventing new kinds of tool.
But it won't be used like that. It will be used to take user freedoms out.
> But why would the author of S agree to let you run it? S might contain secrets. S might enforce business rules S's author is afraid you'll break. Ordinarily, S's authors wouldn't consider shipping you S instead of S's outputs.
That use case you're describing is already covered, and is currently being done with DRM, either in the browser or in the app itself.
You are right that it will make this easier for app authors to do, and in theory it is still a better option for video games than kernel anti-cheat. But it still limits user freedoms.
> Yes, bad actors can use attestation technology to do all sorts of user-hostile things. You can wield any sufficiently useful tool in a harmful way: it's the utility itself that creates the potential for harm. This potential shouldn't prevent our inventing new kinds of tool.
The majority of uses will be user-hostile things, because those are the only cases where someone will decide to fund it.
To be honest, mainly companies need that; personal users do not. And additionally, companies are NOT restrained by governments from exploiting customers as much as possible.
So... I also see it as enslaving users. And tell me: where does this actually give PRIVATE persons, not companies, a net benefit?
> This potential shouldn't prevent our inventing new kinds of tool.
Why do I see someone who wants to build an atomic bomb for shits and giggles using this argument, too? Hyperbolic as my counter is, the argument given here is not a good one either.
The immutable-Linux people build tools without building the good tools that would actually make it easier for private people at home to adapt an immutable Linux to THEIR liking.
In my personal philosophy, it is never bad to develop a new technology.
"Trust us" is never a good idea with profit seeking founders. Especially ones who come from a culture that generally hates the hacker spirit and general computing.
You basically wrote a whole narrative of things that could be, but the team is not even willing to make promises as big as yours. Their answers were essentially just "trust us, we're cool guys" and "don't worry, the money will work out", wrapped in average PR speak.
Not just can. They will use it.
https://news.ycombinator.com/item?id=18321884
- Linux is better now
- Nothing bad