Even just on a theoretical level, I am not really sure what the use case of this system is. For most keys, like SSL certs, this is just too impractical. For anything that has significant business value (like the iOS signing key), I don't think any business would give up all control of such a key to the whims of 3 out of 5 people.
> Enter X
> How It Works (Without the PhD)
> Why Y Should Care

...and an incredibly handwavy, shallow explanation of why this actually works ("Through a clever sequence of oblivious transfers and what’s called multiplicative-to-additive share conversion, they each compute a partial signature.")
I don't get it. If you want a blog, write a blog. If you don't want a blog, don't write a blog. But why use an LLM to create a slopblog? It just wastes EVERYONE's time and energy. How disappointing.
Not sure if it's AI slop yet, but I also found the core part (the "oblivious transfers") to be explained too handwavy to really understand the properties of this system. I don't want to know all the mathematical details, but I do want to understand who is exchanging what data with whom. "oblivious transfer" doesn't tell me anything here.
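For what it's worth, the OT-based MtA step can be sketched in a few lines, and that at least answers "who exchanges what with whom". This is a toy model, not the real protocol: the modulus and the `ot_1of2` function are stand-ins (a real 1-out-of-2 oblivious transfer hides the choice bit from the sender and the unchosen message from the receiver; here it's plain indexing). But the data flow is the actual Gilboa-style construction: Alice sends one masked pair per bit, Bob picks one element per bit of his secret.

```python
import secrets

q = 2**61 - 1  # a prime modulus; real protocols use the curve group order

def ot_1of2(messages, choice_bit):
    """Simulated 1-out-of-2 oblivious transfer.

    In a real OT the sender learns nothing about choice_bit and the
    receiver learns only the chosen message; here we just index, which
    captures the data flow but none of the privacy.
    """
    return messages[choice_bit]

def mta(a, b, bits=64):
    """Gilboa-style multiplicative-to-additive conversion (toy).

    Alice holds a, Bob holds b (with b < 2**bits).  After `bits` OTs
    they hold alpha and beta with alpha + beta == a * b (mod q),
    without either party revealing their input.
    """
    alpha, beta = 0, 0
    for i in range(bits):
        r = secrets.randbelow(q)                   # Alice's fresh mask
        pair = (r, (r + a * pow(2, i, q)) % q)     # (m0, m1) = (r, r + a*2^i)
        choice = (b >> i) & 1                      # Bob's i-th bit of b
        beta = (beta + ot_1of2(pair, choice)) % q  # Bob sums chosen messages
        alpha = (alpha - r) % q                    # Alice sums -r
    return alpha, beta

a = secrets.randbelow(q)
b = secrets.randbelow(2**64)
alpha, beta = mta(a, b)
assert (alpha + beta) % q == (a * b) % q
```

So per MtA invocation the exchange is: Alice → Bob, one pair of masked field elements per bit; Bob → nothing about b back to Alice (that's the whole point of the OT). The threshold-ECDSA papers then use these additive shares to build the partial signatures the article waves at.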
The other (maybe more interesting) question is how this tech would be deployed. So OK, we have a system where something can only be signed/decrypted/encrypted/etc. if several parties are in agreement. Who should the parties be? How is the threshold itself actually managed?
OP also seems to drift between different usage scenarios here:
- some sort of collectively owned good (like the DAO, or resources in a cooperative?): seems straightforward on a technical level (every owner has a partial key) but also a niche use case, and quite inflexible: what happens if an owner drops out or you want to introduce a new one? What happens if you want to change the quorum?
- traditional authentication of individual users against a server, in a federated setup like the fediverse: seems like the most practical use case. One party is the user, the other is the server, and the verifying parties would be the other servers of the network. But then you have to pick your poison when setting the quorum: either the quorum is "any one party can decrypt the data", at which point you're no better off than with normal password auth; or it's "both parties are needed", which protects against the user or the server accidentally leaking the key, but then you're back to a single point of failure if either party accidentally loses its share.
- the last scenario would be server-side keys that could cause massive problems if they leaked. But I don't understand at all who the other parties should be here. Also, how would this be better than HSMs?
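On the "owner drops out / change the quorum" question: with Shamir-style sharing (one common construction; the article may well use a different scheme) the threshold is just the degree of a polynomial, so changing it means running a resharing protocol rather than minting a new key. A minimal sketch of the k-of-n mechanics:

```python
import secrets

P = 2**127 - 1  # prime field; illustrative choice only

def share(secret, k, n):
    """Split `secret` into n Shamir shares; any k of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

key = secrets.randbelow(P)
shares = share(key, k=3, n=5)
assert reconstruct(shares[:3]) == key   # any 3 of the 5 suffice
assert reconstruct(shares[2:]) == key
# "Changing the quorum" = a resharing protocol: each holder re-splits
# its own share under the new (k, n) and the parties combine the
# pieces, so the key itself is never reassembled in one place.
```

Note this toy reconstructs the key in the clear, which is exactly what the threshold-signing protocols are designed to avoid; it only illustrates where the quorum parameter lives.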
Yeah, AI blogs are close to worthless. It’s a circular feed of slop for LLMs to be trained on. If I can just talk to the LLM to get the same content, I don’t want to be directly reading it.
What I want to read is well-researched and deeply considered pieces that do a good job explaining concepts in a fresh way and help me learn something new. Sure, use AI to help get there, but if you haven’t done much research or haven’t thought about it yourself beyond the prompts… I don’t want to read it.
The article does touch on HSMs but might be missing the point of them?
> A compromised server no longer means a compromised key
Proper use of an HSM means that even the owner of the private key is not allowed to access it. You sign your messages within the secure context of the HSM. The key never leaves. It cannot become compromised if the system is configured correctly.
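To make that concrete, here's a caricature of the HSM trust model in a few lines of Python. The class, the HMAC-based "signature", and the private attribute are all invented for illustration; a real HSM enforces this boundary in tamper-resistant hardware behind an interface like PKCS#11, not with language-level access control:

```python
import hashlib
import hmac
import secrets

class ToyHSM:
    """Toy model of the HSM security boundary: the key is generated
    inside the device and there is deliberately no export operation.
    Callers get a sign() oracle, never the key material itself."""

    def __init__(self):
        self.__key = secrets.token_bytes(32)  # never returned to callers

    def sign(self, message: bytes) -> bytes:
        # Signing happens "inside" the device; only the tag leaves.
        return hmac.digest(self.__key, message, hashlib.sha256)

    def verify(self, message: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(self.sign(message), tag)

hsm = ToyHSM()
tag = hsm.sign(b"release-v1.2")
assert hsm.verify(b"release-v1.2", tag)
assert not hsm.verify(b"release-v1.3", tag)
```

The claim being modeled is that compromising the calling server yields at most the ability to request signatures, never the raw key.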
Since I've got control of the box I can now use it to sign any app. Isn't that bad enough?