26 comments

  • throw0101c 3 days ago
    Another option:

    SSH certificates have been around for a while now; you can create an in-house SSH CA so that credentials are short-lived (compared to on-laptop keys) and users have to authenticate to get a fresh one.

    To automate getting SSH certs there are a number of options, including the step-ca project, which can talk to OAuth/OIDC systems (Google, Okta, Microsoft Entra ID, Keycloak):

    * https://smallstep.com/docs/step-ca/provisioners/#oauthoidc-s...

    as well as cloud providers:

    * https://smallstep.com/docs/step-ca/provisioners/#cloud-provi...

    There are commercial offerings as well:

    * https://www.google.com/search?q=centrally+managed+ssh+certif...

    • EthanHeilman 3 days ago
      Step-ca is really cool and has a lot of templating and policy stuff opkssh doesn't currently have. However step-ca does require two trusted parties: your IDP and the SSH CA.

      The advantage of opkssh is that there is only one trusted party, your IDP.

      While not available in opkssh yet, OpenPubkey even has a way of removing the trust assumption in your IDP.

      I wonder if step-ca would ever consider using opkssh or the OpenPubkey protocol

      • Jnr 2 days ago
        Step CA has some open source parts but it is built in a way where it is inconvenient to use without their admin infra. I also had to do some modifications to their server to make it work properly. If OPKSSH doesn't need all that crap then I am all for it. I'll certainly give it a shot.
      • blueflow 2 days ago
        Theoretically, is the distinction between IDP and CA necessary? I kinda would expect the IDP to certify my pubkey.
        • EthanHeilman 2 days ago
          Currently IDPs don't care about user public keys. OpenPubkey manages to slip the user's public key into an issued ID Token without the IDP having to know about it.
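
          OpenPubkey's trick can be sketched in a few lines: the client commits to its public key in the OIDC nonce field, which the IDP signs without ever inspecting. This is a toy illustration of the idea, not the actual OpenPubkey wire format; the claim names ("upk", "rz") are made up here:

```python
import base64
import hashlib
import json
import secrets

def commit_pubkey_in_nonce(user_pubkey: str) -> tuple[str, dict]:
    """Build an OIDC nonce that is a hash commitment to the user's public key.

    The IDP signs whatever nonce the client supplies, so the issued ID Token
    ends up (unknowingly) binding the user's identity to this public key.
    """
    cic = {
        "upk": user_pubkey,            # the user's public key
        "rz": secrets.token_hex(32),   # fresh randomness, hides the key
    }
    digest = hashlib.sha256(json.dumps(cic, sort_keys=True).encode()).digest()
    nonce = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return nonce, cic

def verify_commitment(nonce: str, cic: dict) -> bool:
    """A verifier recomputes the hash from the revealed claims and compares."""
    digest = hashlib.sha256(json.dumps(cic, sort_keys=True).encode()).digest()
    return nonce == base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
```

          The SSH server then checks the commitment against the nonce inside the signed ID Token and lets the client prove possession of the committed key as usual.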

          Ideally IDPs are CAs for identity and ID Tokens have a public key field.

          There are neat projects and standards to do this, like OIDC-squared [0] and OIDC4VC [1], but it is unclear whether IDPs will implement them even if they are standardized. We do have DPoP now [2], but it isn't available for any of the use cases that are important to me. OpenPubkey is largely a productive expression of my frustration with public keys in tokens being a promised feature that never arrives.

          [0]: OIDC-squared https://jonasprimbs.github.io/oidc-squared

          [1]: OIDC4VC https://identity.foundation/jwt-vc-presentation-profile/

          [2]: RFC 9449, OAuth 2.0 Demonstrating Proof of Possession (DPoP) - https://datatracker.ietf.org/doc/html/rfc9449

  • zokier 3 days ago
    I don't love this.

    > Unfortunately, while ID Tokens do include identity claims like name, organization, and email address, they do not include the user’s public key. This prevents them from being used to directly secure protocols like SSH

    This seems like dubious statement. SSH authentication does not need to be key based.

    I understand the practicality of their approach, but I would have preferred this to be proper first-class authentication method instead of smuggling it through publickey auth method. SSH protocol is explicitly designed to support many different auth methods, so this does feel like a missed opportunity. I don't know openssh internals, but could this have been implemented through gssapi? That's the traditional route for ssh sso. If not gssapi, then something similar to it.

    https://datatracker.ietf.org/doc/html/rfc4462

    • EthanHeilman 3 days ago
      > This seems like dubious statement. SSH authentication does not need to be key based.

      Let's say you just use an ID Token as a bearer token to authenticate to SSH. The SSH server now has the secret you used to authenticate with. Doesn't this introduce replay attacks where the SSH server can replay your ID Token to log into other SSH servers?

      Whereas if your ID Token functions like a "certificate" issued by your IDP binding your identity to a public key, it is no longer a secret. You can just use your public key to prove you are you. No secrets leave your computer.

      My motto: always use public key rather than a bearer secret if possible.

      > I understand the practicality of their approach, but I would have preferred this to be proper first-class authentication method instead of smuggling it through publickey auth method

      Me too. I have a PR open to SSH3 (not connected with OpenSSH) so it can support OpenPubkey as a built-in authentication mechanism.

      https://github.com/francoismichel/ssh3/pull/146

      • jrozner 3 days ago
        I think it's interesting they're choosing to use certificates this way. If they're already using certs, why not just leverage sshca auth? Also, at the end of the day, it's still effectively a bearer token.

        I founded a company called Based Security last year in this space. We're looking for design partners currently. We host a CA for you (or you can host yourself if you want) and use ssh certificates and bind the user identity (oidc to the IdP) to a physical device (yubikey, secure enclave, tpm, etc.) This ensures that the user is both in possession of the physical device and that the credential can't be stolen without stealing the device, unlike the bearer token examples here.

        Currently we're offering support for GitHub and GitLab authentication, but it works out of the box with standard ssh tooling as well. It just currently requires manually handling user provisioning for standard ssh access.
        • EthanHeilman 3 days ago
          > Why not just leverage sshca auth?

          Because that has two trusted parties: the IDP and the SSH CA. OPKSSH has just one trusted party: the IDP.

          > This ensures that the user is both in possession of the physical device and that the credential can't be stolen without stealing the device, unlike the bearer token examples here. Currently we're offering support for GitHub and GitLab authentication but it works out of the box with standard ssh tooling as well. It just currently requires manually handling user provisioning for standard ssh access.

          That sounds valuable.

          Have you looked into OpenPubkey? The cosigner protocol supports binding hardware tokens to ID Tokens. It's not as fancy as having the SSH key pair live in the hardware token, but maybe we could figure out a way to get the best of both worlds.

          • jrozner 3 days ago
            I can understand the concern about having a second trusted party, but I think the value of utilizing the standard ssh ca auth flow is worth the potential risk. If you require keys in attested hardware and verify that before issuing certs, the actual attack becomes very difficult. You need to compromise the actual hardware, or compromise the CA in a pretty substantial way, to issue certs to untrusted private keys. The certificate alone doesn't actually do anything without the key.

            In addition to just being supported out of the box, we can also issue hardware-bound host keys, which allow us to offer bi-directional verification. We gain the benefit of all the standard PKI tooling (eg. revocation lists, ACME, etc.) and can use the same PKI for other scenarios (eg. mTLS, piv, etc.) by issuing x509 certificates instead. Our long-term plan is to move past ssh auth and have it be an attestable, immovable, hardware-backed identity that can be used for continuous authentication in other areas.

            I have looked into OpenPubKey briefly in the past but haven't spent a ton of time with it. We were going in a very different direction and it didn't seem particularly useful based on our goals or what we wanted to achieve.

            edit: Looking at the documentation https://docs.bastionzero.com/openpubkey-ssh/openpubkey-ssh/i... it seems like to use OpenPubKey you also need a fairly modern version of OpenSSH. It also requires that the user authenticating have sudo access on the machine, which doesn't sound great. It's not clear to me whether it's possible for the existing authorized_keys file to co-exist, or whether that's just to stop access using existing keys; standard ssh certs, by contrast, will co-exist, allowing for a non-binary rollout if there are use cases that need to be worked around.

            • EthanHeilman 3 days ago
              That documentation refers to a much older and closed source version of opkssh.

              > It seems like to use OpenPubKey you also need a fairly modern version of OpenSSH.

              On versions of OpenSSH older than 8.1 (2019), you may run into issues if you have a huge ID Token. That shouldn't be a problem for standard-sized ID Tokens, but some enterprise OIDC solutions put the phone book in an ID Token, and we have to care about that.

              > It also requires that the user authenticating have sudo access on the machine, which doesn't sound great.

              The user authenticating does not need sudo access. You only need sudo access to install it. You need sudo to install most software on servers.

              > It's not clear to me whether it's possible for the existing authorized_keys file to co-exist or whether that's just to stop access using existing keys

              opkssh works just fine in parallel with authorized_keys. We use the AuthorizedKeysCommand config option in sshd_config, so opkssh functions like an additional authorized_keys file. My recommendation is that you keep authorized_keys as a break-glass mechanism.
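
              For context, the sshd side of that wiring is just a couple of directives, along these lines (the exact opkssh verify arguments here are from memory and may differ; check the repo README):

                  # /etc/ssh/sshd_config
                  AuthorizedKeysCommand /usr/local/bin/opkssh verify %u %k %t
                  AuthorizedKeysCommandUser opksshuser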

      • slt2021 3 days ago
        > The SSH server now has the secret you used to authenticate with.

        secrets can be made unique per connection and single use

        • confiq 3 days ago
          this ^

          GSSAPI can be more secure than public/private key if configured right.

          • EthanHeilman 3 days ago
            Can you explain more? I want to be a fan of GSSAPI
            • thyristan 2 days ago
              Don't know what the grandparent meant by GSSAPI, that is just an API for various underlying auth methods. But what people usually use together with GSSAPI is Kerberos.

              Kerberos can be very secure, much more so than CA-based or generally asymmetric-crypto-based approaches. Kerberos (if you ignore some extensions) uses symmetric cryptography, so it is less vulnerable to quantum computers. Use AES256 and you are fine; a quantum attacker can at most degrade this to a 128-bit level (according to current theories). Also, no weak RSA exponents, elliptic curve points at zero, variable-time implementations, or other common pitfalls of asymmetric crypto.

              The trusted third party ("KDC" in Kerberos) distributes keys ("tickets") for pairs of user and service-on-server, so mutual authentication is always assured, not like in ssh or https where you can just ignore that pesky certificate or hostkey error. Keys are short-lived (hours to weeks usually), but there are builtin mechanisms for autorenew if desired. Each side of a key can also be bound to a host identity, so stealing tickets (like cookies in HTTP) can be made harder (but not impossible).

              The KDC can (theoretically, though rarely implemented) also enforce authorisation by preventing users from obtaining tickets for services they are not authorized to use (the original idea of Kerberos was to use it just for authentication; the authorisation step is done by the service after authentication has been established).

              Single-Sign-On with all the usual protocols is included automatically, you log in to your workstation and get a ticket-granting-ticket that can then be used to transparently get all the subsequent tickets for services you are using. The hardest one to implement is actually HTTP, because browsers suck and just implement the bare minimum.
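
              In practice that client-side flow is just Kerberos plus a couple of stock OpenSSH options; a sketch, with a hypothetical realm and hostnames:

                  # First obtain a ticket-granting-ticket:
                  #   kinit alice@EXAMPLE.ORG
                  #
                  # ~/.ssh/config
                  Host *.corp.example.org
                      GSSAPIAuthentication yes
                      GSSAPIDelegateCredentials no   # only forward the TGT if you must

                  # /etc/ssh/sshd_config on the servers
                  GSSAPIAuthentication yes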

              However, the whole of Kerberos implementations is ancient, 1990s era software, packed with extensions upon extensions. You don't really want to expose a KDC to the open internet nowadays. The whole thing needs a redesign and rewrite in something safer than 1990s era C.

              Oh, and there are mechanisms for federation, like trust relationships between realms and KDCs, but nobody uses those beyond merging internal corporate networks.

              • slt2021 2 days ago
                the weak point of Kerberos is not the Kerberos protocol itself, but the most popular implementation of it being Microsoft Active Directory.

                Due to an incredible bloat of AD and entire Windows/Azure ecosystem, it has an enormous attack surface (multiply the universe of all windows ecosystem by the decades of old versions being supported for compatibility), and any vulnerability in the ecosystem (past and present) can lead to escalation and compromise of the Active Directory itself.

                so is Kerberos secure? as a protocol it is fine, cause it was developed at MIT by smart people.

                is MSFT AD/Windows ecosystem secure? HELL NO, stay away

          • dcow 3 days ago
            Doesn’t this require the server to consult the IDP on every log in, though, to make sure the id token is valid? One of the staples of ssh from a UX standpoint is that it’s peer to peer.
            • lxgr 3 days ago
              I suppose you could do something based on IDP-signed tokens, e.g. "valid for authentication to service x until <timestamp>"?
              • megous 3 days ago
                This is basically a ssh certificate then.
                • pluto_modadic 1 day ago
                  the difference is in /key management/. Key management is the hard part. Especially keyless SSH management. (things like sigstore's rekor/fulcio remove some complexity here). It is not "just a (manually generated) ssh certificate"
              • XorNot 3 days ago
                Kerberos tickets have timeouts on them already, it's a matter of configuration how long you wait.

                The thing is most enterprises want "user disabled" to be instant.

                Which of course leads to SSH keys all over the place anyway.

      • asmor 3 days ago
        This is the purpose of the not so well known audience claim.

        Though I'd still prefer to authenticate to something like Vault's SSH engine and get a very short-lived SSH certificate instead. No new software to install on your servers, just the CA key.
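
        That audience claim check is small once the token's signature has been verified; a minimal sketch (hand-rolled claim handling, not any particular library's API):

```python
import time

def check_id_token_claims(claims, expected_aud, now=None):
    """Reject an ID Token presented to the wrong service or after expiry.

    `claims` is the already-signature-verified payload of the token. The
    `aud` claim pins the token to one relying party, so a server that
    receives it cannot usefully replay it against a different service.
    """
    now = time.time() if now is None else now
    aud = claims.get("aud")
    audiences = aud if isinstance(aud, list) else [aud]
    return expected_aud in audiences and now < claims.get("exp", 0)
```

        A short-lived SSH certificate achieves similar pinning through its principals and validity window, with no verifier code on the server beyond the CA key.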

        • c45y 3 days ago
          CA key also allows those servers to avoid reaching out to some central location to validate which I've found to be a nice side bonus for disaster recovery type scenarios.
      • pcthrowaway 3 days ago
        I'd assume the auth handshake would prevent this.

        - client connects to SSH server at IP X.X.X.X or hostname SomeHost

        - redirected to oAuth server

        - Client signs in and receives token scoped to X.X.X.X or hostname SomeHost

        - Client provides token to SSH server

    • rcarmo 3 days ago
      Yeah, I don't like this approach either. There was a lot of plumbing added to sshd to support pluggable auth methods, and having used a few of them (including TOTP, for instance), I am not really a fan of "extending" publickey.

      (Am also not really a fan of having to eventually use a browser for authenticating a terminal session, but that's another problem.)

      • johnisgood 3 days ago
        > (Am also not really a fan of having to eventually use a browser for authenticating a terminal session, but that's another problem.)

        That sounds awful, I hope this is not the direction we are heading towards.

        • cyberax 3 days ago
          You can actually have a fully command-line driven Single Sign-On workflow, even anchored in hardware (TouchID, FIDO tokens, etc.)

          It's not a common way to do it, but it's definitely a possibility.

          • johnisgood 3 days ago
            Like Yubikeys? I've always wanted to get one. My workflow does not require a browser though, thankfully.
        • godelski 3 days ago
          It happens when there's a cloudflared instance. It is quite annoying
          • johnisgood 3 days ago
            That is insane to me. It really requires a browser for a terminal session? No alternatives? Why does it require a browser?
            • godelski 3 days ago
              IDK if there was an alternative, but that's how it worked at a previous employer. You'd ssh, a tab would open in your browser, and you'd need to approve the connection. This doc I found suggests that my experience is just how it works (very end) [0]. But note that it says legacy (it did not previously, when I was using it).

              [0] https://developers.cloudflare.com/cloudflare-one/connections...

      • out-of-ideas 3 days ago
        not just a browser - but coupled with the javascript-as-an-operatingsystem which first assumes you are a bot, but then you prove to it that you are not. lol
        • mathfailure 3 days ago
          They reject the proofs now: they just show the spinner spinning indefinitely. Cloudflare is broken, and it has spread so widely that it looks like a cancer.
          • immibis 2 days ago
            I just get a red circle with a white line across it (a "no entry" traffic sign) and some message paraphrased as "You are a bot, now fuck off, bot."
    • zokier 3 days ago
      Googling around I see lot of work around this topic. One example is "Moonshot" project from Janet/Géant and the closely related abfab ietf wg
    • notTooFarGone 3 days ago
      I do love this - everything that makes passwords less used makes the world more secure. Everything that is additionally user friendly has the potential to be the new Let's Encrypt.
      • bayindirh 3 days ago
        I'd love to see the central repository get breached and tons of computers get new users instantly.

        I mean, the idea is nice. There's an alternative implementation already being used in some parts of the world, but with their own OIDC provider of choice.

        Decentralization is the key here.

        I can neither confirm nor deny the pun is intended.

        • notTooFarGone 3 days ago
          If you lose your root CA certificate you sure are done for too.

          Is it better than passwords? 100% - is it perfect? It does not have to be for a lot of use cases.

        • theamk 2 days ago
          the central directory in this case is google.com (or github.com, or whatever your OIDC provider is)

          If it gets breached, there will be significantly more problems than unauthorized ssh login.

          (and this is a beauty of this compared to something like sshca: there is only one party that you need to trust, and you can choose a party that's unlikely to be breached)

    • gfody 3 days ago
      ssh -k is too enterprise for ̶t̶e̶c̶h̶b̶r̶o̶ ̶s̶t̶a̶r̶t̶u̶p̶s̶ small companies that don't want to setup a kerberos realm
      • mschuster91 3 days ago
        The problem is, even for large companies, Kerberos can be quite the pain. It's fine for fixed position desktop computers physically located on site or at a remote site with a hardware VPN tunnel - that was what it was built for.

        But that is rarely the case any more. People use their own devices (BYOD) that aren't integrated into AD at all, they're using them outside of the office which means there is no VPN available at boot time to deal with token issuance, and the modern "zero trust" crap that uses weird packet filtering black magic instead of proper tun/tap virtual ethernet devices often doesn't play too nice with archaic authentication tools.

        On top of that, implementing support for Kerberos in a Dockerized world is just asking for pain.

      • rcarmo 3 days ago
        If you mention Kerberos to most "security" people these days they will think you're talking about Kubernetes.
        • ziddoap 2 days ago
          I do security (sorry, "security", I guess?), and literally no one thinks this.
        • dcow 3 days ago
          really? that’s a shame

          kerberos is old and clunky but conceptually it got so much right. I’m so sick of the modern idea that i should wake up and babysit my machine through N different oauth dances to log in to all the services i need on a daily basis. once I authenticate once I should be implicitly authenticated everywhere.

          • rcarmo 3 days ago
            That is one of the things that OIDC sorta almost never really managed to pull off consistently.
            • EthanHeilman 3 days ago
              You can do this with OpenPubkey, since the user's client can sign challenges that include the scope of the authentication.

              Doing this on the web requires really careful design, because you can't trust a javascript client sent to you by the party whose scope you want to control. They could just send you a javascript client that approves a different scope. You still need to do something like the OAuth/OIDC origin-based isolation dance.

            • dcow 3 days ago
              And like why not just scrub the `sub` and issue a generic id token (solves idp privacy issues too)… if your service can auth with the claims in the generic token great. if you need more then step up. surely VDCs as a concept have had enough time to mature in the thought space for the industry to be comfortable entertaining this.
            • p_l 3 days ago
              One time I actually implemented that on OIDC... by having the OIDC login page do a kerberos login :D

              this meant that at most you had a short flash on screen for web apps... which is a bit like OIDC/SAML login on windows domains (but I did it with keycloak back then)

              • lq9AJ8yrfs 3 days ago
                Authentik supports this [1] too, kinda. It seems you can set it up to register you based on a bona fide kerberos auth, and logs you in (maybe? would have to check) with kerberos but seems to keep a parallel synchronized authenticator in its own database for OIDC and "modern" auth. Doesn't seem to embed kerberos-isms as "claims" in OIDC either. Might be awesome if it did? Or terrible, depending on how you look at it.

                [1] https://docs.goauthentik.io/docs/users-sources/sources/proto...

                • p_l 3 days ago
                  MIT Kerberos can authenticate using OIDC created token, but in my case I essentially authenticated to Keycloak with HTTP Negotiate with Kerberos, then based on data from LDAP (that was also used by Kerberos) I generated appropriate OIDC token.
    • TacticalCoder 3 days ago
      [dead]
  • traceroute66 2 days ago
    Whilst this is clearly an interesting development, I still prefer SSH CA backed by hardware on both issuer and client side (e.g. Yubikey).

    This is for three reasons:

    First, the SSH-CA+Hardware method does not require call-out to third-party code from SSHD, and thus minimises attack surface and attack vectors.

    Second, the SSH-CA+Hardware method completely prevents key exfiltration or re-use attacks. Yes, I understand that the SSH keys issued by OPKSSH (or similar tools) are short-lived. But they are still sitting there in the .ssh directory on your local host, and hence open to exfiltration or re-use. Yes, it may be a short timeframe, but much damage can easily be done in a short timeframe: for example, exfiltrate the key, log in, install a backdoor, and continue your work via the backdoor.

    Finally, the SSH-CA+Hardware method has fewer moving parts. You don't even need software tools like step-ca. You can do everything you need with the basic ssh-keygen command. Which means, from a sysadmin perspective, and perhaps especially a sysadmin "emergency break-glass" perspective, you do not need to rely on any third-party services as gatekeepers to your systems.

    • e40 2 days ago
      Can you recommend a setup guide?
      • traceroute66 2 days ago
        > Can you recommend a setup guide?

        Depends how far up the chain you want to go (e.g. use step-ca or not), but at the most primitive level you are looking at something along the following lines (based off my rough notes; I might have missed something).

        Note that I have ignored any Yubikey setup considerations here like setting PIN, touch-requirement etc. etc.

        I have also assumed plain Yubikey, not the YubiHSM. The YubiHSM comes with SSH certificate signing functionality "out of the box".

        Client Yubikey:

           - Use Yubikey ykman[1] to generate a PIV key according to your tastes
           - Grab the key in ssh format with `ssh-keygen -D $path_to/libykcs11 -e > $client_key.pub`
        
        Issuer Yubikey:

           - Use Yubikey ykman[1] to generate a PIV key according to your tastes
           - Grab the key in ssh format with `ssh-keygen -D $path_to/libykcs11 -e > $issuer_key.pub` (save this for the next step and also put it into your sshd CA config)
           - Sign with the issuer Yubikey with `ssh-keygen -s $issuer_key.pub -D $path_to/libykcs11 -I $whatever_identity -n $principal_list -V +$validity_period $client_key.pub`
        
        (libykcs11 is the Yubikey PKCS#11 library; it ships with yubico-piv-tool [2])

        [1] https://docs.yubico.com/software/yubikey/tools/ykman/PIV_Com... [2] https://developers.yubico.com/yubico-piv-tool/Releases/
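
        On the server side, trusting the issuer key from the steps above is a single sshd_config directive (paths here are illustrative):

            # /etc/ssh/sshd_config
            TrustedUserCAKeys /etc/ssh/issuer_key.pub
            # Optionally map certificate principals to local users:
            AuthorizedPrincipalsFile /etc/ssh/auth_principals/%u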

        ==== Edit to add links to various more verbose discussions on the subject (in no particular order):

            - https://liw.fi/sshca/
            - https://goteleport.com/blog/how-to-configure-ssh-certificate-based-authentication/
            - https://jamesog.net/2023/03/03/yubikey-as-an-ssh-certificate-authority/
            - https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/deployment_guide/sec-ssh_certificate_pkcs_11_token
        
        
        There are also some verbose discussions on the possibility of doing this with FIDO2. Although note that the native version of ssh on Apple OS X does not support FIDO2; therefore, if you want native Apple support you are best sticking with the PIV method instead.

            - https://developers.yubico.com/SSH/Securing_git_with_SSH_and_FIDO2.html
            - https://medium.com/@harrishcluo/yubikey-ssh-git-super-secure-your-development-workflow-2-2-1899379fb882
            - https://blog.millerti.me/2021/05/16/strengthen-github-ssh-access-with-fido2s-pin-support/
  • tptacek 3 days ago
    This is neat, and more people should be doing things like this. For what it's worth, we use (and like) Teleport, which does certificate-based SSH authentication; an SSO auth gets you a short-lived certificate. It also has the benefits of access control and (most importantly) audit logs; a generic reliable audit log for SSH sessions is a powerful tool to have for compliance stuff, since it transitively gives you an audit log for your CLI tools as well.
    • EthanHeilman 3 days ago
      Are audit logs and access control the main features that would convince you to use Teleport vs something like opkssh? How important is the VPN functionality that lets you get packets to private IPs?
      • tptacek 3 days ago
        I like that it's written in a memory-safe language too, but the killer feature is definitely the audit logs.

        We keep all this stuff behind WireGuard, which is what I would recommend everybody do.

        • atonse 3 days ago
          We use Tailscale for our WireGuard, which further integrates with SSO. So it’s a double whammy.

          But Tailscale ssh has the identity stuff built in too.

    • antman 3 days ago
      The open source version supports only GitHub SSO, is that your provider?
  • bradgessler 2 days ago
    I started building an alternative to SSH at https://terminalwire.com that I think is more suitable for one-off commands run on a developer workstation against a SaaS. In more concrete terms, think of the stripe, heroku, and GitHub CLIs.

    It’s similar to SSH in that it streams stdio from the server to a thin client, but that’s where the similarities end. It has additional commands, like opening a browser to a URL and setting “cookies” on the client terminal from the server.

    When all of those commands are put together, you get something like https://github.com/terminalwire/demodx/blob/main/app/termina... that can open the users browser to an SSO, authorize the CLI, then set a token/nonce/whatever as a “cookie” in the users terminal so they can authenticate and run commands against the SaaS.

    My intention isn’t to replace SSH—it’s still the best protocol for a lot of things, but I have found it cumbersome to use it to build CLIs for SaaS, which is why I built Terminalwire.

    • EthanHeilman 2 days ago
      That's awesome. The web needs more terminal interfaces.

      What did you use to build that slick explainer video?

      • bradgessler 2 days ago
        Screen Studio for macOS
        • EthanHeilman 2 days ago
          Do you recommend it? How easy is it to edit videos and add overlays and images?

          I've been looking for something to record demos of opkssh. What I have now isn't cutting it.

          • bradgessler 2 days ago
            You should try it. It's good for fast stuff, but if you want to have tons of control over editing, you probably wouldn't like it.
    • teruakohatu 2 days ago
      Interesting. Good luck with it.
  • _hyn3 3 days ago
    How does this compare to Userify's plain-jane SSH key technique?

    That agent (Python, single-file https://github.com/userify/shim) sticks with decentralized regular keys and only centralizes the control plane, which seems to be more reliable in case your auth server goes offline - you can still login to your servers (obviously no new users or updates to existing keys). It just automates user and sudo configuration using things like adduser and /etc/sudoers.d. (It also actively kills user sessions and removes the user account when they're deleted, which is great for when you're walking someone out in case they have cron-jobs or a long-running tmux session with a revenge script.)

    This project looks powerful but with a lot of heavy dependencies, which seem like an increased surface area (like Userify's Active Directory integration, but at least that's optional)

    • nullc 3 days ago
      I believe the idea of this scheme is so that the NSA tailored access operations staff embedded in organizations such as google and cloudflare can authorize access without having to individually intercept each server (or jumphost) you own.

      You benefit from more reliable shipping delivery times, no more mysterious city-of-industry->ftmeade->sanfrancisco detours or hardware that fails prematurely due to uncleaned flux or whiskers from implant installations.

      • EthanHeilman 3 days ago
        If you really believe that then help me get the cosigner working with opkssh so even if Google is fully malicious they can't get ssh access.
  • EthanHeilman 3 days ago
    Author of the blog post and main opkssh contributor here, happy to answer any questions.
    • kbolino 3 days ago
      This is very interesting! It looks like there's a config file [1] to set up username <- email,issuer associations. It looks like multiple identities (from different IdPs even) can access the same username, which is useful. There's also a config file for allowed IdPs [2] specifying the expected client IDs (btw, the docs here say all fixed duration options are for "24 hours" even 48h and 1week). This does seem to impose the limitation of exactly one client ID per IdP, which could complicate rotating client IDs.

      Walking this through, given that OpenID Connect is specifically mentioned vs. bare OAuth2, I assume the ID token signatures are themselves verified by looking up ${ISSUER_URI}/.well-known/openid-configuration and following the jwks_uri found there. Is the JWKS response cached? Can it be pre-seeded and/or replaced with an offline copy?

      [1]: https://github.com/openpubkey/opkssh/blob/main/README.md#etc...

      [2]: https://github.com/openpubkey/opkssh/blob/main/README.md#etc...
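      For concreteness, that verification path can be sketched offline; the sample JWKS below is abbreviated and hypothetical, and a real verifier would fetch both documents over HTTPS:

```python
import json

def discovery_url(issuer: str) -> str:
    # Per OIDC Discovery, the provider's metadata lives at this well-known
    # path; the "jwks_uri" field of that document points at the signing keys.
    return issuer.rstrip("/") + "/.well-known/openid-configuration"

# Abbreviated, hypothetical JWKS response for illustration only
sample_jwks = json.loads(
    '{"keys": [{"kty": "RSA", "kid": "abc123", "use": "sig", "e": "AQAB", "n": "..."}]}'
)

def key_for_kid(jwks: dict, kid: str):
    # The ID token's JOSE header carries a "kid"; pick the matching JWKS entry
    return next((k for k in jwks["keys"] if k.get("kid") == kid), None)
```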

      • EthanHeilman 3 days ago
        > This does seem to impose the limitation of exactly one client ID per IdP, which could complicate rotating client IDs.

        Thanks for asking this. I don't see any reason why you couldn't use the same IdP with two different Client-IDs. I haven't tested this, but if it doesn't work currently, I'd like to add it as a feature.

        Your description of the protocol is spot on. OpenPubkey currently only works with ID Tokens.

        >Is the JWKS response cached? Can it be pre-seeded and/or replaced with an offline copy?

        Currently the JWKS response is not cached, but caching support is a feature we want to add.

        What is the interest in a pre-seeded copy? Availability concerns?

        • kbolino 3 days ago
          I can think of two different uses for multiple client IDs with the same IdP: rotation as I mentioned already (e.g. if client secret leaks, although the secret is not used here), and for when there are multiple domains in use but the IdP clients are configured for internal use only. Both are pretty niche though. I think most IdPs will let you keep two client IDs active at the same time, so the rotation use case might already be covered.

          As for pre-seeded/offline JWKS, yeah the biggest concern is around availability. The pre-seeded case would handle fresh VM setups before networking might be fully configured (though other auth methods as fallback would be good enough in most cases, I think). Completely offline JWKS would also be useful for machines with no outbound connectivity. Both use cases are again pretty niche though.

          • EthanHeilman 3 days ago
            > As for pre-seeded/offline JWKS, yeah the biggest concern is around availability. The pre-seeded case would handle fresh VM setups before networking might be fully configured (though other auth methods as fallback would be good enough in most cases, I think).

            I've been thinking about this as a breakglass problem: how do you get into your server if your IDP is offline or you lose internet connectivity? My recommendation has been to keep a standard public-key SSH account as breakglass.

            A pre-seeded JWKS or alternative JWKS would let you have the same policy controls but allow you to create valid keys in extreme circumstances. I really like this.

            I created an issue to track this. Let me know if you want to do the implementation: https://github.com/openpubkey/opkssh/issues/44
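            A minimal sketch of what pre-seeding might look like (hypothetical helper, not opkssh code): prefer a live JWKS fetch, fall back to the on-disk copy when the network is down, and refresh the seed on every successful fetch.

```python
import json
import pathlib

def load_jwks(fetch, seed_path):
    # Hypothetical helper: try a live JWKS fetch; on network failure fall
    # back to a pre-seeded copy on disk. Each successful fetch refreshes
    # the seed so the offline copy stays reasonably current.
    seed = pathlib.Path(seed_path)
    try:
        jwks = fetch()
        seed.write_text(json.dumps(jwks))
        return jwks
    except OSError:
        return json.loads(seed.read_text())
```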

    • ale42 3 days ago
      The idea of using SSO for SSH sounds interesting for some applications, but does the login process really need to be browser-based? Can't we just have the login prompt in the terminal (without needing to run a headless browser behind-the-scenes of course)? I'm often working on headless machines, and other devices that definitely don't have a browser installed, and it would be pretty painful to use (not to mention that when working in a terminal, I find very annoying to have to switch to a browser, or to any other window by the way).
      • EthanHeilman 3 days ago
        All the major OpenID Providers want you to use a browser so that users aren't exposing their raw credentials, like passwords, to an application. We did have an experimental version of this working in a terminal for integration tests a long time ago, but Google views that as malicious behavior and tries to prevent you from doing it. It turns into an arms race with the IdPs.

        The good news is you only have to login through the browser once in the morning and then you can use the generated ssh key all day long.

        • fc417fc802 3 days ago
          The browser is also an application. Of course I somehow doubt ELinks will be working with the Google auth page anytime soon.

          It seems that the goal is to minimize the number of applications into which users view entering their credentials as "normal". The obvious missing piece then is some standardized FOSS project to handle CLI login.

          Of course that is also unlikely to go over well since what the centralized providers actually want (which is fundamentally incompatible with user freedoms) is attestation. Arbitrarily blessing a few browser implementations that are extremely difficult to compile on your own is just a roundabout method to approximate that (IMO anyway).

          Edit: It seems like a mistake to me to conflate anti-phishing efforts and IdP. Anti-phishing should be via hardware tokens or TOTP or whatever. IdP should be about large corporations managing user accounts or about individuals gaining convenience, including by serving as an adapter so that the latest popular standard can be used without each downstream service needing to adopt it.

          • linkregister 3 days ago
            I wrote a proprietary variation of this project for a large organization and used a similar approach to the author. The primary reason I used the browser for authentication was to take advantage of FIDO U2F (Yubikeys). Openssh does support U2F authentication, but it requires the user to sign a public key with the hardware device, forcing the organization to again store users' public keys.

            The users disliked the redirects to and from the MFA provider just to log in and receive a signed SSH certificate, but there was no practical way to perform logins in the terminal without creating a whole new protocol and expanding the timeline of the project.

            • fc417fc802 2 days ago
              > without creating a whole new protocol

              Doesn't it "just" require a CLI client that can speak both FIDO U2F and whatever the MFA provider uses? But yeah point taken.

              Even if Google and Microsoft don't support it could a FOSS CLI client capable of speaking all the relevant protocols have resolved your issue? With OpenPubkey gaining support it seems to me that a service could potentially support exclusively that and in doing so cover all necessary auth methods simultaneously, at least assuming the provider is comfortable self hosting an IdP solution.

              • linkregister 2 days ago
                Indeed I would have preferred to interact with the U2F system libraries, create a challenge-response protocol for the registered U2F device to authenticate, and open source the effort. But I was not skilled enough at pitching the expanded scope. It would have been a more interesting project though!

                If I take a sabbatical then writing a client like that sounds interesting. Part of it would be inventing a new MFA provider, as the existing MFA APIs don't expose a "sign with U2F device" authentication method, as far as I know.

                • fc417fc802 2 days ago
                  At least pocket id and keycloak support fido2/webauthn; I'm sure there are others. I'm not sure how involved a CLI interface for that flow would be though. (Maybe someone has already done it? If so I didn't immediately come across it.)

                  Keycloak for example provides an API for the password flow. So webauthn via API definitely isn't too far off of what it already provides.

    • notTooFarGone 3 days ago
      We are currently struggling with the exact ergonomics of user friendly and secure ssh and I just wanted to say you helped big time here!

      Will test this for my current use-case and hopefully contribute in the future!

      • EthanHeilman 3 days ago
        Excellent, feel free to email me at ethan.r.heilman[at]gmail[]com. Happy to help in any way.
    • apitman 2 days ago
      1. Can I use my own indiehosted OP?

      2. Looks like this streamlines the server trusting the client. Does it do anything for the client trusting the server? I feel gross saying it, but I almost wonder if we should be moving towards some sort of TLS-based protocol for remote login rather than doubling down on SSH, due to the assurances provided by CAs.

    • deng 3 days ago
      What would you say is the advantage of this approach over integrating OIDC into a separate service, like what Ubuntu is trying with authd?

      (see https://ubuntu.com/blog/authd-oidc-authentication-for-ubuntu...)

      • kbolino 3 days ago
        Some key differences I observe:

        OPKSSH covers only logging in through SSH to an existing user account, while authd covers all forms of login (console, graphical, SSH) and user/group management. The latter makes it much more of a full AAA product rather than just a new way to login with SSH. This means it's a deeper investment, with implications for network file systems (as covered in the docs), while OPKSSH can be added on top of just about any existing infrastructure.

        In terms of process, authd uses the Device Authorization Flow to handle logins, which is more vulnerable to phishing. It also requires both sides to have online access to the IdP, whereas the ID token-based approach of OPKSSH allows the authenticating side to have no (*) or limited outbound connectivity. Also, authd seems to support only Microsoft and Google as IdPs right now, whereas OPKSSH (since it builds on OpenPubkey) supports any OpenID Connect IdP.

        * = In theory, at least; the current implementation doesn't fully deliver on this, though the one online resource it does need is fairly static and quite cacheable

      • EthanHeilman 3 days ago
        Thanks for that link, I hadn't heard about this. It looks really cool and I am glad to see ubuntu doing this. I should send ubuntu an email.

        Sadly I can not offer an opinion as I don't know how authd works. I intend to find out.

  • znpy 3 days ago
    People could already cook up something similar using AuthorizedKeysCommand and similar.

    As long as you can upload some kind of key to an external system (eg: short-lived ssh certificate) you can then query that certificate via AuthorizedKeysCommand.

    Edit: just saw the comment by the author of the post (https://news.ycombinator.com/item?id=43471793). Yep, it's AuthorizedKeysCommand.

    Good job!

    • EthanHeilman 3 days ago
      I gotta say, I love AuthorizedKeysCommand, it is the most clever configuration option I've seen in a protocol!

      If you just try to stuff an ID Token into an SSH key and use AuthorizedKeysCommand, you introduce replay attacks: the SSH server can pull your ID Token out, stuff it into another SSH key, and replay it to other SSH servers to impersonate you. Opkssh doesn't have this weakness because it uses OpenPubkey [0].

      The real trick here is OpenPubkey. OpenID Connect gives you ID Tokens which don't contain public keys. OpenPubkey tricks your OpenID Connect IDP into including a public key you choose in the ID Token it issues. This turns ID Tokens into certificates without requiring any changes to the IDP. This makes ID Tokens safe to use in SSH.

      [0]: https://github.com/openpubkey/openpubkey/
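      A toy sketch of the trick as described in the OpenPubkey paper: the client commits to its public key in the OIDC nonce, so the IdP's signature over the ID Token also covers the key. The field names and encoding below are illustrative, not the actual wire format.

```python
import base64
import hashlib
import json
import os

def commitment_nonce(user_pubkey: bytes, alg: str = "ES256"):
    # The client sends this value as the OIDC "nonce". Because the IdP
    # signs the ID Token containing the nonce, it unknowingly certifies
    # the user's public key. "rz" is client-chosen randomness so the
    # nonce stays unpredictable, as nonces must be.
    rz = os.urandom(32)
    payload = json.dumps(
        {
            "alg": alg,
            "upk": base64.urlsafe_b64encode(user_pubkey).decode(),
            "rz": base64.urlsafe_b64encode(rz).decode(),
        },
        sort_keys=True,
    ).encode()
    return hashlib.sha256(payload).hexdigest(), rz
```

Later, the holder reveals the public key and randomness so any verifier can recompute the hash and check it against the nonce inside the IdP-signed token.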

      • ryao 3 days ago
        OpenSSH is full of clever ideas.
      • aftbit 3 days ago
        How does this prevent replay attacks, either by a malicious SSH server proxying the auth flow from another machine, or by a malicious server pulling out the signed IdP claims and passing them to another OpenID Connect target?
        • EthanHeilman 3 days ago
          > a malicious server pulling out the signed IdP claims and passing them to another OpenID Connect target

          The signed IdP claims aren't a secret. In OpenPubkey, they function like a certificate for the user's public key. This makes them useless for replay attacks in opkssh.

          The signed IdP claims are also scoped to a Client-ID specific for opkssh, so non-opkssh OpenID Connect services will reject them.
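          That Client-ID scoping is just standard OIDC audience validation; a minimal sketch:

```python
def audience_ok(id_token_claims: dict, expected_client_id: str) -> bool:
    # Per the OIDC spec, "aud" may be a single string or a list of
    # audiences; a verifier rejects tokens not scoped to its own client ID.
    aud = id_token_claims.get("aud")
    if isinstance(aud, list):
        return expected_client_id in aud
    return aud == expected_client_id
```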

          • aftbit 3 days ago
            Sure, but couldn't a malicious SSH server use this key to proxy a connection to another opkssh server?
            • EthanHeilman 3 days ago
              No, the SSH server only learns your public key and a signature specific to the SSH server generated in the SSH handshake. A malicious server would need your private key to successfully authenticate to another SSH server.
  • samcat116 3 days ago
    I can't tell the benefits of this vs running an SSH CA that supports OIDC. In that scenario, the server just needs to trust the CAs key, rather than running some sort of verifier.
    • EthanHeilman 3 days ago
      The benefits of this is that you don't have the attack surface of an SSH CA. If you do this with an SSH CA that supports OIDC, if either the IDP or the SSH CA are compromised then security is lost.

      With OpenPubkey, and by extension opkssh, your IDP functions like the SSH CA by signing the public key that you make the SSH connection with. Thus, you have one fewer trusted party and you don't have to maintain and secure an SSH CA.

      Beyond this, rotating SSH CAs is hard because you need to put the public key of the SSH CA on each SSH server, and SSH certs don't support intermediate certificates. Thus if your SSH CA is hacked, you need to update the CA public key on all your servers and hope you don't miss any. OpenID Connect IDPs rotate their public keys often, and if they get hacked they can immediately rotate their public keys without any update to relying servers.

      • c45y 3 days ago
        You can have multiple trusted CAs which I've found to make rotation a non issue with tooling like Ansible etc.

        New CA is minted, public key is added to the accepted list, client signing start using the new CA and you remove the old after a short while.

        If missing servers is a common problem it sounds like there are some other fundamental problems outside just authenticated user sessions.
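        For concreteness, the overlap rotation might look like this (key material elided; comments and names illustrative):

```
# /etc/ssh/sshd_config
TrustedUserCAKeys /etc/ssh/user_ca_keys

# /etc/ssh/user_ca_keys -- old and new CA keys are both trusted during the
# overlap window; delete the old line once certs signed by it have expired
ssh-ed25519 AAAA<old-ca-public-key> old-ssh-ca
ssh-ed25519 AAAA<new-ca-public-key> new-ssh-ca
```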

        • EthanHeilman 3 days ago
          That's smart! You could probably automate that with a cron job that pulls the latest CA public keys, so servers automatically rotate CA public keys every few days.

          > If missing servers is a common problem it sounds like there are some other fundamental problems outside just authenticated user sessions.

          On one hand yes, on the other hand that is just the current reality in large enterprises. Consider this quote from Tatu Ylonen's (Inventor of SSH) recent paper [0]

          “In many organizations – even very security-conscious organizations – there are many times more obsolete authorized keys than they have employees. Worse, authorized keys generally grant command-line shell access, which in itself is often considered privileged. We have found that in many organizations about 10% of the authorized keys grant root or administrator access. SSH keys never expire.”

          If authorized keys get missed, servers are going to get missed.

          opkssh was partially inspired by the challenges presented in this paper.

          [0]: Challenges in Managing SSH Keys – and a Call for Solutions https://ylonen.org/papers/ssh-key-challenges.pdf

    • johnmaguire 3 days ago
      Years ago, I tried building something like this using ProxyCommand to try to fetch the SSH certificate "just-in-time" without having to run a command first, but unfortunately the ordering of OpenSSH was such that ProxyCommand ran after checking the disk for SSH certs/keys. :(
      • EthanHeilman 3 days ago
        I got this working at one point.

        The trick is to use your SSH config to intercept SSH connections so they go to a local SSH server; this triggers ProxyCommand and lets you create the cert, then forward those packets into an outgoing SSH connection you don't intercept.

        SSH --> Local SSH Server --> ProxyCommand (create cert) --> SSH --> Remote SSH Server

      • Foxboron 2 days ago
        You could use a `Match host ... exec` block instead of `ProxyCommand`. I believe it will run before ssh checks for key files on disk.
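        A sketch of what that might look like (the guard command and key path are illustrative; `Match` criteria are evaluated while ssh parses its config, before authentication starts):

```
# ~/.ssh/config -- illustrative sketch of the Match-based approach.
# The exec criterion runs its command during config parsing; here it
# (re)creates the key if the guard fails to find one. The key path is
# hypothetical -- use wherever your login tool actually writes its key.
Match host myserver.example.com exec "test -f ~/.ssh/opk_key || opkssh login"
    IdentityFile ~/.ssh/opk_key
```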
        • johnmaguire 2 days ago
          Hey, thanks for that! I didn't come across that back then. Looks intriguing.
  • cjcampbell 1 day ago
    Definitely interested to kick the tires and compare to some of the other solutions out there. As others mentioned, you lose some benefits of an OIDC-integrated SSH CA, but that’s a reasonable trade off in order to reduce complexity for many use cases.

    A missing piece of the puzzle for me is general OSS tooling to provision the Linux OS users. While it works in some environments to grant multiple parties access to the same underlying OS users, it's necessary (or at least easier) in others to have users access named user accounts.

    Step-ca makes good use of NSS/PAM to make this seamless when attached to a smallstep account (which can be backed by an IdP and provisioned through SCIM). While I could stand up LDAP to accommodate this use case, I’d love a lightweight way for a couple of servers to source users directly from the most popular IdP APIs. I get by with a script that syncs a group every N minutes. And while that’s more than sufficient for a couple of these use cases, I’ll own up to wanting the shiny thing and the same elegance of step-ca’s tooling.

  • password4321 3 days ago
    Semi-related OpenSSH fork supporting auth with X.509 certificate CA: https://roumenpetrov.info/secsh/

    And a walkthrough (2020): http://tech.ciges.net/blog/openssh-with-x509-certificates-ho...

  • Aeolun 3 days ago
    I think it’s kinda funny that a standard to return a public key in a token, and a server side auth binary that uses that to log you into SSH, are presented here as something groundbreaking.

    I’m not trying to downplay actually doing it, but it’s been possible since openid connect was invented.

    • EthanHeilman 3 days ago
      > It’s been possible since openid connect was invented.

      It has been possible since OpenID Connect was invented, but figuring out how to get a public key into an ID Token, without having to update IDPs or change the protocol in any way, was not known until we published OpenPubkey[0]. OpenID Connect was not designed to do this.

      Figuring out how to smuggle this additional information into OpenSSH without requiring code changes or adding an SSH CA required a significant amount of work. I could be wrong, but as far as I am aware the combined use of smuggling data in SSH public keys with AuthorizedKeysCommand to validate that data was not done until opkssh.

      This was three years of careful work of reading through OpenID Connect specs, SSH RFCs, reading OpenSSH source code to get this to be fully compatible with existing IDPs and OpenSSH.

      [0]: OpenPubkey: Augmenting OpenID Connect with User held Signing Keys (2023) https://eprint.iacr.org/2023/296

      • Aeolun 2 days ago
        I didn’t mean to downplay the amount of work involved. It’s just that the ‘solution’ to problems like these seems very simple once that work has been put in.

        It’s just that nobody really wants to (OpenID connect became a lot easier to understand when I read the spec, but I never got anywhere close to enjoying it), hence, we didn’t have this until now.

        • EthanHeilman 2 days ago
          Completely agree. The goal is simple, plain and obvious, but the tools and protocols make it tricky to pull off.
    • motoboi 3 days ago
      Openid is just a bunch of http requests and browser redirects, if you think about it.
  • bzmrgonz 3 days ago
    has this project been audited tho? Because it seems to me we are shifting the authentication process to opkssh, so the question then becomes, how secure is the code-build?
  • somat 3 days ago
    Fair enough, I can see how this would be useful but I have to admit I was hoping it would be the opposite, how to log into a web page with a ssh key.
  • atonse 3 days ago
    How does this compare to Tailscale SSH? Will the two eventually be combined in some way?

    I think tailscale SSH requires you to run their daemon on the server, correct?

    • EthanHeilman 3 days ago
      When I looked at this previously, Tailscale required two trusted parties. I haven't looked at the Tailscale protocol details in two years, so maybe it has changed.
  • dizhn 2 days ago
    I like that they used (and abused) standard ssh server features to implement this.

    Is anybody aware of something like this that can be automated for things like ansible or see a way to use this there?

    • EthanHeilman 2 days ago
      > Is anybody aware of something like this that can be automated for things like ansible

      Doesn't ansible already get you all of this? What is the feature gap you are looking to fill?

      That said, you can definitely use opkssh in automation. OpenPubkey already supports the github-action and gitlab-CI OpenID Providers, so in theory you could use opkssh to let a github-action or gitlab-CI workflow ssh into servers under that workflow's identity. That is, have a policy on your SSH server that allows only "workflows from repo X triggered by a merge into main ...".

      Additionally you can always do machine identity using OpenID Connect by running your own JWKS server.

      While this works in OpenPubkey, we haven't added it to opkssh yet, but we have an issue for it. If you want support for this, add your use case as a comment:

      https://github.com/openpubkey/opkssh/issues/51

  • godelski 3 days ago
    I'm not really sure I like SSO and I'm not convinced we should expand this technology. I'm not a security person but most of my concerns aren't actually security either.

    My big concern is how we centralize accounts. Not just data access, but like how EVERYTHING is tied to your email. Lose access? You're fucked. Worse, it's very very hard to get support. I'm sure everyone here is well aware of the many horror stories.

    Personally I had a bit of a scare when I switched from Android to iPhone. My first iPhone needed to be replaced within 2 weeks, and I hadn't gotten everything copied over; not all my 2FAs had transferred to the new phone. Several had to be reset because adding a new 2FA OTP voided the old ones. And since for some reason Bitwarden hadn't synced all my notes, I had to completely fall back on a few. Which made me glad I didn't force 2FA on all accounts (this is a big fail!!!)

    Or even this week, Bitwarden failed on me to provide security keys to sites. The popup would appear but the site had already processed the rejection. Took a few restarts before it was fixed.

    The problem I'm seeing here is if we become so dependent on single accounts then this creates a bigger problem than the illness we're trying to solve. While 90% of the time things are better when things go wrong they go nuclear! That's worse!

    Yeah, I know with SSO you don't have to use Google/Apple and you can be your own authority. But most people aren't going to do that. Hell, many sites don't even offer anything except Google and Apple! So really we're just setting up a ticking time bomb. It'll be fine for 99% of people 99% of the time, but for the other cases we're making things catastrophic. There's billions of people online so even 1% is a huge number.

    Even worse, do we trust these companies will always be around? In your country? To give you proper notice? Do you think you'll even remember everything you need to change? These decisions can be made for you. Even Google accidentally deletes accounts.

    So what I really want to see is a system that is more distributed. In the way that we have multiple secure entries. Most methods today are in the form of add 2FA of their choosing and suggest turning off fallback, which is more secure but can fuck you over if it fails. So if we go SSO then this shouldn't replace keys, like the article suggests. Keys are a backup. There should be more too! But then you need to make people occasionally use them to make sure they have that backup. And yes, I understand the more doors there are the bigger attack surface but literally I'm just arguing to not put all our eggs in one basket

    • EthanHeilman 3 days ago
      I work on opkssh and I agree with everything you have just said.

      The value of opkssh makes sense in an environment in which you already have OpenID Connect as the foundation for identity in your system.

      OpenPubkey[0], the protocol opkssh is built on, supports cosigners, which parallel identity attestations. OpenPubkey is currently designed to use cosigners purely for security, i.e., to remove the IDP as a single point of compromise.

      OpenPubkey is built on JSON Web Signatures, and JSON Web Signatures can support any number of signers. One could easily extend OpenPubkey to something like: 0x1234 is Alice's public key if it is signed by 7 out of 10 identity cosigners.

      What you are describing is the same dream I have: decentralized, secure, human-meaningful names. This is hard to build [1] and you have to start somewhere, so I started with the existing identity provider infrastructure, but that's just the beginning. If you are interested in building this future, come work on https://github.com/openpubkey/openpubkey/

      [0] OpenPubkey: Augmenting OpenID Connect with User held Signing Keys https://eprint.iacr.org/2023/296

      [1] Zooko's triangle is a trilemma of three properties that some people consider desirable for names of participants in a network protocol https://en.wikipedia.org/wiki/Zooko%27s_triangle
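      That 7-of-10 idea is a plain threshold policy; a toy sketch of the policy check (not the OpenPubkey wire format, and cosigner names are hypothetical):

```python
def threshold_met(attesting: set, trusted: set, k: int) -> bool:
    # Accept a public-key binding only if at least k of the trusted
    # cosigners attested to it; attestations from unknown parties
    # are ignored by the intersection.
    return len(attesting & trusted) >= k

# Hypothetical trust anchor list for a 7-of-10 policy
trusted_cosigners = {f"cosigner-{i}" for i in range(10)}
```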

      • godelski 3 days ago
        Thanks for the response! And glad to hear I'm not just going crazy here hahaha.

        I'm glad to hear that the protocol supports cosigners. (Next part is definitely described poorly) Is there going to be expansion so that there are "super authorities"? I'm thinking something like how tailscale's taillock works. So there are authorities that can allow access but super-authorities that allow for the most sensitive operations.

        I am interested but like many, have other priorities. Unfortunately I think for now I'll be off on the sidelines, but I do like to learn more and I appreciate your explanations.

        • EthanHeilman 3 days ago
          Super authorities are a neat idea. It would be nice to have something like 2of2 permissions where two parties have to both ok a change for policy to accept it.
  • kipz 3 days ago
    FWIW, I think this is really cool! I'm going to give it a spin!
  • naikrovek 3 days ago
    I swear to god people make things more complex solely because they plan to yank these things out from under us later.
  • Hizonner 3 days ago
    [Note on edit: this is wrong]

    ... apparently in the form of a whole new implementation.

    Not realistic. If it's not in OpenSSH, it effectively doesn't exist.

    • EthanHeilman 3 days ago
      Author of the blog here and main opkssh contributor. The title is wrong but this is OpenSSH and not a whole new implementation.

      opkssh uses the OpenSSH AuthorizedKeysCommand configuration option like AWS instance-connect to add OpenID Connect validation to OpenSSH authentication.

      Running

      ```
      opkssh login
      ```

      generates a valid ssh key in `~/.ssh/`. Then run bog-standard ssh or sftp:

      ```
      ssh user@hostname
      ```

      ssh will pull this key from `~/.ssh/` and send it to sshd running on your server. If this key isn't in an AuthorizedKeys file, sshd will send it to the AuthorizedKeysCommand, which, if configured to be `opkssh`, will check your OpenID Connect credentials.
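      The server-side half of that flow is a couple of lines of sshd_config. This is a sketch; consult the opkssh README for the exact install path and token arguments:

```
# /etc/ssh/sshd_config -- sshd hands the username (%u), offered key (%k),
# and key type (%t) to opkssh, which answers with an authorized_keys line
# only if the OpenID Connect checks pass
AuthorizedKeysCommand /usr/local/bin/opkssh verify %u %k %t
AuthorizedKeysCommandUser opksshuser
```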

      • mdaniel 3 days ago
        I'm surprised it defaults to writing out key material into the filesystem[1] when SSH Agent has existed for quite a while. This use case seems especially relevant to sticking them in the agent given that (IIUC) these are short-lived certs anyway, so if your agent bounced you'd just get a fresh one without drama

        I do see <https://github.com/openpubkey/opkssh/issues/6#issuecomment-2...> so I'm glad it's conceptually on the radar, I'm just saying I'm surprised it wasn't part of Cloudflare's best practices already

        1: https://github.com/openpubkey/opkssh/blob/v0.3.0/commands/lo...

        • EthanHeilman 3 days ago
          Excellent point. SSH agent is a feature I've wanted to build for a while now, but there were higher priority features. It will probably be included in the next major release. Would you be up for submitting it as a PR?
          • mdaniel 3 days ago
            If I were still using SSH, maybe[1] but I'm thankful that I haven't used SSH in several years. I guess I also dodged a bullet by getting out before the Vault rug pull, since that would have made my life painful

            1: although I don't think I'm the target audience for trail-blazing SSH auth; am a much, much bigger fan of just using X509 CA auth using short-term certs; it's much easier to reason about IMHO

            • ale42 3 days ago
              Out of curiosity, what are you using now? Or do you mean you don't need remote terminals any more because you work on other stuff?
              • mdaniel 3 days ago
                All SSM, all the way. I even gravely considered using their IAM Anywhere capabilities to jump onto Azure or GCP instances, before that project was overcome by events

                I'm cheating you a little bit, though, because for the most part once a VM gets kubelet on it, I'm off to the races. Only in very, very, very bad circumstances does getting on the actual Node help me

                I also recently have started using <https://docs.aws.amazon.com/systems-manager/latest/userguide...> to even get sequestered cluster access via $(aws ssm start-session --document-name AWS-StartPortForwardingSessionToRemoteHost) although the "bootstrapping" problem of finding the instance-id to feed into --target is a pain. I wish they offered https://docs.aws.amazon.com/systems-manager/latest/userguide... in the spirit of "yeah, yeah, just pick one" versus making me run $(aws ec2 describe-instances --filter | head -n1) type thing

            • EthanHeilman 3 days ago
              OpenPubkey does support X.509 using an X.509 extension.
          • linkregister 3 days ago
            That sounds like an interesting feature to write. Is there an open issue for it?
      • zaat 3 days ago
        Just to make sure, opkssh supports OpenID for sftp as well?
    • dugite-code 3 days ago
      From the article "OPKSSH does not require any code changes to the SSH server or client."

      Looks like this is a sidecar application. So potentially very useful, also potentially very brittle.

    • asjfkdlf 3 days ago
      It states that it doesn’t require major changes because it all happens under the SSH protocol. A new program is needed on the client to sign in and you can already run a custom program on the server to authorize the key.
      • Hizonner 3 days ago
        I stand corrected. Still not sure I'd want to expose my SSH infrastructure to the massive kludge tower that is OpenID, but it not being its own implementation of the actual SSH protocol is a huge plus.
        • dugite-code 3 days ago
          I imagine it wouldn't be for the system admins to use, it's for all the other users who can use terminal applications but always treat ssh keys as a nuisance and try to avoid them as much as possible.
      • rcarmo 3 days ago
        Lost me at "A new program is needed on the client". Completely.
  • notorandit 3 days ago
    Yet another tool on top of other ones.

    Let's hope no backdoor will be added there.

  • RKFADU_UOFCCLEL 3 days ago
    [flagged]
  • ciaovietnam 3 days ago
    Now I have to trust OpenPubkey, hoping it won't get hacked. No way will I add this to my servers; I will keep using long-lived public keys.
    • bayindirh 3 days ago
      If you want to roll your own, here's another implementation which people already use, with their own OpenID Connect infrastructures.

      You can deploy and use in a completely closed system.

      https://github.com/EOSC-synergy/ssh-oidc

      • EthanHeilman 3 days ago
        That's neat, I've added it to my reading list.
    • EthanHeilman 3 days ago
      OpenPubkey is software and open source. All software has vulnerabilities, but we aren't a service or SaaS or anything.
  • jmclnx 3 days ago
    Looks like yet another patch to OpenSSH that the OpenBSD people will stay away from as far as they can. What can go wrong?

    At least that is my belief, do people here think my speculation is correct ? I checked https://undeadly.org and no mention of anything like this.

    FWIW, I will never use this.

    • EthanHeilman 3 days ago
      It is not a patch. It doesn't require any code changes to OpenSSH.

      opkssh uses the AuthorizedKeysCommand field in sshd_config. OpenBSD added this config field to OpenSSH to enable people to do stuff like opkssh or instance-connect without needing code patches. OpenSSH is really smart about enabling functionality like this via the config.