MTOTP: Wouldn't it be nice if you were the 2FA device?

(github.com)

74 points | by brna-2 12 hours ago

15 comments

  • perching_aix 18 minutes ago
    I've been pondering something like this for a while; nice to see someone who didn't give up after seeing how demanding actual crypto is, like I did.

    I now wonder if it's possible to store a random value in one's head without it being eavesdroppable. Humans don't really do random, but it's essential for auth.

  • crote 11 hours ago
    What makes this 2FA? It's "something you know, plus mental labor", which makes it a password.

    2FA is "something you have" (or ".. you are", for biometrics): it is supposed to prove that you currently physically possess the single copy of a token. The textbook example is a TOTP stored in a Yubikey.

    Granted, this has been watered down a lot by the way-too-common practice of storing TOTP secrets in password managers, but that's how it is supposed to work.

    Does your mTOTP prove you own the single copy? No, you could trivially tell someone else the secret key. Does it prove that you currently own it? No, you can pre-calculate a verification token for future use.

    I still think it is a very neat idea on paper, but I'm not quite seeing the added value. The obvious next step is to do all the math in client-side code and just have the user enter the secret - doing this kind of mental math every time you log in is something only the most hardcore nerds get excited about.

    • fxj 11 hours ago
      TOTP is also just a password + some computation. So where is the difference? There is a lot of security theatre around TOTP with the QR code and the need for an app, but you can write an 8-liner in Python that does the same once you extract the password out of the QR code.

         import base64
         import hmac
         import struct
         import time

         def totp(key, time_step=30, digits=6, digest='sha1'):
             # key is the base32 secret from the QR code; pad it to a multiple of 8 chars
             key = base64.b32decode(key.upper() + '=' * ((8 - len(key)) % 8))
             # HMAC over the number of time steps since the Unix epoch
             counter = struct.pack('>Q', int(time.time() / time_step))
             mac = hmac.new(key, counter, digest).digest()
             # dynamic truncation as in RFC 4226
             offset = mac[-1] & 0x0f
             binary = struct.unpack('>L', mac[offset:offset+4])[0] & 0x7fffffff
             return str(binary)[-digits:].zfill(digits)
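
      For reference, calling it with a throwaway base32 secret (the value below is just an example, not anything from the article) prints the current code:

         print(totp('JBSWY3DPEHPK3PXP'))   # a 6-digit code that rotates every 30 seconds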
      
      
      https://dev.to/yusadolat/understanding-totp-what-really-happ...
      • susam 1 hour ago
        Original source of the 8 liner Python code: https://github.com/susam/mintotp/blob/main/mintotp.py
      • crote 10 hours ago
        You are supposed to store the password in a Secure Enclave, which you can only query for the current token value. You are also supposed to immediately destroy the QR code after importing it.

        As I already mentioned, the fact that people often use it wrong undermines its security, but that doesn't change the intended outcome.

        • gruez 1 hour ago
          >You are supposed to store the password in a Secure Enclave,

          That's at best a retcon, given that the RFC was first published in 2008

          >You are also supposed to immediately destroy the QR code after importing it.

          Most TOTP apps support backups/restores, which defeats this.

          • craftkiller 1 hour ago
            > That's at best a retcon, given that the RFC was first published in 2008

            How so? Apple didn't invent the idea of a secure enclave. Here is a photo of one such device, similar to one I was issued for work back in ~2011: https://webobjects2.cdw.com/is/image/CDW/1732119

            No option to get the secret key out. All you can get out is the final TOTP codes. If anything, having an end-user-programmable "secure enclave" is the only thing that has changed.

            I think they probably meant "Secure Enclave" in the same way that people say "band-aid" instead of "adhesive bandage", "velcro" instead of "hook and loop fastener", and "yubikey" instead of "hardware security token".

        • alt227 10 hours ago
          IMO if it is possible to use a system wrongly which undermines its security, it is already broken.
          • lmz 10 hours ago
            This is how we get sites that block software tokens and only allow a whitelist of hardware based tokens.
          • Jean-Papoulos 9 hours ago
            I can chuck a brick at your head. Clearly the brick is broken
            • alt227 5 hours ago
              Bricks are meant to be built with, not thrown at heads.

              If you build with the brick properly you will have a great wall; if you don't, then it will fall down. Pretty simple.

          • lazide 10 hours ago
            There is no system which cannot be used wrongly in a way which undermines its security.
            • alt227 5 hours ago
              OP:

              > the fact that people often use it wrong undermines its security

              • lazide 5 hours ago
                Yes, that is what I am replying to.

                That applies to everything.

                • alt227 4 hours ago
                  Fair enough, I agree.
          • justincormack 7 hours ago
            I mean, TOTP is one of the earliest 2 factor systems, and works least well.
      • elderlybanana 9 hours ago
        Yes, TOTP is a secret + computation, and generating it is trivial once you have the secret. The security difference is that the TOTP secret is separate from the user’s password and the output is short-lived. Each of the two factors addresses different threat models.
      • Ferret7446 4 hours ago
        Exactly, which is why TOTP is "weak". "Real" 2FA like FIDO on a security key makes it much harder.
        • ACCount37 1 hour ago
          TOTP is the "good enough" 2FA.

          If I manage to intercept a login, a password and a TOTP code from a login session, I can't use them to log in, simply because the code expires too quickly.

          That's the attack surface TOTP covers - it makes stealing credentials slightly less trivial.

    • ulrikrasmussen 11 hours ago
      In practice most TOTP implementations also do not prove that you have a device which is the sole owner of the secret. Except for proprietary app-based solutions, the usual protocol is to display a QR code which just encodes the secret in plain text.

      As long as you never enter the secret anywhere but only do the computation in your head, this is just using your brain as the second factor. I would not call this a password since it is not used in the same way. Passwords are entered in plain text into fields that you trust, but that also means that passwords can be stolen. This proves that you are in possession of your brain.

      • swiftcoder 10 hours ago
        > Passwords are entered in plain text into fields that you trust, but that also means that passwords can be stolen

        The only difference here is that you are hashing the password in your head, instead of trusting the client to hash it for you before submitting it to the server.

        Which makes the threat model here what, exactly? Keyloggers, or login pages that use outdated/insecure methods to authenticate with the server?

        • ulrikrasmussen 10 hours ago
          Yes, but also plain guessing since passwords are usually chosen by the user and not generated by the server like TOTP secrets. Also phishing attacks tricking users into entering their passwords in fake login pages, and stolen password databases.
          • swiftcoder 7 hours ago
            > Yes, but also plain guessing since passwords are usually chosen by the user and not generated by the server like TOTP secrets.

            If we were talking about a >256-bit secret, I'd buy this, but in the human-calculated case I don't see how it actually helps, because you've swapped a ~8-character password for a 6-digit number, which is a significantly smaller search space to brute-force.

            > Also phishing attacks tricking users into entering their passwords in fake login pages

            yes, this is more-or-less a subset of the "keylogger/insecure login page" case

            > and stolen password databases

            There's still a server-side TOTP secret database to be stolen, no? And normally that would be hard to reverse-engineer the actual secret from, but again, you've shrunk the search space down to 1,000,000 entries, which is trivial to brute force.
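
            Back-of-the-envelope numbers behind that comparison (my arithmetic, not from the thread):

               # 6-digit code vs. an 8-character password over ~95 printable ASCII characters
               print(10 ** 6)   # 1000000
               print(95 ** 8)   # 6634204312890625, roughly ten orders of magnitude larger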

    • newpavlov 10 hours ago
      >The obvious next step is to do all the math in client-side code and just have the user enter the secret

      https://en.wikipedia.org/wiki/Password-authenticated_key_agr...

    • brna-2 11 hours ago
      Time-based skew makes it a changing second factor, and an additional changeable pass makes it the second factor. Also, if the first factor is a password manager or an ssh key, this is the second factor.

      The idea of it was so neat to me, I just had to tinker with it.

    • josephg 11 hours ago
      > 2FA is "something you have" (or ".. you are", for biometrics): it is supposed to prove that you currently physically posses the single copy of a token. The textbook example is a TOTP stored in a Yubikey.

      No, 2FA means authentication using 2 factors of the following 3 factors:

      - What you know (eg password)

      - What you have (eg physical token)

      - What you are (eg biometrics)

      You can "be the 2FA" without a token by combining a password (what you know) and biometrics (what you are). Eg, fingerprint reader + password, where you need both to login.

      • crote 10 hours ago
        Of course, but in most applications the use of a password is a given, so in day-to-day use "2FA" has come to mean "the other auth method, besides your password".

        Combine that with the practical problems with biometrics when trying to auth to a remote system, and in practice that second factor is more often than not "something you have". And biometrics is usually more of a three-factor system, with the device you enrolled your fingerprints on being an essential part of the equation.

      • moralestapia 55 minutes ago
        This.

        GP ignores the conventions of the field.

    • rcxdude 11 hours ago
      The single copy part would be a lot more common if it was widely supported to have multiple tokens registered to an account.

      And the main point (though I agree that it doesn't make it 2FA), is to not have the secret be disclosed when you prove that you have it, which is what TOTP also achieves, which makes phishing or sniffing it significantly less valuable.

      • crote 10 hours ago
        Are there any mainstream websites which only allow a single TOTP token to be enrolled? I can't remember having ever run into that issue. I do recall it occasionally being an issue with Passkeys, though.

        The non-disclosure is indeed neat, but the same can be achieved with a password. For example: generate public/private keypair on account creation. Encrypt private key with user password. Store both on server. On auth, client downloads encrypted priv key, decrypts it with user-entered password, then signs nonce and provides it to server as proof of knowledge of user password.
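
        A minimal sketch of that flow in Python, using the third-party cryptography package; the enroll/login helpers and record fields here are made up for illustration:

           import base64, hashlib, os
           from cryptography.fernet import Fernet
           from cryptography.hazmat.primitives.asymmetric.ed25519 import (
               Ed25519PrivateKey, Ed25519PublicKey)
           from cryptography.hazmat.primitives.serialization import (
               Encoding, PrivateFormat, PublicFormat, NoEncryption)

           def _wrapper(password, salt):
               # symmetric key derived from the user password, used only to wrap the private key
               k = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1, dklen=32)
               return Fernet(base64.urlsafe_b64encode(k))

           def enroll(password):
               # client side at account creation; the returned record is what the server stores
               priv = Ed25519PrivateKey.generate()
               salt = os.urandom(16)
               wrapped = _wrapper(password, salt).encrypt(
                   priv.private_bytes(Encoding.Raw, PrivateFormat.Raw, NoEncryption()))
               pub = priv.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
               return {'salt': salt, 'wrapped_priv': wrapped, 'pub': pub}

           def login(record, password, nonce):
               # client side at login: unwrap with the password, then sign the server's nonce
               raw = _wrapper(password, record['salt']).decrypt(record['wrapped_priv'])
               return Ed25519PrivateKey.from_private_bytes(raw).sign(nonce)

           record = enroll('hunter2')
           nonce = os.urandom(32)                # issued by the server per login attempt
           sig = login(record, 'hunter2', nonce)
           Ed25519PublicKey.from_public_bytes(record['pub']).verify(sig, nonce)  # server-side check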

        • fc417fc802 10 hours ago
          You don't need to involve a private key there. Modern password authentication algorithms never reveal the bare secret (outside of initial registration ofc). For example, PAKE uses Diffie-Hellman coupled with the (salted) password hash to independently derive the same session key on both sides of the connection.
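
          To make the "DH blinded by a password hash" idea concrete, here is a toy SPAKE2-style exchange with small made-up parameters - purely illustrative, not a secure implementation:

             import hashlib, secrets

             prime, g = 2**127 - 1, 5    # toy group parameters chosen here for illustration
             M, N = 7, 11                # fixed public blinding elements (toy values)
             w = int.from_bytes(hashlib.sha256(b'per-user-salt' + b'hunter2').digest(), 'big')

             x = secrets.randbelow(prime); X = pow(g, x, prime) * pow(M, w, prime) % prime  # client -> server
             y = secrets.randbelow(prime); Y = pow(g, y, prime) * pow(N, w, prime) % prime  # server -> client

             # each side strips the other's password blinding and finishes the DH exchange
             Z_client = pow(Y * pow(N, -w, prime) % prime, x, prime)
             Z_server = pow(X * pow(M, -w, prime) % prime, y, prime)
             assert Z_client == Z_server   # both sides hold g**(x*y); the password never crossed the wire

             session_key = hashlib.sha256(repr((X, Y, Z_client, w)).encode()).digest()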

          AFAIK the primary technical concerns are insecure storage by the server (bad hash or salt) or keylogging of the client device. But the real issue is the human factor - ie phishing. As long as the shared secret can't be phished it solves the vast majority of real world problems.

          Point being, TOTP on a rooted phone handled by a FOSS password manager app whose secret store the end user retains full access to will successfully prevent the vast majority of real world attacks. You probably shouldn't use a FOSS password manager on a rooted device for your self hosted crypto wallet though.

          • crote 9 hours ago
            Ah, of course! I did initially consider DH as an example, but discounted it because of the need for the server to store the plaintext password - the fact that you can just hash it first completely slipped my mind.

            I completely agree about phishing being the main attack vector. However, I do think malware is a not-too-distant second - which makes having a single device contain both your password and TOTP secret a Really Bad Idea. Having not-perfectly-secure TOTP codes only on your phone and a password manager DB only on your desktop is a pretty decent solution for that.

        • rcxdude 10 hours ago
          I would say the majority of services I have TOTP set up for only support one token at a time. It's only the bigger, techier services that have support for multiple.
      • fc417fc802 10 hours ago
        I guess it's a spectrum. At one extreme is the most physically resistant hardware token in existence. On the other end is a password transmitted in plaintext.

        An ssh keyfile requires an attacker to break into the device but is likely fairly easy to snag with only user level access.

        Bypassing a password manager that handles TOTP calculations, or your ssh key or similar, likely requires gaining root, and even then it could be fairly tricky depending on the precise configuration and implementation. For an insufficiently sophisticated attacker, that should generally mean needing both the master password and physical theft of the device.

        With TOTP or an ssh key managed exclusively by a hardware token, it is all but impossible for an attacker to get anywhere without stealing the device. Still, even TPMs have occasionally had zero-day vulnerabilities exposed.

    • madeofpalk 10 hours ago
      I don't think OP claimed it adds value.

      > It explores the limits of time-based authentication under strict human constraints and makes no claims of cryptographic equivalence to standard TOTP.

      I think they're just having fun.

    • PunchyHamster 10 hours ago
      Misunderstanding of 2FA annoys me.

      Like, a banking site requiring the phone's 2FA (whether an actual app or SMS): okay, you have to know the password and have access to the device, or at least a SIM card, so 2 things need to be compromised. Computer vulnerable, no problem; phone vulnerable, no problem; both need to be vulnerable to defeat it.

      ...then someone decided to put banking on the second factor, and now the phone has both the password and the token (or access to SMS) to make a transaction, so the whole system is one exploit away from defeat.

  • EPWN3D 4 hours ago
    If you can be tied to a chair and beaten with a rubber hose until you produce the token, it's just a password, albeit one that rotates.

    TOTP works because you have to possess the secure device at the time you're authenticating. If you don't have the device, then no amount of time with the rubber hose can make you cough up the required token.

  • jrm4 2 hours ago
    So, in my head, once I heard the idea, I started thinking of something WAY different, and maybe it's worth considering. I was thinking of something like a combination "security question," "captcha," and "secondary identifier" (whatever that thing is that Google et al. do when they tell you to match the picture on your phone to complete the login).

    I don't know, something like "name the fruits that correspond to your first school colors" or similar

    • mindslight 2 hours ago
      Maybe some type of long physical probe you have to sit on and it generates a hash from the exact shape of your "cavity".

      Seriously, am I the only one who was happier without any of this "2FA" crap? VPS/Domain/Google with a hardware token is the one narrow scope where I see any value, and even those I could do without. Every other site is just a non-consensual nagging that hassles me when logging in. Bank accounts are the worst, as every bit of friction for checking my balance/transactions actually decreases my security!

      • jrm4 46 minutes ago
        As op, yeah I'm actually with you on this.

        And at the very least, 2FA should be a much more "openly open standard." Which is to say, just do TOTP everywhere, let people have their initial generating key and be done with it.

        I generate mine from my computer when I can, but I'm surrounded by all this magic that implies that something different is going on, e.g. the Duo system which I'm forced to use by my job and doesn't make this sort of thing easy, if possible at all.

      • throwaway132448 1 hour ago
        Unfortunately security theatre is viral, and nobody gets paid saying we should have less of it.
  • brna-2 12 hours ago
    This is an early experiment in human-computable TOTP. Not production crypto, but a serious attempt to reach reasonable security for plausible 2FA. Protocol revisions, criticism, and contributions are welcome.
    • ramon156 10 hours ago
      I don't really get what tone you're going for. Is this "a serious attempt", or is this "something that does not guarantee any cryptographic security"?

      Nonetheless I do not see what issues 2FA has that this solves. Having the electronic device is the security. Without it there is no security.

      • leothetechguy 10 hours ago
        The security advantage I see in mTOTP is that you never reveal the password to the system you are authenticating with, and there is also no electronic device that can be compromised.
  • pona-a 8 hours ago
    Yes! I was thinking about a similar idea back in October, using a "keyed hash" of the challenge computed with playing cards. I have no idea how secure this is, but the concept itself is exciting: the mental labor might function as a useful anti-coercion/phishing tool.
  • barbegal 12 hours ago
    An interesting idea, but in theory just three correct pass codes and some brute force will reveal the secret key, so you'd have to be very careful about only inputting the pass code to sites that you trust well.

    It's definitely computable on a piece of paper and reasonably secure against replay attacks.

    • MattPalmer1086 11 hours ago
      I was wondering about the overall security. How did you determine that 3 pass codes and brute force will reveal the secret key?
      • MattPalmer1086 11 hours ago
        Thinking about it, there are only 10 billion different keys and somewhat fewer sboxes.

        So given a single pass code and the login time, you can just compute the pass codes for all possible keys. Since more than one key could produce the same pass code, you would need 2 or 3 to narrow it down.

        In fact, you don't even need to know the login time really, even just knowing roughly when would only increase the space to search by a bit.
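
        A sketch of that search (mtotp_code() below is just a stand-in for the actual mTOTP computation, and the 10 billion figure is the key-space estimate from above):

           def candidate_keys(observed, key_space=10**10):
               # observed: list of (timestamp, code) pairs captured from real logins
               for key in range(key_space):
                   if all(mtotp_code(key, t) == code for t, code in observed):
                       yield key

           # one observation leaves roughly key_space / 10**6 candidates;
           # a second or third observation almost always pins down the real key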

        • brna-2 10 hours ago
          Also @MattPalmer1086 the best solution I have for this now is to have several secret keys and rotate usage. Would be nice to have some additional security boosts.
          • MattPalmer1086 7 hours ago
            Key rotation among a set of keys only partially mitigates the issue (have to obtain more samples).

            It has its own sync problems (can you be sure which key to use next, and did the server update the same as you, or did the last request not get through?).

            This post on security stack exchange seems relevant.

            https://security.stackexchange.com/questions/150168/one-time...

        • brna-2 10 hours ago
          Yep, known issue. I was hoping someone could spice the protocol up without making it mentally too heavy; HN is full of smart, playful people.
    • brna-2 11 hours ago
      Yep, I am aware: 2 or 3 OTPs and timestamps plus some brute forcing using the source code. Server-side brute force by input should or could be implausible. But that is why I am signaling here that I would love a genius or a playful expert/enthusiast contributing a bit or two to it - or becoming a co-author.
      • i-con 10 hours ago
        I'm not an expert, but roughly know the numbers. Usually with password-based key derivation, one would increase resource needs (processor time, memory demand) to counter brute forcing. Not an option for a human brain, I guess.

        So the key would have to be longer. And random, or a lot longer. Over 80 random bits is generally a good idea. That's roughly 24 decimal digits (random!). I guess about 16 alphanumerical characters would do too, again random. Or a very long passphrase.

        So either remember long, random strings or do a lot more math. I think it's doable but really not convenient.

        • thfuran 10 hours ago
          A handful of words is generally more memorizable than the same number of bits as a random alphanumeric string. You wouldn’t need a very long pass phrase for 80 bits as long as you’re using a large dictionary.
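
          For instance (my numbers, not the commenter's): with a diceware-style list of 7776 words, each uniformly chosen word adds about 12.9 bits, so seven words already clear 80 bits:

             import math
             print(math.log2(7776))         # ~12.92 bits per word from a 7776-word list
             print(7 * math.log2(7776))     # ~90.5 bits for a seven-word passphrase
             print(6 * math.log2(20000))    # ~85.7 bits with a larger 20,000-word dictionary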
  • vbarrielle 12 hours ago
    The idea is interesting, but I don't think this qualifies as a second factor, as it can be reduced to a factor you have to remember, so equivalent to a password. The second factor should be derived either from something you own, or something that can be obtained from biometry.
    • ulrikrasmussen 11 hours ago
      In that case nothing based on RFC 6238 would qualify as a second factor, because nothing prevents you from just remembering the TOTP secret and computing the one-time code using a piece of JS. Or even putting it in your password manager.

      I think it is too simple to reduce the definition of second factor to how it is stored. It is rather a question of what you need to log in. For TOTP the client has the freedom to choose any of (not exhaustive):

      1. Remember password, put TOTP in an app on smartphone => Client has to remember password and be in possession of smartphone.

      2. Put password and TOTP in password manager => Client has to remember the master password to the password manager and be in possession of the device on which it runs. Technically, you have to be in possession of just the encrypted bits making up the password database, but it is still a second factor separate from the master password.

      • charcircuit 11 hours ago
        For proper 2nd factors the secret is a hardware key that practically can not be extracted so it is impossible for someone to know it. They must obtain the piece of hardware to use the key.
        • fc417fc802 10 hours ago
          Can't say I agree with this take. Sure, something hardware bound is more secure under certain threat models. For others it's largely irrelevant. There are also drawbacks, such as not being able to back it up. That might or might not matter. "Just get a second hardware token, register that as well, and store it somewhere safe" won't always be a realistic (or perhaps desirable) option for everyone in every scenario. It certainly reduces your flexibility.
          • charcircuit 10 hours ago
            If a factor is "something you own", it is by design that if you lose it and no longer own it then you can't pass that check.
            • fc417fc802 10 hours ago
              Not true. There is no requirement that the user be incapable of cloning or recreating the possession. That's an additional constraint that some parties choose to impose for various reasons (some understandable, some BS).

              In the end it's all just hidden information. The question is the difficulty an attacker would face attempting to exfiltrate that information. Would he require physical access to the device? For how long? Etc.

              If the threat model is a stranger on the other side of an ocean using a leaked password to log in to my bank account but I use TOTP with a password manager (or even, god forbid, SMS codes) then the attack will be thwarted. However both of those (TOTP and SMS) are vulnerable to a number of threat models that a hardware token isn't.

              • crote 9 hours ago
                That's like saying "There is no requirement that the user doesn't tell their password to other people - all that matters is that the user remembers it".

                The "additional constraint" is the entire point. You can't get rid of it without seriously degrading your security.

                For example, a TOTP secret stored in a password manager will be leaked at the same time as the password itself when the password manager is compromised - which once again allows for impersonation by an overseas attacker.

                And when you're using a password manager a leak on the website side is not a real threat, as yours is unique per-website and contains enough randomness to not be guessable if its hash leaks.

                If anything, TOTP is the weaker factor here, as the website needs access to the raw TOTP secret to verify your code - which means a compromised website is likely going to mean its stored TOTP secrets are leaked in plaintext!

                • fc417fc802 8 hours ago
                  > That's like saying "There is no requirement that the user doesn't tell their password to other people - all that matters is that the user remembers it".

                  ... yes? I wholeheartedly agree with that statement so I'm really not sure what your point is. I have shared passwords with family members in the past. It works when it works and it doesn't when it doesn't.

                  > The "additional constraint" is the entire point.

                  I believe I already refuted that. Every practical implementation will have weaknesses. Being vulnerable to a greater number of attack vectors does not disqualify the method. All that matters is that the method works as intended for the attack vectors of interest.

                  I can bypass the lock on my front door by breaking a window. That doesn't mean that the thing on my front door doesn't qualify as a physical lock. It just means that my security model is vulnerable to certain attack vectors. That might or might not be a problem.

                  > You can't get rid of it without seriously degrading your security.

                  Whether or not my security is degraded depends on the extent to which the attack vectors the "additional constraint" was defending against are relevant to me. Writing my password on a post-it note and sticking it to my monitor degrades my security if the attack vector is someone breaking and entering my home. However it does not degrade my security even slightly if the only attack vector I care about is a stranger on a different continent illicitly logging into the associated account.

                  In general your thinking on this topic seems overly rigid and dogmatic. Security practices exist only to serve real world usecases. The expected attack vectors matter. So does user inconvenience. Something that is less secure but more convenient can often be the "more correct" solution in the real world. This is no different than how businesses will often choose to implement processes with well known flaws coupled with a response plan or insurance policy. For example shipped software often has bugs that were already known prior to release.

                  > a TOTP secret stored in a password manager will be leaked at the same time as the password itself when the password manager is compromised

                  Agreed. I went out of my way earlier to acknowledge the vulnerability to additional threat models.

                  > when you're using a password manager a leak on the website side is not a real threat

                  Well sure, but how are you going to get the vast majority of your users to use a password manager? They can always choose not to and there's approximately nothing you can do to reliably detect that.

                  You could mandate switching to a key based solution but then you'll get lots of complaints and maybe even lose customers. Or you could augment passwords with something else. TOTP is reasonable. So are SMS or email codes. Despite not being as secure or foolproof as a hardware token those solutions are sufficient for many scenarios.

        • ulrikrasmussen 10 hours ago
          Yes, that is certainly a more secure second factor since there are fewer ways for an attacker to steal it, but I don't think that should be a necessary condition for it to be called a second factor at all.
          • charcircuit 10 hours ago
            I'm specifically talking about the "something you own" second factor. There are other factors which could be used as a second factor.
      • fc417fc802 10 hours ago
        > I think it is too simple to reduce the definition of second factor to how it is stored.

        I think the defining characteristic is how it is used. I can use a password like a second factor, and I can use a TOTP code like a password. The service calls it a password or a second factor because that was the intention of the designer. But I can thwart those intentions if I so choose.

        Recall the macabre observation that for some third factor implementations the "something you are" can quickly be turned into "something your attacker has".

      • Perz1val 11 hours ago
        I put them in my password manager
  • wolvoleo 6 hours ago
    Interesting idea but I don't think my users will grok this :)

    The worst thing about it is that people will go like "uuuh naaaah" and will just grab a random app off the play store and put their code in it. Now you are leaking secrets to whatever random app they use.

  • eisbaw 11 hours ago
    or we could use asymmetric biometric fingerprints. Turns out features can be extracted into public and private sets, and both are required for a match. I hold a patent on it btw
    • crote 10 hours ago
      I remain very skeptical of fingerprints.

      They are both too mutable (cuts and burns will alter them) and not mutable enough (you can't re-roll your fingerprints after a leak).

      On top of that, you are also literally leaving them on everything you touch, making it trivial for anyone in your physical presence to steal them.

      They are probably pretty decent for police use, but I don't believe they are a good replacement for current tech when it comes to remote auth.

      • fc417fc802 9 hours ago
        Biometrics are "something you are" but they are not a good substitute for either "something you have" or "something you know".

        My concern with them nearly always comes down to privacy. They are far too easy to abuse for collecting and selling user data. There are probably ways around that but how much will you ever be able to trust an opaque black box that pinky promises to irreversibly and uniquely hash your biometric data? It's an issue of trust and transparency.

  • MattPalmer1086 11 hours ago
    What is the purpose of the 6th digit?

    It doesn't add any security, as it is trivially computable from the other digits already computed.

    It appears to be a checksum, but I can't see why one would be needed.

    • brna-2 11 hours ago
      I originally included it as a structural integrity digit, with the option for early rejection on the server side. That early exit check is not implemented in the current PAM module yet.

      This is an early POC, and sanity checks like this are exactly the kind of feedback I’m looking for.

    • gildenFish 11 hours ago
      It probably isn't for security, it is more likely a quick check that the code that you memorized makes sense.
  • onion2k 11 hours ago
    I don't think people plan what time to log into things.
    • brna-2 11 hours ago
      Yep, they did not need to when the calculation was done in real time on a mobile phone. :D
  • deafpolygon 11 hours ago
    I see 2FA is often misunderstood by people. The basic premise with 2FA is that you combine “something you know” with “something you have”.

    You are already part of the 2FA — you’re the first factor: “something you know”.

    The second factor: “something you have” — often a personal device, or an object. This is ideally something no one else can be in possession of at the same time as you are.

    • sigio 10 hours ago
      Except that for 99% of my passwords, I am 100% sure I do not, and never will, know them; they are 60-100+ bytes of random data, known only by my password manager. The only thing I know is the passphrase for my password manager. TOTP codes are also stored in there, but I see it more as replay protection for captured passwords, though this is also really a non-issue in this time of almost no plaintext protocols.
  • swiftcoder 11 hours ago
    Isn't this just manually hashing a password with a timed-salt? I don't see how this relates to TOTP
    • ulrikrasmussen 11 hours ago
      TOTP is also just hashing a password with a time salt. The purpose is just to prove that you are in possession of the device that stores the password without actually ever entering the password anywhere where it can be leaked. In this case the device is just your brain.
      • swiftcoder 10 hours ago
        > In this case the device is just your brain

        And that makes it a password (i.e. the primary factor, not a second factor). The whole point of a second factor is that it's not trivially cloneable (hence why, for example, SMS is a poor form of 2FA in the presence of widespread SIM cloning attacks).

        • ulrikrasmussen 10 hours ago
          No, the defining characteristic of a password is also how it is used: it is communicated in the clear to the verifier, thus revealing it to eavesdroppers. It is highly non-trivial to clone the knowledge in someone's brain if they never openly communicate the mTOTP secret but only do the computations in their head.
          • swiftcoder 7 hours ago
            > No, the defining characteristic of a password is also how it is used: it is communicated in the clear to the verifier

              This is only true if the verifier lives on your local terminal - otherwise we use an encrypted channel to transmit to the verifier, or do exactly the same type of timed-salted-hash scheme used here to transmit without revealing the password.

            • ulrikrasmussen 6 hours ago
              The thing is that you are sometimes tricked into giving the password to someone who is posing as the verifier.
          • crote 8 hours ago
            Not true. There are lots of authentication schemes where the plaintext password is never communicated. This becomes rather crucial when the client doesn't know for sure yet what the identity of the other side is. See for example wifi encryption.

            Cloning the knowledge in someone's brain is fairly easy. You just need a wrench.

            • ulrikrasmussen 6 hours ago
              Yes, but that is not how passwords work since the protocol for proving knowledge is that you enter it into the HTML form served by the party claiming to be the verifier.

              If we are talking rubber-hose cryptography then a physical hardware token is just as insecure as a brain. Most people are not hacked via wrenches.

  • cuckovic 11 hours ago
    Really nice idea