
I’m interested in using public-key cryptography for stateless authentication on websites.

The current authentication standard is email + password. Passwords are bad because they can be guessed, forgotten, stolen, or divulged. Passkeys are better because they rely on public-key cryptography, and no one can (realistically) guess your private key.

But AFAIK, passkeys only use public-key cryptography during login. Afterwards, the server sets a cookie or returns a bearer token or something along those lines. Those can still be stolen.

Why not go all the way: imagine a website that lets you sign up just by submitting your public key, from which it can extract your name and email. (I’m told some sites take this approach during signup, but then revert to traditional cookies or bearer tokens for subsequent HTTP requests.) Once signed up, you include a detached signature in the Authorization header of every subsequent HTTP request in your session. It could be, say, a signature of the request body. The server verifies the signature and knows the request must have come from you, thus authenticating you. Assuming your clock is synced, you could include a timestamp in the signed text so the server can reject stale requests and shrink the window for replay attacks.

This approach is fully stateless, password-less, and remains secure even if requests are intercepted in plain text. Signature verification is cheap and fast. Public-key cryptography is battle-hardened technology – much more so than plain-old passwords. The approach also puts session management completely in the control of end users since all they need to do to ‘log out’ is stop signing their requests. And to switch profiles, they just switch to a different keypair.

Features built into public-key cryptography such as key revocation and rotation further increase security (e.g. to disable or update an account). End-to-end encryption is available out of the box if desired. (HTTPS connections are already encrypted, but it’s a bonus for added security – Heartbleed comes to mind, or maybe it’s a useful fallback when something goes wrong with the server’s certificate.)

To generate and use key pairs, I’m picturing a browser extension that interfaces with a native app. This app would call out to gpg to generate a key pair for a given domain, share the resulting public key with the browser, and sign subsequent requests to that domain automatically. The private key would never touch the browser. Access to the native app could be guarded with biometrics, similar to how passkeys work.

I want to know what challenges this approach may face in terms of security. Also, surely I can’t be the first person to think of this approach. At the risk of embarrassing myself: I’m an experienced full-stack web developer but a cryptography noob.

I’m guessing attackers would be most interested in trying to gain access to the native app since it holds private keys.

  • What you are describing sounds a lot like TLS token binding. Commented May 6 at 23:04

3 Answers

Your goals are largely implemented in TLS with client certificates (also known as “mutual TLS”). Both the client and the server have a key pair and a corresponding certificate – though it's also possible to exchange raw public keys or use dummy certificates if you don't care about certificate authorities. After each party has authenticated itself, the subsequent network traffic (e.g., HTTP requests and responses) is encrypted and integrity-protected. It's not necessary to implement application-level sessions, because TLS has its own session resumption mechanism. It's also possible for the client to keep the private key away from the browser by storing it on a smartcard or some other hardware token.

Signing individual HTTP requests as you propose is rather inefficient. The common solution in a network scenario is to perform an (Elliptic-Curve) Diffie-Hellman Key Exchange and establish a shared secret. This can then be used to derive keys for symmetric algorithms and message authentication codes (the symmetric equivalent of digital signatures) for the subsequent bulk data. So public-key cryptography is typically only used in the setup phase. Note that GPG is intended for a completely different scenario where offline signatures are required.

If you use a static public key for encryption, there's another problem: An attacker can record encrypted data, and if they ever manage to obtain the corresponding private key (maybe long after the legitimate user has abandoned it), they can go back and decrypt the data. Using the (Elliptic-Curve) Diffie-Hellman key exchange with ephemeral keys fixes this as well, because it provides forward secrecy.

And of course the usual warning applies here: Don't roll your own. It's extremely difficult to design secure protocols, because there are countless subtle mistakes you can make. So this is all fine as a thought experiment, but don't use home-grown protocols in production.

Also note that both in your scheme and in mutual TLS, attackers may still be able to forge HTTP requests using CSRF. The underlying issue is that it's impractical for the user to inspect every HTTP request before sending it to the server. They have to rely on their browser, which can enable attackers to trigger requests that the user never meant to send.

  • Thanks. For kicks, I ran a benchmark test. It takes 0.13 seconds to sign the request my browser made to this very page, and 0.025 seconds to verify the signature. A total of 0.155 seconds. At first, that didn’t seem like huge overhead to me, but added to each of the billions of requests sent every day, yeah, that is a lot of extra time. Commented May 7 at 0:50
  • "dummy certificates": presumably this refers to certificates not signed by a CA which are otherwise exactly the same as a signed cert. Commented May 7 at 17:44
  • @JimmyJamessupportsCanada: I'm referring to any certificate where the signature is ignored, e.g., a self-signed certificate. There's an RFC for raw public keys in TLS which solves this in a cleaner way, but it's not widely supported. Commented May 7 at 20:18
  • "I'm referring to any certificate where the signature is ignored, e.g., a self-signed certificate." That definition includes CA root certs, no? Commented May 7 at 20:41
  • @JimmyJamessupportsCanada: Yes, but I'm talking about the certificate which the client or server presents for itself. There's no reason for those to be CA certificates. My point is: TLS also works without CAs (if trust is established in some other way), either by using the referenced RFC for raw public keys or by using standard certificates but ignoring everything except for the public key. Commented May 8 at 6:07

“Afterwards, the server sets a cookie or returns a bearer token or something along those lines. Those can still be stolen.”

Not if your site supports TLS.

“This approach is fully stateless, password-less, and remains secure even if requests are intercepted in plain text.”

Not without TLS. A man-in-the-middle can replace your public key with the attacker's key during signup, then intercept all communication between you and the server.


What you propose already exists: it's called a client certificate (mutual TLS). It lets the server know who the client is, without the need for a password.

An extension plus a separate native application to manage yet another security protocol will not be easily adopted. Password managers are amazing for security: they create unguessable passwords and keep everything in encrypted storage, and even so, most users don't use them.

  • “Not if your site supports TLS.” I believe XSS attacks could still steal cookies. “A MitM attack can replace your public key with the attacker's key, intercepting all communication between you and the server.” That doesn’t sound necessary if the communication is all plain text already. What I meant was, the attacker couldn’t sign requests on my behalf. Or are you talking about something else? Commented May 6 at 23:10
  • 1
    The attacker can sign in your behalf. Without TLS, you cannot be sure you are talking with the server or the attacker. So the attacker can intercept the traffic, strip your key and sign with theirs. Commented May 7 at 0:11
  • I see what you mean. And I’m guessing TLS is not itself vulnerable to such impersonation because a certificate authority vouches for the server. Commented May 7 at 0:30
  • Yes. If banks, governments and crypto exchanges can rely pretty much solely on TLS, so can you. Commented May 7 at 0:40
  • Note that corporations and governments inject their own CAs into OSes and browsers to bypass TLS trust chains, so rolling your own trust store can make sense. In particular, client certificates are almost always signed by a private CA. Commented May 9 at 8:55

You can also look into Demonstrating Proof of Possession (DPoP), an OAuth extension that lets you bind access tokens to private keys. Whenever you want to use an access token, you have to prove that you hold the corresponding key.

You can create and store a non-exportable key using the Web Crypto API, so there is no need for an extension.
