The Public Suffix List has always been an unscalable kludge. It’s currently necessary, and the people who volunteer their time for it are heroes, but… man, I wish there were something better.
This change by curl is probably a good idea, but who knows whether people will actually keep their PSL up to date. I also wonder how many installations don’t keep it up to date because curl autodetected libpsl.
I agree. It’s weird to me that we just ship a list of these things with every browser.
HSTS preload list feels similarly weird to me.
At least the HSTS preload list is strictly an improvement. It just preloads information that would otherwise have come from the origin itself, and it doesn’t change the behaviour of the platform beyond the first visit to the site (which, for a preloaded entry, is automatically upgraded to HTTPS).
How could this be done better? A field in the DNS maybe? (For instance, the client would need to make a DNS request to co.uk to check a TXT record that says whether it is a public suffix… but DNS can be attacked.)
Yeah, I thought of a field in the DNS too. That would solve the cookie problem at least. It would also possibly break some use cases of the PSL.
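To make the TXT-record idea concrete, the client-side check might look something like this sketch (using dnspython; the `_public-suffix` record name and the `is-public-suffix=1` value are invented for illustration, nothing like this exists today):

```python
# Hypothetical check: ask a name whether it declares itself a public suffix.
# The "_public-suffix" label and the TXT value format are invented.
import dns.resolver  # pip install dnspython


def is_public_suffix(domain: str) -> bool:
    try:
        answers = dns.resolver.resolve(f"_public-suffix.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False  # no record: fall back to whatever the PSL says today
    return any(
        b"".join(rdata.strings).decode() == "is-public-suffix=1"
        for rdata in answers
    )


if __name__ == "__main__":
    print(is_public_suffix("co.uk"))
```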
As for attacks, note that the basis for the PSL is DNS anyway. The PSL is not a security mechanism. Edit: see also DBOUND which I know nothing about (it’s just mentioned a few times in PSL documentation).
The “correct” solution is probably just making cookies strictly per-origin rather than having domain hierarchy inheritance. If you want to share cookies within a domain hierarchy, you can redirect the user around a bit or whatever.
Of course that is a breaking change so is unlikely to happen.
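For what it’s worth, the redirect dance could look roughly like this toy sketch (the hostnames, the /sync endpoint, and the token handling are all invented; a real version would sign and expire the token):

```python
# Toy sketch of sharing login state between two strictly per-origin sites by
# bouncing the browser through a shared "sso" origin instead of relying on a
# .example.com-wide cookie. Everything here is invented for illustration.
from urllib.parse import urlencode


def start_login_sync(next_url: str) -> str:
    # app.example.com redirects the browser to the shared origin, carrying a
    # short-lived token (in reality: random, signed, single-use).
    token = "opaque-session-token"
    return "https://sso.example.com/sync?" + urlencode(
        {"token": token, "next": next_url}
    )


def sso_sync_response(token: str, next_url: str) -> dict:
    # sso.example.com sets its own first-party cookie, then bounces back.
    return {
        "status": 302,
        "headers": {
            "Set-Cookie": f"session={token}; Secure; HttpOnly; SameSite=Lax",
            "Location": next_url,
        },
    }


if __name__ == "__main__":
    hop = start_login_sync("https://shop.example.com/cart")
    print("redirect 1:", hop)
    print("redirect 2:", sso_sync_response("opaque-session-token",
                                           "https://shop.example.com/cart"))
```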
I wonder how easy it would be to introduce a simple protocol for checking authorisation of cookie scope. If each domain name that you own publishes a public key in a TXT record then you may set cookies at that scope if you present your subdomain name signed with the accompanying private key (possibly with a DKIM-like selector so that you can roll over keys easily). This avoids the need for any centralised authority. The registrar for .co.uk simply does not set this record (and, with DNSSEC, has a verifiable NXDOMAIN response) and so no one can set cookies at that scope. If I own example.co.uk, I create a key pair, sign the names of my subdomains, delete the private key, put the public key in DNS and then embed the signed subdomains in the headers for each subdomain. If I add more subdomains, I do the same thing again with a new selector. There’s little risk of key compromise because the private key is ephemeral in this use case, but if I give a third party access to a subdomain then I remove the DNS record for the old key and reissue the signatures with a new one.
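Roughly, the flow might look like this sketch (Ed25519 via the Python cryptography package; the selector-based `_cookie-scope` TXT name and the idea of carrying the signature in a response header are my assumptions, not an existing mechanism):

```python
# Sketch of the signed-subdomain idea above. The TXT record name
# ("sel1._cookie-scope.example.co.uk"), the base64 encodings, and shipping the
# signature alongside the response are invented for illustration; only the
# Ed25519 signing/verification is real.
import base64

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# --- domain owner, once per selector ---------------------------------------
private_key = Ed25519PrivateKey.generate()
public_key_txt = base64.b64encode(
    private_key.public_key().public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    )
).decode()
# publish public_key_txt as a TXT record at sel1._cookie-scope.example.co.uk

signed_subdomain = base64.b64encode(private_key.sign(b"app.example.co.uk")).decode()
# embed signed_subdomain in the responses served from app.example.co.uk,
# then throw the private key away
del private_key

# --- browser, when app.example.co.uk tries to set a cookie for example.co.uk
def may_set_parent_cookie(subdomain: str, signature_b64: str, txt_value: str) -> bool:
    key = Ed25519PublicKey.from_public_bytes(base64.b64decode(txt_value))
    try:
        key.verify(base64.b64decode(signature_b64), subdomain.encode())
        return True
    except InvalidSignature:
        return False


print(may_set_parent_cookie("app.example.co.uk", signed_subdomain, public_key_txt))
```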
I think it would be better to keep this at the HTTP layer rather than DNS, for security and simplicity. We can also keep backwards compat with the current allow-if-not-in-PSL system. Something like this:
- HTTP servers can respond with a cookie policy at e.g. .well-known/cookie-policy, or it can be a header or something.
- If a cookie-policy exists for a domain then it whitelists subdomains that are permitted to set cookies on the current domain (or maybe there’s a public key like you said, but I don’t see a good reason for that).
- When a browser is instructed to set a cookie on a superdomain, it checks the cookie-policy of the superdomain, falling back to current behaviour if it is not found.
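To make that concrete, the browser-side check could look something like this sketch (the /.well-known/cookie-policy path comes from the list above, but the JSON shape and the PSL fallback stub are invented):

```python
# Sketch of the proposal above. The policy format
# ({"allow": ["app.example.co.uk", "*.internal.example.co.uk"]}) is invented,
# and psl_would_allow() stands in for today's PSL-based behaviour.
import fnmatch
import json
from urllib.request import urlopen


def psl_would_allow(cookie_domain: str, origin_host: str) -> bool:
    return True  # placeholder: assume the current PSL check would allow it


def may_set_cookie(origin_host: str, cookie_domain: str) -> bool:
    url = f"https://{cookie_domain}/.well-known/cookie-policy"
    try:
        with urlopen(url, timeout=5) as resp:
            policy = json.load(resp)
    except (OSError, ValueError):
        # No policy published (or unreachable/garbled): current behaviour.
        return psl_would_allow(cookie_domain, origin_host)
    return any(fnmatch.fnmatch(origin_host, pattern)
               for pattern in policy.get("allow", []))


if __name__ == "__main__":
    print(may_set_cookie("app.example.co.uk", "example.co.uk"))
```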
The thing is that browsers need a list. An asynchronous request before page load won’t work. It needs to be an instant lookup or cookies are broken.
Why? The lookup needs to happen when you set cookies. The first time you visit a site, you don’t need it. When this site sets cookies, then you do the asynchronous lookup and store, with the cookie, which domains it’s valid for (and maybe a TTL, to let you do revocation). You never need to do the lookup on a latency-sensitive path.
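Something like this sketch, i.e. the scope gets resolved off the hot path and cached next to the cookie with a TTL (all the names and the lookup stub are invented):

```python
# Sketch: accept the cookie for the setting origin immediately, then resolve
# its wider scope asynchronously and store the result (plus a TTL) alongside
# it. lookup_allowed_domains() stands in for whatever policy lookup (DNS,
# .well-known, ...) ends up being used.
import asyncio
import time
from dataclasses import dataclass, field


@dataclass
class CookieRecord:
    name: str
    value: str
    origin: str
    allowed_domains: list[str] = field(default_factory=list)
    scope_expires_at: float = 0.0  # TTL so the answer can be re-checked/revoked


async def lookup_allowed_domains(origin: str) -> tuple[list[str], float]:
    await asyncio.sleep(0.1)  # pretend this is the slow policy lookup
    return [origin, "example.co.uk"], time.time() + 3600


async def store_cookie(jar: list, name: str, value: str, origin: str) -> None:
    record = CookieRecord(name, value, origin, allowed_domains=[origin])
    jar.append(record)  # immediately usable for the origin that set it
    # In a real browser this would run in the background; awaited here only
    # so the example is self-contained.
    record.allowed_domains, record.scope_expires_at = await lookup_allowed_domains(origin)


if __name__ == "__main__":
    jar: list = []
    asyncio.run(store_cookie(jar, "session", "abc", "app.example.co.uk"))
    print(jar[0])
```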
Many sites give you a cookie upon the first request. You need to parse it correctly before rendering the document.
I don’t think any browser would find it acceptable to pause any request and have it wait for a second query (over DNS).
They give you a cookie, but the cookie is always valid for that site. The check is needed to determine which other domains are allowed to receive the cookie.
No. A browser should not accept a cookie for an effective top level domain (e.g. .com or .github.io). It *needs* to check :)
What is the security implication of doing so? If I set a cookie for .com in my page, but that cookie is only available to JavaScript running on that page and is discarded and never sent in subsequent requests as soon as the browser determines that it’s invalid, what’s the problem? It introduces a communication channel between the web server and the JavaScript served by that web server. Many of those exist already.
If you set a cookie for .com in your HTTP response for a document and then subsequently load resources off of other *.com domains in the same document, the browser ought to know.