I’m looking into implementing CORS (again, it seems like this is something that comes up every few years, and every few years I have to re-orient myself about how it all works), and as always it’s so confusing. (Here I’m talking about Access-Control-Allow-Origin type stuff, primarily, as CORS was initially a structured way to relax the same-origin policy on requests. I’m not as familiar or concerned with some of the newer headers for mitigating Spectre-type attacks. Should I be?)
Any CORS experts out there with “best practice” recommendations? The security and threat model is so counterintuitive.
Is the whole point of the CORS model basically to handle the browser’s decision to send cookies on every request? If the browser just refused to send cookies by default on non-same-origin requests and prompted the user to “Allow Once” or “Allow Always”, like it does for saving passwords, wouldn’t that also solve the problem (not to mention CSRF, which CORS doesn’t address)?
The server needs to handle arbitrary traffic from arbitrary clients, so resources should be protected appropriately. The only thing particularly unique about the browser is that it chooses to send cookie credentials, possibly against the user’s intentions.
With all that in mind, it seems like these are maybe best practices (somewhat counterintuitively):
When possible, always set Access-Control-Allow-Origin: *. Everywhere online seems to recommend omitting the header when it isn’t necessary, or being as specific as possible about the origins you allow, validating against a regex or an allowlist. But since ACAO * does not allow credentials, it’s actually safer, right? And if your backend has to accept traffic from, say, curl or whatever anyway, then you might as well acknowledge that fact and accept that arbitrary JS scripts out there can also hit the endpoint (as long as, like curl, they don’t include a cookie). Is there a downside to this approach?
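Concretely, the public case might look like this minimal sketch (Python; function name is a placeholder):

```python
# Sketch: response headers for a public, cookie-free endpoint.
# Browsers refuse to combine the wildcard origin with credentials,
# which is exactly what makes "*" the safer option here.

def public_cors_headers() -> dict:
    """Headers for an endpoint anyone (curl, scripts, browsers) may call."""
    return {
        "Access-Control-Allow-Origin": "*",
        # Deliberately no Access-Control-Allow-Credentials: with "*",
        # browsers won't expose a credentialed response anyway.
    }
```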
Access-Control-Allow-Credentials: true
- this is the truly dangerous one, since the whole threat model of CORS is about a malicious website sending an authenticated request to your server without the user’s consent. So in this case, you do need to carefully set ACAO to exactly the origin your own real site is served from.
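A hedged sketch of that credentialed case (the allowlist entry is an assumption; substitute your real origin): echo back only an exact allowlisted Origin, never the wildcard, and send Vary: Origin so shared caches don’t hand one origin’s response to another:

```python
# Sketch: CORS headers for a credentialed endpoint.
ALLOWED_ORIGINS = {"https://app.example.com"}  # assumption: your real site

def credentialed_cors_headers(request_origin: str) -> dict:
    if request_origin not in ALLOWED_ORIGINS:
        return {}  # no CORS headers: the browser will block the read
    return {
        "Access-Control-Allow-Origin": request_origin,  # exact echo, not "*"
        "Access-Control-Allow-Credentials": "true",
        "Vary": "Origin",  # keep caches from mixing origins' responses
    }
```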
What should you do about CORP, COEP, etc - all the new headers?
When it comes to CORS, the winning move is not to play. Why do you need separate origins? Have one URL which proxies to two backends and then you don’t need CORS. I don’t understand your domain, so maybe you can’t do that, but figuring out if you can avoid CORS is always step 1.
In terms of cookies, most of the time you can use the SameSite attribute (https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie/SameSite), which mitigates CSRF for users with modern browsers.
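For illustration, a SameSite session cookie might be built like this (cookie name and attribute choices are just one reasonable combination, not a prescription):

```python
# Sketch of a Set-Cookie value with SameSite=Lax: the browser then omits
# the cookie from most cross-site requests, which is the CSRF mitigation.

def session_cookie(name: str, value: str) -> str:
    return f"{name}={value}; Path=/; Secure; HttpOnly; SameSite=Lax"
```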
I think it’s very important to highlight that CORS, like all the security-oriented features added to HTTP and browsers lately, is not 100% reliable, in the sense that just as one has to do server-side validation of inputs, one also has to do server-side authentication / authorization / sanitization of requests that “seem” to come from a browser.
The main use-case of CORS and related headers is to stop purely browser-based attacks, mainly cross-site request forgery and similar, that previously were solved in all kinds of awkward manners (like CSRF tokens, etc.). Thus, outside of a browser, anyone with an HTTP client can forge any request that says it comes from any origin.
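For example, nothing stops a non-browser client from claiming whatever Origin it likes; the header is purely advisory outside the browser. A sketch (the URL is a placeholder and is never actually fetched):

```python
import urllib.request

# Any HTTP client can attach an arbitrary Origin header; only browsers
# treat it as meaningful. Constructing the request does not send it.
req = urllib.request.Request(
    "https://api.example.com/data",  # placeholder URL
    headers={"Origin": "https://totally-legit.example"},
)
```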
I’ve recently configured some CORS stuff and also had to explain it to a few people, so this post tells me maybe there’s an opportunity to write something. When I went through the pain of learning how it works the first time, I mostly just went through specs and MDN docs; I rarely found a blog post that did a good job explaining it.
carlmjohnson is correct about just avoiding it altogether. If you can simply proxy based on an “/api” path at the top level, do that instead of using an “api.xyz.com” subdomain.
No, it’s also to prevent particular cross origin requests from happening, which could trigger unwanted behaviors on other services.
I hate CORS for the reason it stifles “creativity of connecting internet things”. Unfortunately, if you’re targeting a browser, there’s no reasonable way around it. If you want your service to be accessible from everywhere, then yes, set the * origin. If your service requires cookies, then yes, allow-credentials true. Allow all headers, etc.
It’s why all my latest cool shit is not browser client side, but either completely server side or desktop.
If you’re asking this question, that means you haven’t understood the idea behind CORS. :-) As far as I am aware, nothing aside from a browser even gives the CORS headers a second glance. They don’t do anything in and of themselves; they are purely instructions to a browser to say “here’s how these sites can interact”.
“A way to relax the same-origin policy” is indeed correct. The thing to keep in mind though is that the SOP is purely a concept in browser-land. Nothing else cares about it.
Besides avoiding CORS altogether, there is a way to cache/bypass CORS. However that opens up a cache poisoning attack in some cases, and it’s per-endpoint which sucks.
Can someone think through this for me? Why can’t an apex domain (say, example.com) just have an SPF-type record that lists all the subdomains that should be treated as the same origin, e.g. a.example.com, b.example.com?
I think HTTP-related technologies like to be self-contained, so one reason DNS isn’t used much is that everything HTTP-related should be expressed in terms of HTTP-only constructs (i.e. headers, mostly), with the exception of server name resolution.
Another reason why DNS isn’t used is a limitation: with the current CORS approach, the server can receive requests from arbitrary origins, but based on the Origin header it can dynamically choose whether or not to allow each request. With DNS this wouldn’t be possible.
Yet another reason is simplicity: one can deploy a modern (i.e. “complex”) web application to any domain by just changing a single CNAME or A record, leaving everything else unchanged. Thus one could even host such a web application on a static site provider.
Those are very good points, especially around portability. Speaking just for myself though, CORS is such a pain in the ass that if I could add some one-time DNS setup to get rid of it forever, I would in a heartbeat. I don’t know how common this is, but while I’m developing an API I get false-positive CORS errors from the browser all the time. It creates noise in the network tab (manually filtering is also a PITA) and slowness for users.
Any solution is going to have trade-offs of course.
You should also carefully select which paths return this header, and not apply it to the whole domain.
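One way to do that, sketched with a hypothetical path prefix:

```python
# Sketch: emit the permissive header only under paths meant to be public,
# rather than blanketing the whole domain with it.

PUBLIC_PREFIXES = ("/api/public/",)  # assumption: where your public API lives

def cors_for_path(path: str) -> dict:
    if path.startswith(PUBLIC_PREFIXES):
        return {"Access-Control-Allow-Origin": "*"}
    return {}  # private paths get no CORS relaxation at all
```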
In my experience, header-land moves faster than application-land, so it is always a “catch-up” game. The overall security of a web application should not depend only on the availability (or lack) of a particular header, especially because these headers are merely guidelines/recommendations to browsers, and there is no guarantee browsers will honor any of them.
Before CORS, there was just the same-origin policy, introduced by Netscape 2.02 in 1995. CORS gives the server a way to allow requests which would otherwise have been denied by the same-origin policy. If a browser doesn’t support CORS, it will just block requests which violate the same-origin policy, and no security is lost.
But the weird part (to me) is that the browsers don’t block the requests (for requests that do not need to be preflighted), they actually send them to the server, they just don’t give scripts access to the response. That feels like the worst of both worlds.