1. 9

I’ve been creating an end-to-end encrypted chat website with an open-source frontend, but weighing load balancing against trust is a bit of an issue. Right now I’m hesitant to trust CDNs. Code running on websites typically changes as soon as someone pushes a new version to the server. Most of the time that’s the website owner, but:

  1. What if the servers are compromised?
  2. What if the CDN is compromised?
  3. What if one of the developer’s machines is compromised?

So I thought: what if website releases were pushed to a Git repo, and a browser extension verified the served files against a hash or something? I don’t know of any code signing that works with website resources. Has anyone else done anything with this?

Feel free to flag this to oblivion if it’s a stupid question.

  1. 9

    Part of the solution could be Subresource Integrity. You can include a content hash in each <script> or <link> tag, which the browser checks.

    That only works if the <script> tag itself hasn’t been tampered with though.
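
    For example, a tag with integrity metadata might look like the sketch below (hypothetical file name; the hash is a placeholder you would generate yourself, e.g. via: openssl dgst -sha384 -binary main.js | openssl base64 -A):

        <script src="main.js"
                integrity="sha384-[base64-encoded SHA-384 of main.js]"
                crossorigin="anonymous"></script>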

    (edit) Another thought: since this is an app, the HTML probably doesn’t do much besides load the JS. Maybe all the HTML you have is <script src="main.js" integrity=... >. If you wrap that up in a data URL and put it in the git repo, then if a user starts from the git repo and clicks the URL, they know they’re getting the right HTML, because it’s actually embedded in the link (assuming they trust the contents of the git repo).
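
    A minimal sketch of such a data URL (hypothetical origin, placeholder hash; note the src has to be absolute, since a data URL gives the page no base to resolve against):

        data:text/html,<script src="https://chat.example/main.js" integrity="sha384-[hash]" crossorigin="anonymous"></script>

    One caveat: current browsers block top-level navigation to data: URLs, so the user would probably need to open it through an extension or a saved file rather than a plain click.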

    1. 1

      I did not know about that standard, thx.

      So if this were to be used, that means the index.html, which would ideally come directly from the site itself, needs to be trusted. I still feel like something needs to sit outside the site in order to verify the site’s contents. Maybe the index.html is signed by the owner with the signature appended, but that still requires the public key to be held somewhere outside the retrieved index.html for validation.

      1. 5

        Hi, co-author of the SRI spec here. I have toyed with an approach that has a bootstrap-index.html page as a standalone, downloadable, and widely hostable page. The page would load all of its assets through a metadata file that can be centrally hosted and versioned. Signatures you would have to roll yourself (Web Crypto? ugh). But the metadata could carry integrity hashes, and you could enforce their presence for all subresources through a ServiceWorker.
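
        A rough sketch of what that hand-rolled signature check could look like with Web Crypto (the key handling, algorithm choice, and names here are my assumptions, not part of the spec):

            // Verify a detached signature over the fetched metadata file,
            // using a public key pinned out-of-band (e.g. in a git repo).
            const PINNED_JWK = { /* pinned ECDSA P-256 public key */ };

            async function verifyMetadata(metadataBytes, signatureBytes) {
              const key = await crypto.subtle.importKey(
                'jwk', PINNED_JWK,
                { name: 'ECDSA', namedCurve: 'P-256' },
                false, ['verify']);
              // Resolves to true only if the signature matches the bytes.
              return crypto.subtle.verify(
                { name: 'ECDSA', hash: 'SHA-256' },
                key, signatureBytes, metadataBytes);
            }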

        Even though SRI is only supported in HTML syntax for scripts and styles, every fetch() can bear integrity metadata. This means that you can add it on the fly through a ServiceWorker and have the browser verify it for you. This is a bit brittle, and you have to include an escape hatch for when ServiceWorkers are disabled or the website is force-reloaded (which bypasses the ServiceWorker), but it is absolutely doable. So doable, in fact, that I have started to implement it three times (though all incomplete 😬).
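
        A minimal sketch of that ServiceWorker (paths and hashes are placeholders; a real version would read them from the versioned metadata file above):

            // sw.js
            const INTEGRITY = {
              '/main.js':   'sha384-[base64 hash]',
              '/style.css': 'sha384-[base64 hash]',
            };

            self.addEventListener('fetch', (event) => {
              const expected = INTEGRITY[new URL(event.request.url).pathname];
              if (!expected) return; // escape hatch: unknown resources fall through
              // Re-issue the request with integrity metadata; the browser
              // itself rejects the response if the hash does not match.
              event.respondWith(fetch(event.request.url, { integrity: expected }));
            });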

        This might be some interesting reading nonetheless: https://github.com/mozfreddyb/serviceworker-sri and https://github.com/freddyb/sri-worker and https://github.com/freddyb/sri-boot.

        I also created a tiny ServiceWorker-security challenge in 2017, where I provided a website that uses ServiceWorkers for a similar security feature, to find out exactly how brittle the approach is. It was insightful in itself, but my experiment was a bit ill-defined and is probably worth running again. The results on using ServiceWorkers for security are on my blog.

        1. 1

          That’s what TLS is supposed to solve. Of course, nowadays it has become much easier to obtain a signed certificate for any domain (e.g. by CA compromise), so this isn’t the foolproof solution it was intended to be.

          1. 1

            There is also certificate/key pinning, but HPKP died: https://en.wikipedia.org/wiki/HTTP_Public_Key_Pinning

            In the end, you have a technology mismatch here. E2E security essentially relies on the endpoints being secure and on protecting the connection between them. Any kind of software update jeopardizes the secure endpoint.

      2. 3

        The skeptic’s answer:

        The attack surface of browsers is almost incomprehensibly large for a single person already, and it is still actively growing. Most of the security put into browsers is hacks, or evil hacks.

        The easiest answer is TLS 1.3 for everything, plus a Content Security Policy (CSP) that denies all JavaScript from ever running. This of course is a non-starter for most web apps (and browser vendors don’t help any here, requiring JS for so many things these days), so then you get stuck with JS.

        What you are proposing is even more hacks, to somehow make it “secure”, except it probably never will be; the browser makers seem to only barely care. The status quo of generally terrible browser security is likely here to stay.

        The next “best” thing is: ALWAYS deliver all of the code yourself, set up your CSP such that all of your code comes from servers under your control, and set up Subresource Integrity, like @dpercy mentions. Then get very, very familiar with all the various security headers, make use of every one you can, and monitor the new ones that will show up tomorrow. In my experience it’s a complicated mess.
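
        As a rough illustration (the values are placeholders to tune per app, not a recommendation), that header soup might start with:

            Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none'; base-uri 'none'; frame-ancestors 'none'
            Strict-Transport-Security: max-age=63072000; includeSubDomains
            X-Content-Type-Options: nosniff
            Referrer-Policy: no-referrer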

        You will of course not be able to do that, because you will need some 3rd-party JS to run for some feature or other, so you are stuck doing iframe hacks and other stuff. Like I said, it’s generally a mess. Give up, accept that browsers will never be provably secure, let alone usefully secure, and assume everyone speaking HTTP is out to get you (i.e. make your server/REST API as secure as you possibly can; bare minimum: validate all inputs). Client-side web security is generally a disaster.

        If your server is compromised, it doesn’t matter what you do; you have already lost.

        If your CDN is compromised, you are broken, but hopefully you avoided the CDN altogether if possible; Subresource Integrity helps here.

        But really the CDN isn’t the worry point; it’s the 3rd-party JS you include (or that the browser includes for you, via extensions, etc.) that will ruin your day, as you have essentially zero control over what’s delivered by 3rd parties or the user’s browser. Avoid 3rd-party JS like the plague if possible, except nobody bothers, including banks and other places where security is supposed to matter. Also, Subresource Integrity won’t help you here, because the 3rd parties likely won’t bother with it and will keep pushing code changes to a common endpoint ALL the time without ever telling you, so your website will constantly fail to load (because the hashes won’t match).

        If a developer’s machine is compromised, you have likely already lost, but there are some things you can do, like requiring all commits to production to be merged by another person (so no single person can push directly to the production repo), lots of security scanning/linting, etc.

        1. 3

          I think you’re looking for Signed HTTP Exchanges: https://developers.google.com/web/updates/2018/11/signed-exchanges

          But it’s controversial; Mozilla had some reservations. I haven’t dug into the details of that discussion though.

          1. 4

            Signed HTTP Exchanges (SHE) is much, much more. My understanding is that this is an intent to bake something like AMP into web standards. What I find most worrisome here is that it allows one origin to act on behalf of another origin without the browser ever checking the actual source. Essentially, this means amp.foo.example could, for all intents and purposes of web security and the same-origin policy, speak for my-totally-other-site.example. This also removes the confidentiality you could have with the origin server and inserts a middleman, which you wouldn’t have if you talked to the origin server directly. Mozilla openly considers Signed HTTP Exchanges harmful.

            That being said, a solution for bundling that supports integrity and versioned assets would be very much welcome!

          2. 2

            Code signing only works when the 3rd-party signing authority can be trusted. On the web, there’s no central authority like that which everyone can trust, and I think that’s the main reason code signing for the web doesn’t exist (yet). For applications running on Apple’s or Microsoft’s platforms, the solution is somewhat trivial, because we all know who the central trust authority is, and if they screw something up they are held accountable. But without that, aren’t you just moving the problem? Now I have to ensure that not only the site I’m viewing can be trusted, but also the arbitrary other site that verifies its content. What’s to stop someone from just injecting their own 3rd-party trust authority into the mix?

            This is a good idea and I think it’s worth exploring, by the way. I just don’t know how you’re going to solve for the fact that the web is decentralized.

            1. 1

              These are some of the problems that web packaging aims to solve: https://github.com/WICG/webpackage

              1. 1

                Subresource Integrity, CSP, and the others are really good answers for real-world production systems today. If you’d like to play with future possibilities, there’s IPFS, where the URL is the hash of the resource, which allows validation on a completely separate layer. It won’t work without relying on a third party or an extension though.
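
                For illustration (hypothetical content ID; a real CID is derived from the bytes themselves, so anyone can re-verify what a gateway served):

                    ipfs://[CID]/index.html
                    https://ipfs.io/ipfs/[CID]/index.html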