1. 41
  1.  

  2. 11

    “a lab full of Windows XP machines limited to IE8, for example. Or on even older machines, running Windows 95 or other operating systems of that era.”

    One of the ways we countered that problem was forcing folks to upgrade by dropping support for older stuff. Catering to obsolete systems usually deepens the dependency on them, and that maintenance takes time that could instead go toward improving the software, creating more content, and so on. So I’d say we should find more ways to bring these users up to date with FOSS, or even build something specifically for them.

    1. 4

      It would appear that they are operating under severe resource constraints, and at a guess the cost of moving to something more modern could be more than the average person in that country earns in a year.

      With that said, moving to FOSS would be a huge benefit, and there are freely available, maintained operating systems that will run on equipment rated obsolete by our standards. I guess the issue in that case is the bandwidth and time it would take to download them, and having someone around who can install them and train people in their use.

    2. 11

      I don’t claim to understand the nitty-gritty of HTTPS, but I feel like it should be possible to cache HTTPS requests by having clients recognize the caching server as a certificate authority. My employer has managed to snoop on my HTTPS traffic somehow, and I assume it has to do with the root CA they installed on my workstation that no one outside the organization has ever heard of. I’m guessing the proxy generates a phony certificate for my target domain and signs it itself, while using publicly acknowledged CAs on the other side. I can’t say I enjoy being MitM’d by my employer, but if there’s a use case that justifies it, the situation described in the article seems to be it.

      Is this not actually what’s happening? Is it impossible to do given the resource constraints of rural Ugandan computing?

      1. 11

        What you’ve described is right - it’s not uncommon for companies to snoop on HTTPS traffic by installing their own CA on employees’ machines. I don’t think it’s a good practice, but it does work in cases where companies insist on doing so, and there’s no reason it shouldn’t work for a caching server.
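        A toy model of the trust decision being described, assuming nothing beyond what’s in the thread (the CA names and domain are illustrative, and certificates are modeled as plain dicts so only the trust logic is shown, not real certificate generation):

```python
# Toy model of TLS interception by a caching proxy: the proxy mints a
# fresh certificate for each intercepted domain and signs it with its
# own CA; clients accept it only because that CA sits in their trust
# store. All names here are illustrative, not from any real deployment.

def proxy_cert_for(domain: str, proxy_ca: str) -> dict:
    """The proxy generates a leaf certificate for the target domain
    and signs it with its own CA (a 'phony' cert, as described above)."""
    return {"subject": domain, "issuer": proxy_ca}

def client_accepts(cert: dict, trusted_cas: set) -> bool:
    """A client accepts a certificate only if it chains to a CA in its
    local trust store."""
    return cert["issuer"] in trusted_cas

# A workstation with the proxy's CA installed accepts the forged cert;
# an unmodified machine rejects it.
cert = proxy_cert_for("en.wikipedia.org", "Local Cache CA")
print(client_accepts(cert, {"Local Cache CA", "DigiCert"}))  # True
print(client_accepts(cert, {"DigiCert"}))                    # False
```

        In practice the minting step would be done with real key material (e.g. via the openssl CLI), and the proxy would fetch the genuine page over HTTPS on the other side, exactly as the employer-proxy scenario above describes.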

      2. 6

        Nice article. Also interesting for reasoning about DNS-over-HTTPS.

        As far as I can say from my experience in Kenya, it should also be noted that Africans have a very different way of perceiving time. And security. And… everything! :-D

        How would I address this issue?

        I think I would basically create a reverse proxy serving over HTTP those sites that would benefit most from caching (e.g. Wikipedia), probably under a custom domain such as wikipedia.cached.local so that people could not be fooled into taking the proxied site for the original. Rewriting URIs in hypertext shouldn’t be an issue, but it could be harder for Ajax-heavy pages. I would probably also create a control page so that a page could be prefetched or updated. With a custom protocol and a server in Europe, one could also prefetch several pieces of content at once and send them back together, maximizing bandwidth usage.
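        A minimal sketch of such a caching reverse proxy, assuming a single upstream site; the upstream URL, port, and in-memory cache are my own choices for illustration (a real deployment would persist the cache, expire entries, and rewrite URIs):

```python
# Minimal sketch of an HTTP reverse proxy that caches one upstream
# site. UPSTREAM, the port, and the in-memory dict are assumptions;
# a real deployment would persist the cache and expire stale entries.
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "https://en.wikipedia.org"
CACHE = {}  # path -> response body

def fetch_cached(path, cache, fetch):
    """Return the body for path, hitting the upstream at most once."""
    if path not in cache:
        cache[path] = fetch(path)
    return cache[path]

class CachingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        def fetch(p):  # one HTTPS round-trip to the real site
            with urllib.request.urlopen(UPSTREAM + p) as r:
                return r.read()
        body = fetch_cached(self.path, CACHE, fetch)
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve the LAN over plain HTTP (e.g. as wikipedia.cached.local):
# HTTPServer(("", 8080), CachingProxy).serve_forever()
```

        The point is that the cache lives on the LAN side and is shared by every machine, while only the first request for a page crosses the slow, expensive link.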

        Obviously it wouldn’t be safe, but it would be visibly unsafe, and limited to those websites that can take advantage of such caches without creating serious threats.

        As for service workers, I don’t think they would improve the user experience at all, since they are local to the browser, and the browser has a cache anyway. The problem is sharing that cache between different machines.

        1. 2

          Local reverse proxy is a clever idea, and a proxy that clients are explicitly set up to trust, à la corporate middleboxes (see Lanny’s comment), seems like it can work in some environments too. Sympathetic to the problem of existing solutions no longer working, sort of surprised the original blog post wasn’t more about how to improve things now.

          1. 5

            The point of the machinery I described was to make users explicitly choose between security and access time.

            You can make everything smoother (and easier to implement) with a local CA, or by installing proper fake certificates on the clients plus a transparent proxy, but then people cannot easily opt out.
            Worse: they might be trusting the wrong people without any benefit, as with sensitive pages that cannot be cached (shopping carts, online banking and the like…)

            That’s why using the reverse proxy should be opt-in, not the default, and trivial to opt out of: there’s no need for a proxy if you want to edit a Wikipedia page!

            “Sympathetic to the problem of existing solutions no longer working, sort of surprised the original blog post wasn’t more about how to improve things now.”

            Eric Meyer is a legend of HTML, CSS and Web accessibility. A legend, beyond any doubt.
            Before HTML5 I used to read his website daily. He teached me a lot.

            But he is a client-side guy.
            I think his reference to service workers is an attempt to improve things now.

            1. 2

              Nit: the past tense of teach is taught, not teached.

              1. 1

                I’m sorry… I can’t edit it anymore, but thanks!

        2. 1

          Is there any particular reason that HTTPS websites can’t be cached? You would need a specific HTTPS-website caching protocol, I guess, but imagine this process:

          • I type https://en.wikipedia.org/wiki/New_Zealand into my URL bar and press enter
          • My browser asks the cache for a recent copy of /wiki/New_Zealand on en.wikipedia.org
          • The cache responds with a recent copy, signed with en.wikipedia.org’s private key.

          Sure, you don’t get privacy this way, but you at least get authenticity. Websites could mark specific pages as ‘privacy-irrelevant’ or something, to allow them to be cached. My browser would need to go to Wikipedia at least once, to see whether the page I am asking for is privacy-irrelevant. But after that, with an ‘as long as you have this particular cookie signifying you are logged out, everything under /wiki is privacy-irrelevant’ record, this would be fine, right?
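          The client-side check in that scheme might look something like this sketch. Note the signature here is faked with a plain SHA-256 digest so the flow can run stand-alone; a real scheme would verify a public-key signature made with en.wikipedia.org’s private key, which the Python standard library doesn’t provide:

```python
# Sketch of the "authenticity without privacy" check described above.
# fake_verify stands in for real public-key signature verification
# (e.g. RSA or Ed25519 against the origin's public key); a bare
# SHA-256 digest is used only so the example is self-contained.
import hashlib

def accept_cached(body: bytes, signature: bytes, verify) -> bytes:
    """Accept a page from an untrusted cache only if the origin's
    signature over the body verifies."""
    if not verify(body, signature):
        raise ValueError("cached copy failed origin signature check")
    return body

def fake_verify(body, signature):
    # Stand-in: in reality, verification against the site's public key.
    return hashlib.sha256(body).digest() == signature

page = b"<h1>New Zealand</h1>"
good_sig = hashlib.sha256(page).digest()
assert accept_cached(page, good_sig, fake_verify) == page  # genuine copy

# A copy the cache has tampered with is rejected:
try:
    accept_cached(b"<h1>Old Zealand</h1>", good_sig, fake_verify)
except ValueError:
    print("tampered cache entry rejected")
```

          The cache can serve anyone without a private channel, because a forged or modified page can’t carry a valid origin signature.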