1. 13

  2. 14
    1. Google is going to warn people about my site being “not secure.”
    2. Something bad could happen to my pages in transit from an HTTP server to the user’s web browser.
    3. It’s not hard to convert to HTTPS and it doesn’t cost a lot.
    1. Sure, both Google Chrome and Firefox (to an extent) will start warning about this.
    2. I’ll come back to this.
    3. It’s not. There’s a bot that will handle it for pretty much all major OS/server combos.

    Let’s look into 2 a bit more. Verizon and Comcast both inject ads into unencrypted traffic. Other companies then piggyback off their tracking headers to serve even more ads. China injected malware into Baidu’s unencrypted JavaScript to DDoS GitHub. Several nations are trying to slurp up all the unencrypted data they can. Yeah, I’m OK with raising the bar slightly to keep all of that from happening. HTTPS isn’t hard. Certbot and Let’s Encrypt have lowered the bar significantly.


    1. 7

      I would disagree with the article, at least in part.

      I do agree that what Google is doing is somewhat dangerous, though that is not directly tied to HTTPS alone.

      I don’t agree that labelling HTTP-only sites as “not secure” is going to destroy the old web. You can still visit those websites; the browsers do not block you from using them.

      Most users probably won’t visit websites outside Facebook, YouTube, and maybe Google search. Having non-HTTPS sites labelled as not secure would benefit these users (and reduce the number of calls I get about “I signed into paypul.to and my money is gone”).

      1. 4

        It’s really easy to write a simple HTTP server. I’ve done it a couple of times for weird languages like Self. You can do it in an afternoon, and you end up with something fast enough and usable enough for simple websites with low traffic. All you need is to be able to ask your OS to open a socket, then read, write, and close it.

        There is no way that I’m going to write an HTTPS server. Which is a shame, because it means that small language projects like Self will need to rely on a bunch of third-party C code and third-party infrastructure (Let’s Encrypt, Caddy, OpenSSL, etc.) to serve a simple website.
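
        To make the open/read/write/close point concrete, here is a hypothetical bare-bones version in Python built on nothing but the OS socket API (the function name and port number are made up for illustration, not from the thread):

        ```python
        import socket

        def serve_once(host="127.0.0.1", port=8037):
            # Ask the OS for a TCP socket, bind it, and wait for one client.
            srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((host, port))
            srv.listen(1)
            conn, _ = srv.accept()
            conn.recv(4096)  # read the request (ignored here for brevity)
            body = b"hello, plain old HTTP\n"
            response = (
                b"HTTP/1.0 200 OK\r\n"
                b"Content-Type: text/plain\r\n"
                b"Content-Length: " + str(len(body)).encode() + b"\r\n"
                b"\r\n" + body
            )
            conn.sendall(response)  # write the response
            conn.close()            # close the connection
            srv.close()
        ```

        That really is the whole dance: open, read, write, close. Everything beyond this is optional for a toy HTTP/1.0 server.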

        1. 8

          Any HTTP server can easily be converted into an HTTPS server by piping it through an SSL proxy. It’s the same protocol going over the pipe; the encryption is just a layer wrapped around it.
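
          As a sketch of that setup with stunnel (the port numbers and certificate path here are assumptions, not from the thread):

          ```ini
          ; Terminate TLS on port 443 and forward the decrypted bytes,
          ; unchanged, to a plain HTTP server on localhost:8080.
          [https]
          accept  = 443
          connect = 127.0.0.1:8080
          cert    = /etc/stunnel/site.pem
          ```

          The HTTP server never learns that TLS exists; it just sees ordinary requests arriving on its socket.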

          1. 1

            Sure, that’s what Let’s Encrypt, Caddy, OpenSSL, etc. provide: a way to turn your simple HTTP server into a public-facing HTTPS server. But the cost is that a small protocol which could be implemented completely in-house for fun now needs a whole bunch of complicated C/Go/etc. code and systems written and hosted by someone else…

            1. 10

              At the risk of being overly reductive: you’re already depending on a bunch of code someone else wrote, unless your HTTP implementation also included the TCP and IP layers. Adding TLS can be thought of as just inserting one more layer into the several that already exist beneath you.

              (A complicated one you might need to configure and that isn’t provided by the OS, I grant you)

              1. 3

                I was reading your post and wondering if the size difference between a standard kernel-space TCP implementation and openssl was negligible or not.

                find linux/net/ipv*/tcp_* -name '*.c' -o -name '*.h' | xargs cat | wc -l
                find openssl/ssl openssl/crypto -name '*.c' -o -name '*.h' | xargs cat | wc -l

                Turns out it’s an 11.3:1 ratio, which is not negligible at all.

                I was actually not expecting a difference that big!

                [edit]: I just re-read my comment; please don’t interpret it as an attack, it isn’t :) You just piqued my curiosity here!

                1. 2

                  On top of that, if we trade performance for code size by dropping optional parts of the spec, we can get a minimal functional TCP stack in an amazingly small amount of code (cf. uIP, lwIP, and the fabulous VPRI parser-based approach in Appendix E of http://www.vpri.org/pdf/tr2007008_steps.pdf).

                  1. 1

                    That’s really interesting! Didn’t interpret it as an attack at all, don’t worry!

                  2. 2

                    Come on now, there’s a big difference between depending on, say, BSD sockets and depending on an SSL proxy like nginx or something.

                    1. 1

                      I’m more familiar with using languages/frameworks with built-in support. If you’re implementing it by a proxy that’s obviously a whole other component to look after.

                      1. 1

                        And what exactly would that difference be?

                        1. 3

                          A separate daemon adds more complexity (and therefore fragility) to the system. BSD sockets are well understood and aren’t something the sysadmin has to manually set up and care for.

              2. 5

                So basically the author is too lazy to make his website secure and thinks others shouldn’t have to either.

                1. 2

                  I very much agree with the sentiment, and I am very much against the push towards HTTPS, in general, but this article is pretty weak. I mean:

                  [HTTP’s] simplicity is what made the web work. It created an explosion of new applications. It may be hard to believe that there was a time when Amazon, Netflix, Facebook, Gmail, Twitter etc didn’t exist. That was because the networking standards prior to the web were complicated and not well documented. The explosion happened because the web is simple. Where earlier protocols were hard to build on, the web is easy.

                  Erm. Before HTTP, networked applications implemented their own protocols, because it was easy. And those protocols were certainly documented. After HTTP we got Apache, mod_php, CGI, FastCGI and Java application servers, because it was no longer generally feasible for programs to implement their own protocols, so you had to invent and put all that crap in front of your program. Luckily, today we see a resurgence of programs that speak HTTP directly to other applications, in languages like Go where speaking HTTP is easy.
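
                  The thread’s example is Go, but the same point can be sketched with any standard library that speaks HTTP natively; here is a hypothetical Python equivalent (the class name and port are made up for illustration):

                  ```python
                  from http.server import BaseHTTPRequestHandler, HTTPServer

                  class Hello(BaseHTTPRequestHandler):
                      # The program speaks HTTP itself: no Apache, mod_php,
                      # or application server sitting in front of it.
                      def do_GET(self):
                          body = b"no app server needed\n"
                          self.send_response(200)
                          self.send_header("Content-Type", "text/plain")
                          self.send_header("Content-Length", str(len(body)))
                          self.end_headers()
                          self.wfile.write(body)

                  def serve_one(port=8038):
                      # Handle a single request, then return (enough for a demo).
                      HTTPServer(("127.0.0.1", port), Hello).handle_request()
                  ```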

                  1. 2

                    Something the author doesn’t seem to understand is that Google is trying to improve security for all users of Chrome. It might be that no one ever gets man-in-the-middled on any of the author’s domains, but that doesn’t mean no one will do it to latimes.com, which is served over unencrypted HTTP as of today. With encrypted HTTP, Chrome can guarantee you’re protected against certain classes of attacks; with unencrypted HTTP, it can’t.

                    My suspicion is that someone at Google has a metric they’re trying to optimize for the proportion of traffic Chrome is delivering to users that they can prove has been encrypted. Unlike many metrics, this actually does seem to be one that’s good for all users. It’s certainly an inconvenience for website owners.

                    With that said, for personal websites, I agree with commenters who don’t think it’s that big of a deal, especially since Cloudflare will do it for you for free.

                    1. 1

                      A while ago I was in charge of a large project to move thousands of sites over to HTTPS. Using automation, Let’s Encrypt, mixed content, TLS terminating load balancers, certificate inventory management, monitoring and auditing: the effort wasn’t trivial. It was also absolutely necessary and this is on the high-end for what has to be done by any entity dealing with certs on the planet.

                      For the single legacy site case that this guy mentions, it is usually a matter of two simple headers:

                      • Strict-Transport-Security (HSTS)
                      • Content-Security-Policy: upgrade-insecure-requests

                      That’s it. Done. Legacy site converted.
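
                      For instance, in nginx those two headers could look like the following (a sketch, assuming TLS is already terminated here; the max-age value is an assumption):

                      ```nginx
                      # Tell browsers to use HTTPS on all future visits (HSTS)...
                      add_header Strict-Transport-Security "max-age=31536000" always;
                      # ...and to rewrite the page's own http:// subresources to https://.
                      add_header Content-Security-Policy "upgrade-insecure-requests" always;
                      ```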

                      Until HTTP gets the click-through treatment and browsers default to HTTPS first, every HTTP page load is a potential invitation to inject anything into that connection, including 0-days, cryptocurrency-mining malware, and tracking/advertising code.

                      1. 1

                        What about static sites, maybe hosted on GitHub Pages, which doesn’t support HTTPS on root domains? With all its power, Google should not say “all sites without HTTPS are insecure”; maybe it should only say that about sites which are not static and have no HTTPS installed.

                        Why am I forced to move my hosting away from GitHub, or why force GitHub to support HTTPS for root domains? Why is Google giving us headaches and extra work like Internet Explorer did years ago?

                        Maybe this is a minor issue, but lately I’ve found myself fixing Chrome bugs alongside IE11 bugs… This is not the direction we should expect from such a powerful player.