1. 15

  2. 14

    Mixed thoughts about hosting this on github. The “future of the web” deserves an independent site, no?

    1. 6

      Kinda sobering that even technologists tend to cling to walled gardens as well.

      1. 10

        I tweeted something similar about IRC and Slack the other day and got hundreds of angry replies :(

        1. 9

          Unfortunately, ‘hacker’ culture (cough) seems to be much more about conformity than iconoclasm nowadays: use a ‘real’ editor, be on the right service, release lots of open source, like the right languages, always be positive.

          1. 10

            “You should use a real editor though; that’s non-negotiable…”

            (Of course, I’m referring to ed.)

            1. 3

              Real hackers use echo "printf(\"Hello world!\")" >> myprogram.c

              1. 3

                editing is achieved with a combination of cat and sed

                1. 1

                  This is not “hacker” news… ;)

            2. 6

              Sad state of the global community; if it ain’t on github, using slack, supported via twitter… Got the same shit from colleagues when proposing IRC over slack, since we’re hitting their 10k limit and can’t justify their pricing.

              1. 2

                I just had to look up slack. I’ve seen it mentioned, but didn’t know what it was. I guess hipchat isn’t hip anymore?

                1. 2

                  HipChat is too expensive. There would never have been an opening for Slack if their pricing were more reasonable, but it isn’t, so there is.

                  1. 1

                    Isn’t slack 4 times more expensive than hipchat?!

                    For both, the lowest-cost paid tier is:

                    • HipChat: $2/mo per user
                    • Slack: $8/mo per user ($6.67/mo per user if paid yearly)

                    Both have a free tier. Slack limits the number of “integrations” with their free tier, and hipchat drops the “audio/video chat” feature on free tier.

                2. 0

                  If you’re using twitter to complain about people using private closed chat services then you deserve everything you get.

                  1. 6

                    I (and some friends) actually developed a complete, standards compliant microblogging service. Which no one then used.

                    I complain about having to use Twitter on Twitter a lot too.

                    1. 1

                      I (and some friends) actually developed a complete, standards compliant microblogging service. Which no one then used.

                      And I set up an IRC server at my job. Which no one then used.

                      1. 1

                        Please try again, this time give out limited invites…

                  2. 7

                    Is this really a walled garden? How is the content being restricted?

                    1. 8

                      Walled garden is perhaps the wrong term, but it’s still “controlled by private entity”. Imagine the uproar if it were http2.microsoft.com. Half the comments would be “microsoft is evil incarnate”, and half would be “no corporation should control this”. Github apparently passes the evil incarnate test, but the second half still applies, no?

                      1. 4

                        For one, a GitHub owned domain is now the official domain of the most important protocol on the planet. If GitHub goes away or changes by doing something like removing this feature, alllll those links forever need to be changed. And why privilege GitHub over all the other organizations and people involved with HTTP2?

                        1. 5

                          They could have used a custom domain with github, and transferred that somewhere if they needed to. I kind of feel like this would have made the most sense. Either way, that was just a decision and doesn’t really reflect upon Github as being a walled garden (imo).

                          And why privilege GitHub over all the other organizations and people involved with HTTP2?

                          I don’t know. I didn’t look at it from a privilege point of view, but from a usefulness point of view. At least on Github it becomes very easy for people to propose changes and host non-spec related things.

                          Either way, that doesn’t seem like a “walled garden” issue. The domain name issue also doesn’t, but it does still seem to me like a reasonable issue to be concerned with.

                          1. 5

                            Yes, a custom domain would have been much, much better, even if it was hosted on GitHub behind such.

                        2. 2

                          I’m being grumpy and overdramatic here. (I also take issue with the prominent Twitter iframe on the page.)

                          Just seems a bit odd to have this hosted on a third-party service. Maybe they will take PRs for the protocol! (I kid.)

                      2. 3

                        I thought cURL’s 17th birthday article[1] gave an interesting perspective:

                        We’ve hosted parts of our project on servers run by the various companies I’ve worked for and we’ve been on and off various free services. Things come and go. Virtually nothing stays the same so we better just move with the rest of the world. These days we’re on github a lot. Who knows how long that will last…

                        The URL churn that comes with that makes me sad though.

                        [1] http://daniel.haxx.se/blog/2015/03/20/curl-17-years-old-today/

                        1. 2

                          Just keep your own domain. Curl has been curl.haxx.se for longer than I can remember.

                          1. 1

                            Which is cool for the static content, but as soon as you start caring about repo/wiki URLs (they link to one from the front page[1]), you’re much more tied to the service.

                            [1] https://github.com/http2/http2-spec/wiki/Implementations

                      3. 5

                        This article by Poul-Henning Kamp offers good insight into the problems with HTTP/2.0.

                        1. 10

                          As much as I love a good PHK brickbat, I keep seeing people reference this article and I don’t know why.

                          He doesn’t list much in the way of substantive complaints about anything but the IETF and the political economy of the web. Here are literally all of his technical complaints:

                          HTTP/2.0 is not a technical masterpiece. It has layering violations, inconsistencies, needless complexity, bad compromises, misses a lot of ripe opportunities, etc.

                          HTTP/2.0 also does not improve your privacy. … The good news is that HTTP/2.0 probably does not reduce your privacy either.

                          You may perceive webpages as loading faster with HTTP/2.0, but probably only if the content provider has a global network of servers.

                          Nobody has demonstrated a HTTP/2.0 implementation that approached contemporary wire speeds. Faster? Not really.

                          HTTP/2.0 will require a lot more computing power than HTTP/1.1 and thus cause increased CO2 pollution adding to climate change.

                          To summarize:

                          • It’s net-neutral on privacy since literally all of the major players in web politics don’t give a lonely shit about privacy.
                          • PHK finds it aesthetically displeasing.
                          • He thinks it’ll require more CPU but be slower.

                          I think his points about the market representation of privacy are pretty apt, but I’m underwhelmed by his technical critique. HTTP/2 is a large—if incremental—improvement over HTTP/1.1 and I look forward to its adoption.

                          1. 4

                            Well, of course, in the end, the political economy of the web matters, and technical complaints don’t, except insofar as they make people’s lives better or worse. In that context, increasing the advantages of content providers with a global network of servers over regular people seems like a pretty enormous problem, to me.

                            1. 3

                              You’re assuming PHK’s correct about his assertion, but he’s provided neither evidence nor argument for the case.

                              What about HTTP/2 lends a unique advantage to parties with “global network[s] of servers”? What makes HTTP/1.1 a better fit for the Mom & Pop Mainstreeters we’re worried about?

                              1. 2

                                The HTTP Alternative Services spec that is part of http2 allows a server to specify that a request can be served from another server:

                                For example, an origin:

                                ("http", "www.example.com", "80")

                                might declare that its resources are also accessible at the alternative service:

                                ("h2", "new.example.com", "81")

                                By their nature, alternative services are explicitly at the granularity of an origin; i.e., they cannot be selectively applied to resources within an origin.

                                Alternative services do not replace or change the origin for any given resource; in general, they are not visible to the software “above” the access mechanism. The alternative service is essentially alternative routing information that can also be used to reach the origin in the same way that DNS CNAME or SRV records define routing information at the name resolution level. Each origin maps to a set of these routes – the default route is derived from origin itself and the other routes are introduced based on alternative-protocol information.

                                This is the primary function of a CDN, and if I understand what I’ve read about http2 (I’m looking into it for the first time today), you would be able to get this info in the round trip of a single packet (depending on TCP Fast Open and TLS 1.3), maybe two at most. This would save a lot of hassling around with DNS for CDNs, but is worthless if you have a single server or a non-geographically-distributed set of servers.
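                                As a sketch of the mechanism being described (the header name and example values come from the Alt-Svc spec quoted above; this simplified parser is my own illustration, not a complete implementation):

```python
# Minimal, illustrative parser for an Alt-Svc style header value.
# The real syntax (RFC 7838) also allows parameters like ma= and
# persist=; this sketch only handles the simple protocol="host:port" form.
def parse_alt_svc(value):
    services = []
    for entry in value.split(","):
        entry = entry.strip()
        if entry == "clear" or "=" not in entry:
            continue
        proto, _, hostport = entry.partition("=")
        host, _, port = hostport.strip('"').partition(":")
        services.append((proto, host, int(port)))
    return services

# The origin above advertising its h2 alternative:
print(parse_alt_svc('h2="new.example.com:81"'))
# [('h2', 'new.example.com', 81)]
```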

                                HTTP/1.1 as a text protocol is presumably easier for people with poor tooling.

                                1. 5

                                  HTTP/2 does allow servers to move some of the equivalent logic of e.g. geographic split DNS horizons into HTTP. But that doesn’t convey a unique advantage to the owners of a CDN. Most of the folks who benefit from CDNs are customers of a CDN. If my mom-n-pop bird silhouette appliqué store needs a CDN I can take my credit card over to fastly.com and it’s only $0.12 for the first TB of data or $0.0075 per 10K requests.

                                  Folks bring up the text vs. binary issue quite a bit, but I’m always confused. Who are the people with poor tooling that we’re talking about here? Do they not have browsers? Do they not have curl? Is this just nostalgia for the days when you could type a web request into telnet? Or do people honestly think that HTTP/2 is somehow harder to implement than HTTP 1’s chunked encoding?

                                  What I’m not seeing mentioned much is that HTTP/2 provides a huge benefit for websites that haven’t had the benefit of the last decade of web performance engineering. For folks who don’t have a huge investment in asset bundling, asset domain sharding (e.g. assets1.example.com to get around browsers' request-per-host limits), resource inlining (e.g. defining images as base64-encoded CSS attributes), image sprites, special cookie-free-domains, etc. etc. etc. HTTP/2 will provide a massive performance benefit.
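                                  As one concrete example of those workarounds: resource inlining embeds a small asset directly in the stylesheet as a base64 data URI, trading an extra request for a larger document. A hypothetical sketch (the file contents and class name are made up):

```python
import base64

# Hypothetical HTTP/1.1-era inlining: embed a tiny image as a base64
# data URI inside a CSS rule so the browser needn't make a second request.
png_bytes = b"\x89PNG\r\n\x1a\n"  # stand-in for real image data
encoded = base64.b64encode(png_bytes).decode("ascii")
css_rule = '.logo { background: url("data:image/png;base64,%s"); }' % encoded
print(css_rule)
```

                                  With HTTP/2’s multiplexing, the extra request this avoids is far cheaper, so the trick largely loses its point.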

                                  1. 3

                                    How will curl’s -v and --trace work? I rely on these all the time. What happens when I want to see exactly what’s happening at the protocol level? I won’t be able to read the raw binary. Presumably curl will try to give me some human-readable representation, but what happens when I’m trying to debug a problem caused by a web server and curl decides that its response isn’t valid HTTP/2?

                                    1. 1

                                      I found it advantageous on Thursday night to be able to strace the PHP interpreter and read the fragments of HTTP requests going in and out of it.

                                      Most of the folks who benefit from CDNs are customers of a CDN.

                                      You can always postulate that centralization doesn’t matter by positing that buying a service from a centralized provider is no different from doing it yourself. It’s never true.

                                      Or do people honestly think that HTTP/2 is somehow harder to implement than HTTP 1’s chunked encoding?

                                      Is that a joke or are you just hoping people don’t know what you’re talking about? I’m trying to assume good faith here, but you’re making it really difficult. Here’s an implementation of HTTP 1’s chunked encoding, which is complete except for trailers:

                                      class Chunk:
                                          def __init__(self, fd):
                                              self.fd = fd

                                          def write(self, bytes):
                                              if bytes:
                                                  self.fd.write('%x\r\n%s\r\n' % (len(bytes), bytes))

                                          def close(self):
                                              self.fd.write('0\r\n\r\n')
                                      

                                      It might even be correct.
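                                      For comparison, a sketch of the matching decoder (my own illustration, ignoring trailers and chunk extensions):

```python
import io

# Read a hex chunk-size line, then that many bytes, until a
# zero-length chunk; skip the CRLF that terminates each chunk.
def dechunk(fd):
    body = b""
    while True:
        size = int(fd.readline().strip(), 16)
        if size == 0:
            break
        body += fd.read(size)
        fd.read(2)  # trailing CRLF after the chunk data
    return body

print(dechunk(io.BytesIO(b"5\r\nHello\r\n6\r\n world\r\n0\r\n\r\n")))
# b'Hello world'
```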

                                      What I’m not seeing mentioned much is that HTTP/2 provides a huge benefit for websites that haven’t had the benefit of the last decade of web performance engineering.

                                      I hadn’t heard that argument before. It might be a reasonable one. Still, to substantiate that you’re arguing in good faith, I’d like you to include a full HTTP/2 implementation in your reply comment.

                            2. 1

                              Hmm, don’t have an account there…

                              1. 5

                                The version at ACM Queue should be open access: http://queue.acm.org/detail.cfm?id=2716278

                                1. 1

                                  Thanks!

                                2. 1

                                  Sorry, I should have posted the public link.