1. 6

    Very interesting! Did anyone here run rqlite in production and have some insights to share?

    1. 4

      Ditto. Just started playing around with sqlite and they claim to be able to handle “400K to 500K HTTP requests per day, about 15-20% of which are dynamic pages touching the database.”

      edit: Great slides!

      1. 3

        I clicked through to ask much the same question. Maybe with a slightly finer point on it:

        Since it is not a drop-in replacement for sqlite, how does the level of migration effort stack up for moving to this from sqlite, as opposed to moving to some other RDBMS that natively supports replication? With the natural follow-on: how does the performance stack up versus those other RDBMSes?

      1. 15

        Setting up a PTR (or rDNS) record on AWS is only achievable via a request ticket and requires several exchanges. In comparison, on UpCloud (our current cloud provider) this can be done directly from the dashboard.

        A lot of people use EC2 VMs as “stable” servers, and that’s fine, but my theory is that they were not designed as such in the first place. I mean, it’s in the name: Elastic Compute Cloud. For an elastic server, rDNS is typically not a hard requirement, nor is a “clean” IP address. That may explain why there isn’t a simple and easy UI to change rDNS records.

        Did you consider hosting your own MTAs in a datacenter? It may be quite expensive (maybe around 1-2k€/mo in Paris for a half-rack and a /28 of addresses), but you get your own long-term IP address blocks and can make sure your IPs are always clean.

        1. 4

          1-2k per month is a bit too expensive for us at the moment, but we might need that in the future. Which data center do you recommend for this option?

          1. 4

            I have experience with only one DC in Paris, “Zayo Poissonnière”, located in the 2nd arrondissement. The security is good, and the location is super convenient (which is important if you have employees based in Paris). I guess other datacenters located in the suburbs are more affordable, but much less convenient.

            1. 5

              You can always buy this as a managed service from a company.

              In a previous job, we used to have racks in 2 Equinix datacenters in Paris, operated by another company. We had access to the servers (even the management interfaces), but we had to go through tickets for networking changes (because we were connected to their network infrastructure to avoid running our own), as well as for all physical maintenance like changing disks or racking new servers. They took care of our IPs too (we had a /26), so all the BGP, etc.

              I found that this solution was the perfect mix between not using a cloud provider, and still not running everything ourselves. If you want to go self hosted to that point, I think this is a really neat approach.

              1. 2

                Agree, renting a bunch of servers can be a very nice solution. You get exactly the specs you want, you get real hardware (and you can get an actual physical private network for them), and you still never have to go to a datacenter.

                1. 1

                  Thanks for the advice!

                2. 2

                  Thanks, the location is perfect! Just bookmarked their website for future use.

            1. 3

              In my previous job, we made the mistake of trying to rewrite our huge legacy app, which served customers day after day. Every day we were flooded with bug-related tickets, so we decided it was time to rewrite from a sane foundation. We were naive, because it ended just as the post describes: two apps to maintain and deploy.

              Eventually, we overcame our fear of the legacy code and we took the bull by the horns, by dealing with that legacy code! It worked better. Much better.

              We used that technique at the API level. The legacy app was a (mostly) SSR web app. We implemented new features in an API-only manner in a special new namespace for endpoints: /api/new/.../....
              The UI was client-side-rendered, but the final user couldn’t tell the difference between legacy and new pages!

              When we had trouble understanding a feature in the original code, we simply wrapped our new API around it, so we could buy some time to understand and re-implement or clean that part, if needed. Again, in a fully transparent manner for the final user (except that the app was faster or less buggy after the clean/rewrite of a feature).

              1. 8

                Tech tutorials on Medium are the worst. I don’t have an account so I always get the paywall, and it’s annoying enough to have to open an incognito window or another browser that I usually just keep looking for another answer.

                I get why non-tech writers would use something like Medium, but I wish more developers who write would just set up their own site and use Netlify. I think writing in markdown is a much more pleasant experience, too.

                1. 5

                  The lack of proper syntax highlighting is a real deal-breaker for me as an author. You are left with screen grabs of your editor window or “HOSTED WITH HEART BY GITHUB” gist embeds everywhere, which are terribly painful to work with.

                  1. 2

                    Agreed. I have no idea why anyone uses it for any writing that involves code.

                    1. 1

                      Because it’s by far the easiest option. I don’t have to fart around with anything.

                      1. 1

                        What about dev.to? Just as easy, no paywall.

                  2. 1

                    I don’t have an account so I always get the paywall, and it’s annoying enough to have to open an incognito window or another browser that I usually just keep looking for another answer.

                    You might like the Cookie AutoDelete extension.

                    1. 1

                      Me too, I have to open medium links in a private tab. Not only to avoid the paywall but also to escape from the Google Accounts 3rd party cookie, which is impossible to block on Firefox for iOS.

                      Now, on my phone, I don’t even click on Medium links anymore.

                    1. 18

                      Writing more C. After writing so much Rust and C++ recently it’s been lovely stepping back into a simpler, better (subjectively! in my opinion!) language that really forces me to think about every line of code. C really gets out of your way and lets you focus on what you want your code to actually do instead of spending so much time getting bogged down in how to express code. “There’s just functions and pointers and that’s all you’re going to get so just get on with it.”

                      1. 4

                        This perspective is super interesting to me since I find needing to do the malloc() / free() dance to be the ultimate getting in my way :)

                        I’ll admit up front I’m not well suited to writing super low level code. I suspect I could develop that discipline if I worked at it, and maybe someday I’ll have time, but for now I’m just trying to attain mastery of my simple old Python world :)

                        1. 9

                          The more C I write, the more I realize how little I actually need to allocate, just by looking ahead and defining “reasonable” bounds. Adding error checking around these bounds has caught many errors, like string parsing issues, which would otherwise have been missed and become bugs later.

                          1. 2

                            Having trouble understanding what you mean here. I realize it’s a stretch but you wouldn’t happen to have an example you could link, would you?

                            1. 9

                              e.g. Rather than allocating names for shader uniforms, I define a maximum length for a uniform name, check it against the graphics API, and then don’t have to allocate:

                              typedef struct {
                                char name[RLL_MAX_UNIFORM_NAME_LENGTH];
                                GLuint location;
                                // snip...
                              } Uniform;
                              

                              I don’t allocate lists of uniforms either, I just use a fixed size array, and then verify the number of uniforms I’m using is less than the maximum supported, and that every shader uses less than that value as well.

                              typedef struct {
                                Uniform uniforms[RLL_MAX_UNIFORMS];
                                uint32_t numUniforms;
                                // snip...
                              } Program;
                              

                              I think I only allocate in my game engine so far on file reads, such as for images (for now, it’s probably going to change to reuse specially allocated blocks for that, or just parse 4KiB at a time with a static buffer or something).
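
                              To make that concrete, here’s a sketch of the bounds-checking style I mean (the function and the exact constant value are illustrative, not lifted from the engine): the copy fails loudly instead of truncating silently.

                              ```c
                              #include <stdio.h>
                              #include <string.h>

                              #define RLL_MAX_UNIFORM_NAME_LENGTH 64

                              /* Copy a uniform name into a fixed buffer, reporting an error
                               * instead of silently truncating. Returns 0 on success, -1 on error. */
                              static int set_uniform_name(char dest[RLL_MAX_UNIFORM_NAME_LENGTH],
                                                          const char *src)
                              {
                                  size_t len = strlen(src);
                                  if (len >= RLL_MAX_UNIFORM_NAME_LENGTH) {
                                      fprintf(stderr, "uniform name too long: %s\n", src);
                                      return -1;
                                  }
                                  memcpy(dest, src, len + 1); /* include the NUL terminator */
                                  return 0;
                              }

                              int main(void)
                              {
                                  char name[RLL_MAX_UNIFORM_NAME_LENGTH];
                                  if (set_uniform_name(name, "u_modelViewProjection") == 0)
                                      printf("stored: %s\n", name);
                                  return 0;
                              }
                              ```

                              No malloc/free anywhere: the error check at the boundary is what makes the fixed-size buffer safe.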

                              1. 2

                                Ah I get it! That’s very cool.

                                So you’re letting the compiler handle data structure allocation for you.

                                That makes a lot of sense.

                                1. 2

                                  It is often advantageous to break your structure up column-wise:

                                    struct {
                                      char name[RLL_MAX_UNIFORMS][RLL_MAX_UNIFORM_NAME_LENGTH];
                                      GLuint location[RLL_MAX_UNIFORMS];
                                    };

                                  This tends to be faster when you’re more likely to be working on a list of locations, instead of the list of pairs (names and locations).
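
                                  A minimal sketch of what I mean (sizes and names here are made up for illustration): in the column-wise layout, a hot loop that reads only locations streams through one dense array instead of striding over interleaved name/location pairs.

                                  ```c
                                  #include <stdio.h>

                                  #define RLL_MAX_UNIFORMS 8

                                  /* Struct-of-arrays layout: all locations are contiguous in memory. */
                                  struct UniformTable {
                                      char name[RLL_MAX_UNIFORMS][64];
                                      int  location[RLL_MAX_UNIFORMS];
                                      int  count;
                                  };

                                  int main(void)
                                  {
                                      struct UniformTable t = { .count = 3 };
                                      t.location[0] = 10; t.location[1] = 11; t.location[2] = 12;

                                      /* Hot loop: only the location column is touched. */
                                      int sum = 0;
                                      for (int i = 0; i < t.count; i++)
                                          sum += t.location[i];
                                      printf("sum=%d\n", sum);
                                      return 0;
                                  }
                                  ```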

                                  I wish C compilers had an option to group members together in memory so you could write things using nested structs.

                                  1. 2

                                    Very true. I’m going for “make it work” right now, and will then profile to determine how much faster “struct of arrays” is in this case. A major component of what I’m doing is documenting the rationale (with data where appropriate) to help people learn about the decisions that go into designing these types of things.

                                    The idea is a simple but functional game engine which kind of “grows up with you” to get people (especially kids) started programming, where you can get started learning by writing Python code which the driver will run, and then as you learn more you can get into the guts of extremely well-documented C code with rationales and write plugins and games.

                                    1. 1

                                      As much as I hate to admit it, an array-of-struct to struct-of-array transformation would probably be relatively easy to implement using Rust’s new procedural macros.

                                      1. 2

                                        Why would you hate to admit that?

                              2. 2

                                Don’t worry I absolutely love Python too (but not Python 2!) :)

                                1. 3

                                  I really prefer 3 myself. I know some people feel like they’ve ruined the language but I really enjoy all the new feature comforts like f”” and dicts being ordered by default :)

                                  1. 2

                                    They’re absolutely wonderful, aren’t they? Maybe a little too easy to use, if anything. I saw f"some static string with no formatting" the other day in a friend’s code and he was very embarrassed.

                              3. 1

                                I’m curious about the experience of the C language after using Rust. Do you think in terms of lifetimes and ownerships in your C code now?

                                1. 3

                                  I thought about lifetimes and ownership in C code before I had even heard of Rust. I’m pretty sure modern C++ (and to a lesser extent Rust) exposure has strengthened my thinking about object lifetimes, but the concept has been quite common in (well-written) C code forever.

                                  One of the nice things about C is that you can reason about lifetimes that your compiler can’t understand without the compiler throwing a tizz :P
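
                                  A classic example is a hand-rolled arena (purely illustrative, not from any particular codebase): every pointer it hands out is valid until the reset call, a lifetime the programmer understands but that no compiler could verify without extra annotations.

                                  ```c
                                  #include <stddef.h>
                                  #include <stdio.h>

                                  /* A tiny bump allocator. Pointers returned by arena_alloc() stay
                                   * valid until arena_reset() — a lifetime the programmer reasons
                                   * about, invisible to the compiler. */
                                  typedef struct {
                                      _Alignas(max_align_t) char buf[1024];
                                      size_t used;
                                  } Arena;

                                  static void *arena_alloc(Arena *a, size_t n)
                                  {
                                      if (a->used + n > sizeof a->buf)
                                          return NULL; /* out of space */
                                      void *p = a->buf + a->used;
                                      a->used += n;
                                      return p;
                                  }

                                  static void arena_reset(Arena *a) { a->used = 0; }

                                  int main(void)
                                  {
                                      Arena frame = { .used = 0 };
                                      int *xs = arena_alloc(&frame, 4 * sizeof(int));
                                      for (int i = 0; i < 4; i++) xs[i] = i * i;
                                      printf("%d %d %d %d\n", xs[0], xs[1], xs[2], xs[3]);
                                      arena_reset(&frame); /* everything allocated this frame dies here */
                                      return 0;
                                  }
                                  ```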

                                  1. 3

                                    However, the downside is that the compiler doesn’t point out lifetime problems in places the programmer can’t understand.

                                    1. 7

                                      Quite right! But I try to write nice simple code so that I can still understand it many months later.

                              1. 6

                                Publishers and other site owners feel forced to use AMP as they fear that they’ll lose Google visibility and traffic without it.

                                By the way, is this fear justified? I mean, did they conduct serious studies to come to the conclusion that they must use AMP?

                                I suspect there is a lot of superstition here, but I may be wrong.

                                1. 12

                                  Yes.

                                  Amp-enabled pages can be featured in the ‘carousel’ above regular pages.

                                  If you think Google isn’t doing everything they can to push every man and his dog to adopt AMP, or that they’re doing it for anything other than gaining more control over the web, I’m sorry, but you’re either wrong or naive about Google and its tactics.

                                  1. 1

                                    Frankly, I don’t know. That’s why I ask. I believe some press publishers do not need to rely heavily on SEO or Google-compliance because that’s not part of their core strategy. They have other (and better qualified) channels. In this situation, I wonder why it is required to implement AMP. Warning: I may be biased here, because I usually don’t rely on a search engine to find press content.

                                    1. 2

                                      The way I’ve heard it is: AMP is a bundle of requirements to get fast, mobile-friendly pages, and the key word is bundle.

                                      If you make a proposal to implement the bundle, you have to win through one meeting. After that, there’ll be 17 subtasks on jira to implement each of the 17 requirements, but no more management discussion. (Or if not 17, then however many requirements apply to you, which may be higher or lower than the list in the AMP spec. The precise number doesn’t matter.)

                                      If you want to get the same speed without the bundle, you have to propose each of those 17 tasks. Now you have 17 meetings and have to justify each of 17 tasks separately. Management will be sick and tired before you reach 10.

                                      A consequence of this view is that the two ways to get rid of AMP are:

                                      • make most people accept slowness and bloat
                                      • define another bundle, with a buzzword-worthy name, that provides the same single-meeting advantage

                                      I personally don’t think the latter is doable. The ship has sailed. The name for that other bundle now would be AMUNIH, short for Accelerated Mobile Uhm, NIH.

                                      1. 3

                                        The idea of putting more thought into building fast and efficient sites is absolutely something web developers should be doing. But that’s not only what AMP is about, and I’m not even sure if AMP is good at that.

                                        The way that Google imagines AMP is that anybody uses a specified subset and format of HTML alongside a fixed set of JS libraries. AMP HTML is pretty much standard HTML, but with lots of added WebComponents and some JS libraries that are supposed to make resource loading more efficient.

                                        Now, the thing is that if you want to use AMP right (and get the gentle preferred treatment in some cases), WebDevs are supposed to load all those AMP WebComponents from Google-run CDNs (cdn.ampproject.org). This alone puts Google and the people developing AMP into a position where they have total control, and where they are also the single point of failure on all websites. It’s not like a library that you embed for a specific feature that can fail gracefully - if the AMP JS libraries fail to load, your AMP website is literally a blank page. There is no room for graceful degradation.

                                        And even if we talk about AMPs actual effectiveness, I’m… honestly unsure if this works. I wanted to pick a good article, so I searched Reddit for posts that have “amp” in the URL and used the one with the largest number of upvotes. This AMPified CNBC article came up (note that clicking the link might forward you to the desktop version if you don’t have a Chrome-mobile-y user agent string). Looking at the devtools, it does a total of 24 requests for CSS and JS files, totaling at 1.91MB (although they were compressed, and only 571KB had to be transferred, but that’s still a lot). 14 of those requests ended up in the AMP CDN, but there are also requests to CNBC’s servers, as well as a couple of external services, including ad- and tracking scripts. Document loading finished in 874ms, and that’s on a super powerful laptop. That’s not a slim, fast website.

                                        It gets much worse if you look at the way Google handles AMP links inside their products. If you access an AMP search result, for example, you don’t even access the original servers. The article I shared was linked in this Reddit post, and if you look closely, they don’t even link to the CNBC article itself. The link goes to https://www.google.com/amp/s/www.cnbc.com/.... Google is proxying that request, and AMP-site owners have no control over what Google is doing. Right now, it looks like Google is only adding a top bar that brings you back to the search results and offers a “share” button, but… who knows. Stuff like that puts even more control into Google’s hand.

                                        I certainly have lots of issues with supporting a technology that requires everyone using it to depend on a single point of failure. And I also don’t like the fact that AMP decision-makers can make completely arbitrary decisions just because.

                                        That’s not really how the web is supposed to work.

                                        (Disclaimer: I’m a Mozilla-employee, and even though this is my private opinion, I’m obviously biased. And sorry for my rant. :))

                                        1. 1

                                          I’ve been involved in only one AMP project, and in that, all of the nontrivial work tickets were about killing go-slow shit (my opinion, naturally). One is a small sample size, maybe others are different.

                                          There were some trivial changes that didn’t seem particularly clueful, but who cares about trivial stuff. And there was the Google integration, which I thought was upgefucked and vile, but I couldn’t see any better way. Still can’t, can you? It should be an absolute defence against arguments that just one additional tracker or blah-blah integration won’t hurt, and that getting rid of it requires {renegotiating a contract, finding another way, a lot of work}. AMP leverages Google’s famous stonewall. No one can argue with Google, right? So if the site has to be accepted by code running on Google’s servers, that’s it, end of discussion.

                                          EDIT: thought about my contradiction here. Why do I think AMP’s upgefucked and vile, but still sort of approve? I think it’s because I personally want the web to work decentralised et cetera, while the key idea of AMP is about orgchart compatibility and needing little management attention. That’s a realism about corporate life that I cannot really like, even though I think I should.

                                    2. 1

                                      The first search I tried just now didn’t give me a carousel at all; the second gave me a carousel with links to three different sites, none of them with AMP.

                                      All three sites were pleasantly fast, BTW, so perhaps someone has confused “fast” with AMP?

                                  1. 32

                                    If you were to start a new commercial project today, what language would you choose?

                                      I’d choose Python. It’s not the best language, but I know it very well, I know its ecosystem very well, and I know I can “easily” find other (and affordable) engineers to help me if I need to. So, simply put, Python would be the best tool to make money, in my case.

                                    1. 5

                                      and affordable

                                        Hey, who are you calling cheap!?

                                        Joking aside, do you mean affordable in the sense that there are lots of people who know Python, so scarcity doesn’t drive salaries above average, or that Python developer salaries are below average?

                                      1. 2

                                          It’s an established language and platform, so there are no surprises with cost. Python is now commonplace, and while it will go into decline at some point, the decline will be relatively gradual. People won’t be surprised. There are plenty of companies stuck with platforms that declined more quickly but still need to be maintained long-term. Some JS frameworks, like Angular, are already there in some markets.

                                        In terms of average salaries, there is an expectation that established platforms will not be able to sustain salaries at the level they were when the platform was up and coming. The next cool thing is somewhat unknown, so developers can justify higher salaries because they are supposed to be working on the higher risk frontier.

                                        When it comes to an individual you need to be doing more than Python to get a higher than average salary. Or, be paid to work remotely for a company that can sustain a higher salary (e.g. get a closer to SF salary, but live in the Midwest). If salary isn’t your primary goal, you now have more opportunities to practice Python because of greater market penetration. If salary is your goal, then the best plan is to be closer to the bleeding edge, and have a broader/deeper skillset.

                                        1. 2

                                          People won’t be surprised.

                                          Does no one get in trouble with clients or management due to the later costs of Python? Having to put effort into keeping the software working when Python makes backward-incompatible changes that other languages are less likely to make. Spending time after-the-fact on mypy when the software gets large. Dealing with some aspect of the program being too slow.

                                          1. 1

                                              You are taking that out of context, but I understand your perspective. My point is that there are no surprises from the ecosystem going away or declining rapidly, which would then have a knock-on impact on the market.

                                            Python applications can have many surprises in terms of costs later on as you point out. That said, I think a large part of that is to do with the inability of people to set proper expectations.

                                              Anyone with experience should realise that the convenience of Python comes with costs. They should highlight these trade-offs in some manner with the other stakeholders. Whether it’s stretching Python, or someone creating an over-engineered monstrosity with some other framework at a company with no expertise in it, it really comes back to professionalism. There are real costs when we avoid discussing the issues, or don’t document them, but people seem to press on regardless.

                                            At other times we have to just accept that people with relatively little experience will make important decisions. We have to accept them and develop a plan for resolving the issues. If these decisions are happening continuously around us, then moving on, or developing better influencing skills seem to be the only options.

                                    1. 4

                                      I have nothing against the 10 values of the post, which seem sound to me. But those values apply to other languages as well (except maybe the concurrency part). When I write Python, PHP or JS, I strive to handle errors explicitly, avoid nested blocks, keep things simple, etc.

                                      But I could say the same thing about the Zen of Python.

                                      1. 3

                                        I’m not skillful enough at CSS to just look at the code and tell, so, is it responsive?

                                        1. 4

                                          There is an accompanying post that uses it. It looked good and read well in all the testing I did.

                                          1. 2

                                            thanks!

                                        1. 4

                                          In Japan, bookstores are full of very cute manga guides for various programming languages, databases, operating systems, … With real stories and everything in it.

                                          And, those books are not intended for children particularly, they are intended for students or professional IT engineers!

                                          1. 1

                                            It’s odd that we don’t see that in English-language programming books. (The closest I can think of is the pseudo-narrative of Seven Languages in Seven Weeks, where the author compares each programming language to a popular film and uses references to the film to explain language concepts.) Maybe it’s because of the “two cultures” (still unfortunately common in CS undergrad in my experience, but a common enough comic trope in anime that I can’t imagine people in Japan really taking it seriously).

                                          1. 1

                                            I remember running a poor man’s Python web server on an old Android phone with the stock OS (non-rooted). I just installed the QPython3 app from Google Play, then ran my Python+Bottle project on it. Sometimes the wifi turned off for battery efficiency, taking the server down. I was able to disable this feature so the device’s wifi was always on, and keep the QPython3 process running as well.

                                            The last step was doing the NAT forwarding with the router and I was all set!

                                            It was pretty fun.

                                            1. 3

                                              I worked in the TV industry a few years ago. Our customers had to store many TBs of their 1080p or 4K video (sometimes the super-heavy raw footage as recorded by the camera). Tape (LTO, to be precise) was the only viable solution; it worked great and is very cheap compared to disks. For example, an HP 30TB LTO-8 cartridge is 180 EUR, which is 6 EUR/TB.

                                              However, this is for archiving only, not storing.

                                              1. 1

                                                It does not reflect the realities of modern hardware, where computation is almost free, memory size is almost unlimited (although programmers’ ingenuity in creating bloated software apparently knows no bounds), and the principal limit to performance is the cost of communication.

                                                Well, I copy-pasted this with the intent to argue against it, but on second thought I think he’s right. For my graduation project I actually found it was faster to recompute a sparse matrix from scratch than to load it from memory (granted, the computations were done on an FPGA, so that helps).

                                                When people optimize programs a lot of their mental effort goes into:

                                                • What is the fastest way to do this? Should I store intermediate results, use the GPU, use multiple threads?
                                                • Woah, programming/concurrency/GPU’s are hard! Is this actually correct?

                                                If you could specify a computation at a high level to a compiler with an approximate model of the costs of GPU offloading, spinning up a thread, etc., it might be possible to write optimized programs much faster.

                                                1. 2

                                                  If you could specify a computation at a high level to a compiler with an approximate model of the costs of GPU offloading, spinning up a thread, etc., it might be possible to write optimized programs much faster.

                                                  …and the performance would vary vastly based on version of compiler, version of runtime, target platform, etc, and it would be great until you have to trick the optimizer to doing what you actually wanted or it suddenly doesn’t work on someone else’s system for Mystery Reasons. The idea’s not without merit, but there’s a lot of complexity there.

                                                  1. 1

                                                    I’m not denying that. I’m just observing that a lot of effort goes into implementation details.

                                                    A lot of the complexity stems from dealing with a non-fixed environment and there is no easy way to avoid it.

                                                  2. 1

                                                    If you could specify a computation at a high level to a compiler with an approximate model of the costs of GPU offloading, spinning up a thread, etc., it might be possible to write optimized programs much faster.

                                                    If such language exists, what would it look like? Maybe something like Prolog or SQL?

                                                    1. 2

                                                      I’d imagine some mix between OpenCL and Haskell would work. You’d need annotations to indicate the expected values of variables. If you loop through something 10 times, it’s probably not worth putting it on a GPU. If you do it 10 billion times and it’s highly parallel, it might be worth it.

                                                  1. 12

                                                    I like this idea of “low-tech”. In one of my previous companies, I liked to keep track of what I was doing during my working days. Software exists for that, and my company had one, but I never used it.

                                                    Instead, I just kept all my tasks in a hand-written CSV file which was always open in Vim. It looked like this:

                                                    date, customer, what I did, time taken (min)
                                                    2016-08-03, Customer #1, fix API, 120
                                                    2016-08-03, , improve UI of dashboard, 30
                                                    

                                                    Searching in my log when I had to do reports was a breeze: just a plain text search on a date, a customer, etc. If I needed to, I could even use a CSV querying tool (but I never needed to).
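                                                    For what it’s worth, a stdlib-only sketch of such a querying tool (the sample log is the one above; blank customer fields inherit the previous row’s customer, as in the example) could be as small as:

                                                    ```python
                                                    # Rough sketch: total minutes per customer from the hand-written log.
                                                    import csv
                                                    import io
                                                    from collections import defaultdict

                                                    LOG = """\
                                                    date, customer, what I did, time taken (min)
                                                    2016-08-03, Customer #1, fix API, 120
                                                    2016-08-03, , improve UI of dashboard, 30
                                                    """

                                                    def minutes_per_customer(text):
                                                        reader = csv.reader(io.StringIO(text), skipinitialspace=True)
                                                        next(reader)              # skip the header row
                                                        totals = defaultdict(int)
                                                        last_customer = None
                                                        for date, customer, task, minutes in reader:
                                                            customer = customer or last_customer  # blank = same as above
                                                            last_customer = customer
                                                            totals[customer] += int(minutes)
                                                        return dict(totals)
                                                    ```

                                                    `skipinitialspace=True` handles the spaces after commas, so the log stays pleasant to type by hand.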

                                                    1. 5

                                                      Can anyone explain why Clear Linux is consistently winning on benchmarks over Fedora/Ubuntu?

                                                      https://clearlinux.org/ says “Highly tuned for Intel platforms where all optimization is turned on by default.” - is that what this boils down to (-O3 -mtune= all the way, Gentoo style), or are the developers doing other clever things in the kernel/desktop env/etc?

                                                      1. 8

                                                        Have you heard about any benchmarks other than those done by Phoronix? They seem to be the only ones talking about the distro…

                                                        1. 3

                                                          You could try it yourself, it’s just an ISO you can download. My experience matches up pretty closely with the phoronix benchmarks, but package support is severely lacking so I don’t use Clear anymore.

                                                        2. 4

                                                          It’s a combination of compiler flags like the ones you mentioned and setting the CPU governor to “performance”.

                                                          It also sprinkles in several other minor optimizations, but those two get you 95% of the way there and can be done on any source-based distro.

                                                          1. 2

                                                            Aren’t they testing it using an AMD CPU?

                                                            1. 2

                                                              Yes, but perhaps it’s only important that they’re compiling for modern CPUs?

                                                              Looks like they’re probably not compiling with ICC, or the performance would be worse on AMD than Ubuntu using GCC or clang.

                                                              1. 1

                                                                AMD also makes CPUs for Intel platforms. In fact, that’s probably what they are most known for.

                                                                1. 1

                                                                  Are you talking about x64 (AKA x86-64)?

                                                                  1. 1

                                                                    Yes, which in an ironic twist I call amd64, to separate it from Intel’s IA-64.

                                                                    1. 1

                                                                      So that’s not really an “Intel platform”… unless you were using the term to refer to the x86 line.

                                                                      1. 1

                                                                        Which is clearly how it was used in the context we’re discussing.

                                                                        1. 1

                                                                          Clear to you, yes.

                                                              2. 2

                                                                Copying part of a comment [1] from the article:

                                                                well it is worthwhile to have a look at their github repo - there is more ongoing eg. plenty of patches adding avx support to certain packages.

                                                                [1] - https://www.phoronix.com/forums/forum/phoronix/latest-phoronix-articles/1157948-even-with-a-199-laptop-clear-linux-can-offer-superior-performance-to-fedora-or-ubuntu

                                                                1. 1

                                                                  I don’t know about their Intel optimizations, but if it’s only that, it may be interesting to see how Clear Linux compares with a mainstream distribution on which the kernel has been compiled with all flags set.

                                                                1. 18

                                                                  I don’t think we should change the protocols and force every library in every language on every platform to update mountains of code to support a new protocol just so my browser can download Javascript trackers and crappy Javascript frameworks faster.

                                                                  1. 17

                                                                    I’m excited for HTTP/3 because it will allow me to get lower-latency video streaming support for my private stream server.

                                                                    1. 15

                                                                      Well, just like with HTTP/1 and /2, the old protocols are very likely to be supported for a very long while. So you’re not forced to update.

                                                                      1. 12

                                                                        It’s still change just for the sake of allowing people to build even more bloated websites.

                                                                        Making HTTP more efficient isn’t going to mean websites load faster, it means people are going to stuff even more tracking and malware and bloat into the same space. It’s very, very much like building bigger wider roads with more lanes: it doesn’t alleviate congestion, it just encourages more traffic.

                                                                        1. 27

                                                                          I don’t think that’s entirely true, HTTP/3 does address some problems that we have with TCP and HTTP in modern network connections. I encounter those problems every day at work, it’s just background noise but it annoys users and sysadmins.

                                                                          1. 14

                                                                            As I understand that video, HTTP/3 is not a new protocol, but rather “HTTP/2 over QUIC”, where QUIC is a replacement for TCP. QUIC can be useful for a lot of other applications, too.

                                                                            People do a lot of stuff to work around limitations, like “bundling” files, image sprites, serving assets from different domains, etc, and browsers work around with parallel requests etc. So it saves work, too.

                                                                            Whether you like it or not, there are many WebApps like Slack, GitHub, Email clients, etc. etc. that will benefit from this. Chucking all of that in the “tracking and malware”-bin is horribly simplistic at best.

                                                                            Even a simple site like Lobsters or a basic news site will benefit; most websites contain at least a few resources (CSS, some JS, maybe some images) and just setting up one connection instead of a whole bunch seems like a better solution.

                                                                            1. 8

                                                                              Don’t you think that people are going to stuff even more bloat anyway, even if everybody downgrades to HTTP/1?

                                                                              1. 6

                                                                                I don’t know that people will drive less if you make the roads smaller. But they won’t drive as much if you don’t make the roads bigger in the first place. They’ll drive less if you provide bike lanes, though.

                                                                                In an ideal world AMP would be like bike lanes: special lanes for very efficient websites that don’t drag a whole lot of harmful crap around with them. Instead they’re more like special proprietary lanes on special proprietary roads for special proprietary electric scooters all vertically integrated by one company.

                                                                          2. 9

                                                                            The old protocols over TCP provide terrible experiences on poor networks. Almost unusable for anything dynamic/interactive.

                                                                            1. 1

                                                                              TCP is specifically designed and optimised for poor networks. The worst networks today are orders of magnitude better than the networks that were around when TCP was designed.

                                                                              1. 13

                                                                                There are certainly types of poor networks that are ubiquitous today that TCP was not designed for.

                                                                                For instance, Wifi networks drop packets due to environmental factors not linked to congestion. TCP data rate control is built on the assumption that packets are dropped when the network is congested. As a result, considerable available bandwidth goes unused. This can qualify as a terrible experience, especially from a latency point of view.

                                                                                If your IP address changes often, say in a mobile network, you lose your connection all the time. Seeing that connection == session for many applications, this is terrible.

                                                                                Also many applications build their own multiplexing on top of TCP, which, constrained by head of line blocking, leads to buffer bloat and a slow, terrible experience.
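                                                                (A toy simulation, invented purely for illustration — not QUIC itself: two logical streams multiplexed over one TCP-like ordered byte stream. Because TCP must deliver bytes in order, losing one segment stalls every stream behind it, which is exactly the head-of-line blocking QUIC avoids by retransmitting per stream.)

                                                                ```python
                                                                # Toy model of head-of-line blocking on a single ordered stream.
                                                                def deliverable(segments, lost):
                                                                    """Frames the app sees: in-order delivery halts at the first
                                                                    lost segment, regardless of which stream later frames use."""
                                                                    delivered = []
                                                                    for seq, (stream, frame) in enumerate(segments):
                                                                        if seq in lost:
                                                                            break  # everything after this stalls
                                                                        delivered.append((stream, frame))
                                                                    return delivered

                                                                segments = [("A", "a1"), ("B", "b1"), ("A", "a2"), ("B", "b2")]
                                                                # Losing segment 1 (a stream-B frame) also stalls stream A's a2:
                                                                stalled = deliverable(segments, {1})
                                                                ```

                                                                With independent per-stream delivery, stream A’s `a2` would not have to wait on stream B’s retransmission.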

                                                                                1. 5

                                                                                  Related to this:

                                                                                  https://eng.uber.com/employing-quic-protocol/

                                                                                  Mobile networks are a prime target for optimizing latency and minimizing round trips.

                                                                                2. 1

                                                                                  It was designed when latency didn’t matter. Now it does matter. Three-way handshakes and ACKs are killing us.

                                                                                  1. 1

                                                                                    It seems to me that every reasonable website I use is fine with those tiny inefficiencies because they’re generally efficient anyway, while bloated malware-filled tracking javascript-bloated nightmare websites are going to be bad either way.

                                                                                    Who is this actually helping?

                                                                                    1. -2

                                                                                      It’s helping people with actual experience in this area. Please stop posting these hyperboles and refrain from further comments on this topic. You’re wasting everyone’s time with your hot takes.

                                                                                      1. 0

                                                                                        Leave the moderation to the moderators. My opinions are pretty widely held and agreed with on this issue. Degrading them as ‘hot takes’ is unkind low effort trolling.

                                                                                        If you have a genuinely constructive comment to make I suggest you make it. If you don’t I suggest you stay quiet.

                                                                                        1. 1

                                                                                          I do not refute that there are issues with tracking and malware, but if you think we are going to regress to an era without a rich web you are out of your gourd. There is no future where the web uses fewer requests. The number of images and supporting files like JavaScript will only increase. JavaScript may be replaced in the future with something equally capable, but that still will not change the outcome in any appreciable way.

                                                                              2. 6

                                                                                Without even talking about HTTP/3, it seems that any application that uses a TCP or UDP connection could benefit from using QUIC: web applications yes, but also video games, streaming, P2P, etc…

                                                                                Daniel Stenberg also mentioned that QUIC would improve client with a bad internet connection because a packet loss on a stream does not affect the others, making the overall connection more resilient.

                                                                                I do agree it could and will be used to serve even more bloated websites, but it is not the only purpose of these RFC.

                                                                              1. 1

                                                                                Last year, I worked on an online medical reservation platform built from scratch with Flask, for a client. Unfortunately, the project did not succeed commercially. The technical side was moderately complex and I think I could have saved some time by using Django, but Flask did very well and it was fun.

                                                                                1. 25

                                                                                  To be fair, I find it hilarious that every browser includes the “Mozilla” string in its user agent, dating from the late 90’s. As much as it pains me to say it, Google may be right here: the header is at best vestigial.

                                                                                  1. 2

                                                                                    I think it is weird that they still do; does anyone bother checking that part when sniffing anymore? I’d be surprised if anyone has for the last fifteen years.

                                                                                    1. 2

                                                                                      I know there are webmasters that use its presence to distinguish between bots (which typically don’t have it) and browsers (which usually do). It’s a heuristic, but it’s actually really good.

                                                                                      1. 4

                                                                                        I had to change my feed reader’s user-agent to lie because of this. It’s nonsensical, of course — RSS and Atom feeds are made for bots!

                                                                                        1. 2

                                                                                          Looks like a configuration error from the Web server or app. Maybe they just tell Nginx or their app to deny anything which is not a browser, forgetting to handle special cases like RSS.

                                                                                          1. 1

                                                                                            Looking at the code it was actually a request to SquareSpace, and the poison seemed to be mention of “Twisted”. Best guess they are trying to ban Scrapy which uses Twisted internally.

                                                                                            I’ve also seen CDNs reject requests when the User-Agent string contains “python” or lacks “Mozilla”. I guess lying is just part of HTTP these days.
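                                                                                            The “lie” in practice is just one header (the feed URL and UA string below are made up):

                                                                                            ```python
                                                                                            # Send a Mozilla-prefixed User-Agent so UA-sniffing
                                                                                            # servers don't reject the request.
                                                                                            import urllib.request

                                                                                            FEED_URL = "https://example.com/feed.xml"  # hypothetical
                                                                                            UA = "Mozilla/5.0 (compatible; MyFeedReader/1.0)"

                                                                                            req = urllib.request.Request(FEED_URL, headers={"User-Agent": UA})
                                                                                            # urllib's default UA is "Python-urllib/3.x", which some
                                                                                            # servers block outright.
                                                                                            # body = urllib.request.urlopen(req).read()  # actually fetch
                                                                                            ```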

                                                                                        2. 6

                                                                                          between polite bots and (browsers and evil bots) <3

                                                                                          1. 3

                                                                                            Perfect is the enemy of the good. Anyone might come into my house and rob me, but if someone knocks on my door and tells me they’re going to rob me, I’m still not going to let them in just because they asked permission.

                                                                                            1. 1

                                                                                              If they say it in a certain way, and act in a certain way, you’ll be thankful to them for the opportunity to be robbed.

                                                                                              Well, not you in particular, but people in general.

                                                                                          2. 1

                                                                                            indeed, but you don’t check for the Mozilla thing there!

                                                                                          3. 1

                                                                                            GitHub uses User Agent sniffing. I set my User Agent to “Firefox” (general.useragent.override) and some features on the site no longer work and GitHub complains that it doesn’t support old browsers.

                                                                                        1. 1

                                                                                          I use the “+” trick and I use imapfilter to just delete all mails received for the defined “+”-alias when I’m done.

                                                                                          I would like to send a hard bounce using SMTP instead, but imapfilter deals only with IMAP…
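                                                                                          (imapfilter configs are written in Lua; as a rough Python stand-in, the same delete-by-alias filter could be done with the stdlib imaplib. Host, credentials, and the “+”-alias below are all invented placeholders.)

                                                                                          ```python
                                                                                          # Sketch: delete every message addressed to a given "+"-alias.
                                                                                          import imaplib

                                                                                          def alias_search_criteria(alias):
                                                                                              """Build the IMAP SEARCH criteria matching the alias."""
                                                                                              return ("TO", f'"{alias}"')

                                                                                          def delete_plus_alias(host, user, password, alias):
                                                                                              with imaplib.IMAP4_SSL(host) as conn:
                                                                                                  conn.login(user, password)
                                                                                                  conn.select("INBOX")
                                                                                                  _typ, data = conn.search(None, *alias_search_criteria(alias))
                                                                                                  for num in data[0].split():
                                                                                                      conn.store(num, "+FLAGS", "\\Deleted")
                                                                                                  conn.expunge()

                                                                                          # delete_plus_alias("imap.example.com", "me", "secret",
                                                                                          #                   "me+shop@example.com")
                                                                                          ```

                                                                                          As the comment above notes, this still only deletes over IMAP; a hard SMTP bounce would need the rejection to happen at the MTA, before the message is accepted.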