1. 1

    I’m really confused as to why partitioning the data didn’t work.

    There is one Read State per User per Channel.

    ok, so we’ve outlined the partitioning boundaries …

    There are millions of Users in each cache.

    … and the fact that we’re apparently not partitioning the data.

    we figured a smaller LRU cache would be faster because the garbage collector would have less to scan

    sounds like we’re on the right path …

    if the cache is smaller it’s less likely for a user’s Read State to be in the cache.

    sorry, what? Why? It’s not at all clear why they can’t partition the data by user in such a way that leaves the hit rate unaffected.

    1. 1

      I don’t work at Discord, but I could imagine a scenario in which you’re faced with a difficult read state conundrum:

      • You want updating read state to be a single write, so you partition it by channel (otherwise you need to fan out writes to multiple consumers, and there could be a lot of consumers)
      • But if you partition it by channel and shrink the cache, you miss cache on read more often (because large Discord servers will have very active channels, and each message needs its own read state).

      The right solution is probably quite dependent on Discord’s existing architecture, but preferring cache misses on reads to multiplying writes by N (where N could be the size of an entire Discord server, assuming there’s a general channel that everyone joins) could be the right call. In turn, that could mean that a language without garbage collection, and with a large library of high-quality generic containers, could be the right choice as compared to Golang.
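
      A minimal sketch of the write-amplification half of that trade-off, as a toy Go model (the function and the numbers are hypothetical, not Discord’s actual design):

      package main

      import "fmt"

      // writesForMessage models how many partition writes a single new
      // message costs under each partitioning scheme.
      func writesForMessage(memberCount int, partitionByChannel bool) int {
        if partitionByChannel {
          return 1 // the message lands in the channel's partition once
        }
        return memberCount // fan-out: one write per member's partition
      }

      func main() {
        // a big server's general channel with 100k members:
        fmt.Println(writesForMessage(100000, true))  // 1
        fmt.Println(writesForMessage(100000, false)) // 100000
      }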

    1. 39

      As a (former) application author I find it very hard to sympathize with distro packagers if their opinions, and the patches they make out of them, continue to be responsible for a good chunk of bug reports that cannot be reproduced outside of their distro. Why should I cater to the whims of multiple Linux distros? What do I get out of putting more work into the product I already provide for free? Imagine if the Apple App Store, on top of placing random restrictions on application submissions, added random patches to your application and was not sufficiently careful about which of them break the end user experience. That is what upstream maintainers have to deal with, and they don’t even get paid for it.

      See also Linus on static linking and distro packaging.

      Keep in mind that 1) this is literally only a problem on Linux distros or other third-parties repackaging applications and imposing their opinions on everybody 2) the actual context of this blogpost is that the author is mad at Python packages using Rust dylibs, it seems his sense of entitlement has not significantly improved since then.

      1. 15

        Ideally you don’t need to do anything except not make distro maintainers’ lives harder.

        If you absolutely want to provide your own binaries directly to end users, as of 2020 there are things like Docker images and AppImage, so you can bundle what you need at this level.

        So while we don’t have Linus’s dive tool in Void Linux yet, it looks easy to package, and once that is done the maintainers will take care to provide the newest version to the users, taking that work off your hands.

        We also generally only try to patch build issues and security fixes that did not make it into a release yet. So often, users of our binary packages get fixed versions quicker than upstream.

        1. 6

          Ideally you don’t need to do anything except not make distro maintainers’ lives harder.

          I think the point of cognitive dissonance here is that what distro maintainers want often makes application developer’s lives harder. Dynamic linking doesn’t work well for many application developers, because libraries break even when they don’t change “major” versions: that’s just a fact of life. No software development process is perfect, and the application developer can’t reasonably test against every different patch that every different distribution applies to every different library. Being able to just drop a binary onto a machine and be confident it’ll work the same on that machine as it does on your own is a selling point of languages like Go and Rust.

          And if you want to change the libraries used for these languages it’s not exactly hard. Just change the go.mod or Cargo.toml to point to the library you want it to use, rather than the library it’s currently using, and rebuild.
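
          With Go, for example, that swap can be a one-line replace directive in go.mod, followed by a rebuild (the module paths and version here are made up for illustration):

          // go.mod
          replace example.com/upstream/libfoo => example.com/distro/libfoo v1.4.2

          // or build against a local, patched checkout instead:
          replace example.com/upstream/libfoo => ../libfoo-patched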

          If you absolutely want to provide your own binaries directly to end users, as of 2020 there are things like Docker images and AppImage, so you can bundle what you need at this level.

          Docker and co are worse for security than static linking. Packaging as a Docker container incurs all of the downsides of static linking, and also all of the downsides of any outdated packages in the image. Static linking only distributes the libraries you need: containers distribute effectively an entire OS minus the kernel (and also the libraries you need).

          Docker as a solution only makes sense if application developers want both dynamic linking and static linking; dynamic if you install it on the host, and effectively-static if you run it as a container. But the core issue is that many application developers do not want dynamic linking! If you do not want dynamic linking, static linking is better than using containers.

          1. 3

            I think the article confuses two separable things:

            • Bundling in the shipped product.
            • Provenance of inputs.

            The former is a problem in terms of computational resource, but not much else. If a program statically links its dependencies (or uses C++ header-only libraries, or whatever), then you need to redo at least some of the build every time there’s an update (and generally you redo the whole build because incremental builds after dependency updates are flaky). The FreeBSD project can rebuild the entire package collection (30,000+ packages) in under two days on a single machine, so in the era of cloud computing that’s a complete non-issue unless you’re running Gentoo on an old machine.

            The second is a much bigger problem. If there’s a vulnerability in libFoo, a distro bumps the version of libFoo. Anything that has libFoo as a build-time dependency is rebuilt. Security update fixed, we just burned some cycles doing the rebuild (though, in the case of static linking, possibly a lot fewer than we’d burn by doing dynamic linking on every machine that ran the program). If a program has vendored its dependency on libFoo, there’s no metadata conveniently available for the distribution that tells anyone that it needs to be rebuilt against a newer libFoo. It’s up to the program author to issue a security advisory, bump the library version, and so on. The distro will keep shipping the same library for ages without any knowledge.

            Things like Docker make this worse because they make it trivial to write custom things in the build that grab source from random places and don’t record the provenance in an auditable structure. If I have an OCI image, I have absolutely no idea what versions of any libraries I’m running. They may be patched by the person who built the container to avoid a bug that caused problems for a specific program and that patch may have introduced another vulnerability. They may be an old version from some repo. They may be the latest trunk version when the container was released.

          2. 5

            Security-wise, Docker images are about as bad as static linking for the end user.

            1. 3

              Of course, but it’s easier on the entire supply chain in the 99.9% of cases there is no security problem.

              1. 8

                99.9%? Do you mean 40%?

                https://www.techrepublic.com/article/docker-containers-are-filled-with-vulnerabilities-heres-how-the-top-1000-fared/

                “Over 60 percent of the top Docker files held a vulnerability that had a Kenna Risk Score above 330; and over 20 percent of the files contained at least one vulnerability that would be considered high risk under Kenna’s scoring model,” […] the average (mean) number of CVEs per container is 176, with the median at 37.

              2. 3

                Yes, and static linking has a known solution for security updates: the distro rebuilds from updated source.

                1. 3

                  Yes, but this needs to be done so often and so broadly that Debian, at least, seems to do regular rebuilds of nearly everything in unstable every few weeks, and it declares that software written in Go has no proper security support in Debian 10 Buster: security updates for it are only provided via the minor stable updates, approximately every two months. Still a PITA, and hence q.e.d.

              3. 5

                If you absolutely want to provide your own binaries directly to end users

                You say this like it’s a method of last resort, but this is overwhelmingly how software authors prefer to package and distribute their applications. There’s lots of good reasons for that, and it’s not going to change.

                1. 1

                  Ideally you don’t need to do anything except not make distro maintainers’ lives harder.

                  I don’t even need to do that. Again, I am providing free work here.

                  If you absolutely want to provide your own binaries directly to end users, as of 2020 there are things like Docker images and AppImage, so you can bundle what you need at this level.

                  I am fairly sure if people started to do that at scale, distro maintainers would complain all the same as they do about static linking.

                  So while we don’t have Linus’s dive tool in Void Linux yet, it looks easy to package, and once that is done the maintainers will take care to provide the newest version to the users, taking that work off your hands.

                  You’re wholly missing the point with this sentence. The fact that we’re in a position where we need to build applications per-distro is unsustainable. There is very little work in building a static binary on any other platform.

                  We also generally only try to patch build issues and security fixes that did not make it into a release yet. So often, users of our binary packages get fixed versions quicker than upstream.

                  Yes, and then users report regressions in a version that is not supposed to have the patch that introduced them. This is literally what I am complaining about.

                2. 6

                  Keep in mind that 1) this is literally only a problem on Linux distros or other third-parties repackaging applications and imposing their opinions on everybody 2) the actual context of this blogpost is that the author is mad at Python packages using Rust dylibs, it seems his sense of entitlement has not significantly improved since then.

                  How is this relevant to static linking and the discussion about its security issues?

                  1. 3

                    Because it’s the reason this discussion continues to exist.

                    1. 3

                      So, in summary: people are still angry about cryptography and Rust, so they keep posting roundabout takes on it and getting onto news aggregator sites to hawk their positions rather than working on a solution? I’m really not sure how that’s productive for anyone.

                      1. 1

                        I publish static binaries for my applications. Now I have a third party who wants to redistribute my free work but wants me to change the way I write software so their use of my free work gets easier (for a debatable value of easier).

                        Frankly I don’t see a problem I have to solve. My way works just fine on Windows.

                        1. 1

                          At this point it’s up to all the parties to coordinate. It’s obvious that each of the different parties has different perspectives, desires, and annoyances. If you put yourself in the shoes of any of the various parties (application developers, distro maintainers, application users, distro users), and there’s plenty of each in this thread and in the HN version of this link, then I think you can see the many angles of frustration. I don’t think getting angry on message boards is going to settle this debate for anyone, unless you’re just looking to vent, which I’d rather not see on lobste.rs; chatrooms are better suited to that.

                  2. 5

                    This is only a problem on Linux. The fact that anybody can create a Linux distribution means that there are a lot of systems that are largely similar and yet wholly incompatible with one another. Bazaar-style development has encouraged this pattern and, as such, we have a fragmentation of Linux distributions with just the tiniest little differences, which make packaging an app in a universal fashion near impossible. Like it or not, cathedral-style systems do not suffer from this problem. You can count on the libc and loader to exist in a well known and understood location on FreeBSD, Windows, and MacOS. Sure, there are going to be differences between major versions, but not so much as the difference between glibc and musl.

                    Having your own packaging system then frees you, the application developer, from having to wait on the over 9,000 different Linux distributions to update their packages so that you can use a new shiny version of a dependency in your app. Furthermore, there are plenty of commercial, proprietary, software packages that don’t need to move at the same cadence as their deployed Linux distribution. The app might update their dependencies more frequently while in active development or less frequently if the business can’t justify the cost of upgrading the source code.

                    To lay it out: this situation is not intrinsic to Linux as such; rather, it exists because of Linux’s fragmentation, and secondarily as a result of the friction associated with walled-garden ecosystems like Apple’s.

                  1. 4

                    Even though it seems great to put the brakes on Bitcoin (and especially ETH NFT trading) speeding up climate catastrophe, I wonder whether this is more greenwashing (or whatever the pleasing-gamers equivalent of it is) than anything else, as GPUs have been outrun so significantly, for long enough, by ASICs and FPGAs that big mining apps like cgminer have even ditched GPU support:

                    Q: What happened to CPU and GPU mining?

                    A: Their efficiency makes them irrelevant in the bitcoin mining world today and the author has no interest in supporting alternative coins that are better mined by these devices.

                    1. 6

                      GPU mining rigs are still used for ETH, since ETH is designed to be ASIC-resistant (which is presumably why the blog post only mentions Ethereum, not Bitcoin).

                      I don’t think this is greenwashing, FWIW: they’re explicitly creating a new product line targeting miners. This is an attempt to put the brakes on miners buying up lots of GPU stock and making it impossible for gamers to buy.

                      1. 1

                        Thanks for the update; for some reason I thought that serious ETH rigs had moved over to specialized chips (ASIC-resistant does not equal ASIC-proof, after all).

                      2. 3

                        There’s an endless stream of new coins appearing that can be mined on CPUs as the “big coins” move up the FPGA/ASIC ladder.

                        Also it’s not as if all people who want to mine coins are aware that they can’t economically use a GPU… and there are enough of them to drive up GPU prices for gamers.

                        1. 2

                          Fair points 👍

                      1. 3

                        We should also recognize that Apple is hostile to developers. They don’t care about us anymore.

                        If they cared about us, they would get over the GPLv3 and start upgrading Bash. Instead, we have to maintain back-compat with a 15-year-old version.

                        If they cared about us, they would provide us with a proper package manager. Instead, they break the OS slightly on each release and leave it to Homebrew and others to scramble. And there is nobody to help from Apple. Is it that hard to assign even a single developer that can communicate?

                        They don’t care that running macOS is prohibitively expensive for CI. Isn’t it more important to have software be well tested in a VM than to rake in the last little dollar from rack-mounted Mac minis?

                        Every release they break some kernel API and lock-down the system even further and take a bit of freedom away.

                        It’s sad really. Apple used to have a vibrant community of passionate developers doing cool things with their OS. And this has been stripped bit by bit.

                        1. 4

                          We should also recognize that Apple is hostile to developers. They don’t care about us anymore.

                          I hear the same thing from creatives, except it’s how much Apple prefers developers instead. “Grass is greener on the other side” happens everywhere, and it’s pretty amusing when you know it’s not the case.

                          1. 2

                            Can’t both be true? The Mac made large strides in the last two years (first going back to scissor switches and then the M1), but one can hardly be blamed for believing that their focus was primarily on the iPhone, iPad, and Apple Watch from 2012 to 2018.

                          2. 3

                            They don’t care about us anymore.

                            The correct reply here is “they never did, in the sense you’re intending it to be read”. Being an acceptable-to-many Unix-y programming environment was a contingent side effect of the history that led to OS X, not a necessary or deliberately designed-in (as far as I’m aware) feature. Similarly, the fact that their laptops happened to be decent for a typical software developer’s daily driver was also not something that (again, as far as I’m aware) they ever specifically set out to achieve, just a side effect of decisions that they made for other reasons and in pursuit of other ways to differentiate themselves in the market.

                            I saw an analogy once to developers being a kind of creepy guy who convinces himself a girl is in love with him because she was nice to him once and can never let that fantasy go, and as harsh as it is I think it’s fairly accurate.

                            1. 2

                              I agree. I’m not a native MacOS developer (and frankly, I don’t really get the hype around “Mac design language”) but I read around the edges, and I get the impression that Apple takes decent care of the developers who develop paid and shareware software using their tools. Maybe not as well as Microsoft, but then who does? There’s rumblings about how the documentation is lacking nowadays and generally feeling left out compared to iOS, but frankly, any decent Mac developer should have seen the writing on the wall years ago and pivoted to iOS apps.

                              My point is, I don’t think Apple has been especially friendly to FOSS developers (again, like Microsoft), and I don’t get why FOSS developers have the expectation that they have been in the past.

                            2. 2

                              Instead, they break the OS slightly on each release and leave it to Homebrew and others to scramble. And there is nobody to help from Apple. Is it that hard to assign even a single developer that can communicate?

                              To put it differently, it is surprising how many FLOSS developers are willing to work for Apple for free. When they originally announced at WWDC that the next macOS would run on Apple Silicon, they talked about how all the major open source projects would support macOS on Apple Silicon. I initially thought that this could imply that they would make significant and proactive contributions to the FLOSS ecosystem. After 6 months it’s clear that they just knew the FLOSS community would scramble to get their projects running on M1.

                              It is kind of sad that people invest so much of their time in a platform of a company that rarely if ever gives back (when will FaceTime be the open industry standard that they promised?).

                              Apple used to have a vibrant community of passionate developers doing cool things with their OS.

                              I loved macOS when I started using it in 2007. It had a great ecosystem of independent developers. It was a great, reliable OS that was literally years ahead of the competition. Now the hardware is awesome, but Apple has destroyed much of the indie ecosystem with the app store. Everyone is moving to subscriptions, because the App Store makes it hard to sell upgrades. In the meanwhile macOS itself became increasingly buggy and had questionable changes. Also, it’s barely an open platform.

                              But people will buy Macs anyway, because Apple made everyone believe that M1 is years ahead of the competition, while in practice Ryzen APUs are not far behind.

                              I got rid of my last Mac 2 months ago and there’s little chance I will buy a Mac again.

                              1. 4

                                But people will buy Macs anyway, because Apple made everyone believe that M1 is years ahead of the competition, while in practice Ryzen APUs are not far behind.

                                Yeah, no. My MBA is faster than my Ryzen-based gaming desktop (at CPU; before I upgraded the GPU, it was even comparable graphics-wise), even in emulation. AnandTech doesn’t bullshit, and they would tell you as much.

                                macOS is a bit of a mess (I’m eager to see how the porting of alternative OSes goes), I agree. People porting things themselves is just what naturally happens: if someone has a system, requires an application, and can do the porting, it’s likely that someone meeting those criteria will.

                                1. 2

                                  That sounds like you’re not doing an apples-to-apples comparison of the latest gen vs the latest gen. Anandtech benchmarks show that the latest-gen Ryzen desktop CPUs are close to (or at the most expensive tier, comfortably exceed) the M1. And the only gaming GPUs that are even in the same ballpark as the M1 are multiple generations old; the M1 is equivalent to the lower end of the Nvidia 10-series. A 2070 Super comfortably leaves an M1 in the dust, and that’s not even considering the new 30-series.

                                  It’s still impressive that a laptop CPU is keeping up with desktop CPUs, given the different thermal profiles. But it’s not a blowout by any means compared to AMD, and it loses to the highest-end AMD chips. And the M1 GPU is just fine; it’s not even particularly great.

                                  1. 1

                                    Yeah, no. My MBA is faster than my Ryzen based gaming desktop

                                    The Ryzen 3700X desktop that I built last year cost about the same as the Mac Mini M1 with 16GB RAM and a 256GB SSD. However, it has twice the RAM and a four-times-larger SSD. The 3700X has slightly worse single-core performance and slightly better multi-core performance, and it was released in 2019. The GPU in that machine is also a fair bit faster than the M1 GPU.

                                    My laptop with a Ryzen 7 Pro is a bit slower than the M1, but not by a large margin. But that is a last generation Renoir APU. The new Cezanne APUs are actually faster than the M1 (slightly lower single core performance, better multi-core performance). But at the same price as the M1, the laptop has 16GB RAM, which I extended to 32 GB, which makes it faster than the M1 MacBook in practice for my work.

                                    I had an M1 MacBook Air for a week, but I returned it because I was not impressed. Sure, the M1 is really impressive compared to the Intel CPUs in old MacBooks (which also had terrible, loud cooling), but as I said, modern AMD Ryzen 2/3 CPUs and the M1 pretty much go toe to toe. Besides that, the M1 MacBook Air felt like yet another step in the direction of becoming an appliance; slowly, more and more features of ‘general purpose computing’ are taken away. It’s not a future that I am interested in. So, that was the end of my 13-year Mac run.

                                    Maybe they will be able to beat AMD by a wide margin with a successor with more performance cores. But AMD also isn’t resting on their laurels. We’ll see.

                              1. 6

                                A few weeks ago, a different rant claimed that no progress had been made since 1996, and that there was a clear “wall” hit in 1996. This article points out many innovations that have happened after 1978, although it similarly ignores and minimizes anything after its own particular choice of date.

                                I don’t think any of these arbitrary years are accurate, and are the software equivalent of “I liked the band before it was cool.” It’s hard to imagine someone being given a working iPhone or driving a Tesla in 1978 and being told that this was modern 1978 technology and not being astounded.

                                If the argument is that everything after [date] is merely expanding upon existing theory and thus doesn’t count, everything after the lambda calculus was developed is baloney and all innovation stopped in 1936.

                                1. 4

                                  People were in fact driving electric cars in 1978 – the Whole Earth Catalogue used to sell kits to convert your VW bug to electric. The working iPhone example is sort of egregious because, famously, Alan Kay drew his original dynabook sketch immediately after seeing a demo of rudimentary plasma displays that the PLATO people were developing.

                                  Things in our environment do not look like they were developed in the 70s, but they look and operate more or less exactly how a relatively uncreative but extremely plugged-in fan of computer technology would expect them to work in 2021 – in other words, merely projecting that incremental progress would happen on already-existing tech.

                                  The groundbreaking tech here isn’t the iPhone. The groundbreaking tech is the thing that opened up new intellectual vistas for Kay – the flat plasma display (which at the time was capable of showing two letters, in chunky monochrome, but by the end of the 70s powered a portable touchscreen terminal with multi-language support). Anybody who knew that plasma display technology was being developed in the 70s could reasonably expect flat pocket-sized touch-screen computers communicating with each other wirelessly in a few decades, and indeed they were on the market in the early 90s.

                                  There is nothing astounding about incremental progress. The same tendencies that produce incremental progress will, if allowed to, also produce groundbreaking new tech. The thing is, for that to happen, you need to pursue the possible ramifications of unexpected elements – to see violations of your mental model as opportunities to develop brand new things instead of bugs in whatever you thought you were making. And you can’t do that if you’re in the middle of a sprint and contracted to deliver a working product in six weeks.

                                  1. 4

                                    The groundbreaking tech here isn’t the iPhone. The groundbreaking tech is the thing that opened up new intellectual vistas for Kay – the flat plasma display (which at the time was capable of showing two letters, in chunky monochrome, but by the end of the 70s powered a portable touchscreen terminal with multi-language support). Anybody who knew that plasma display technology was being developed in the 70s could reasonably expect flat pocket-sized touch-screen computers communicating with each other wirelessly in a few decades, and indeed they were on the market in the early 90s.

                                    I don’t agree. There was a huge jump between resistive and capacitive touch screens. There were a lot of touchscreen phones a decade or so before the iPhone, but they all required a stylus to operate and so were limited to the kind of interactions that you do with a mouse. The phones that appeared at the same time as the iPhone (the iPhone wasn’t quite the first, just the most successful) allowed you to interact with your fingers and could track multiple touches, giving far better interaction models than were previously possible.

                                    There were also a lot of incremental improvements necessary to get there. TFTs had to get lower power (plasma doesn’t scale anywhere near that low - I had a 386 laptop with a plasma screen and it had a battery that lasted just long enough to get between mains sockets). CPUs and DRAM had to increase in performance / density and drop in power by a few orders of magnitude. Batteries needed to improve significantly in storage density and number of charge cycles.

                                    1. 2

                                      People were in fact driving electric cars in 1978

                                      I’m aware :) But they definitely weren’t driving electric cars that talked to you, knew effectively every map in the world, told you when to make turns, listened to voice commands, streamed music and karaoke from the Internet, and could drive themselves relatively unaided on highways.

                                      They also weren’t driving electric cars that could do 0-60 in two seconds or had hundreds of miles of range. Lithium-ion batteries weren’t commercialized until the early nineties.

                                      The working iPhone example is sort of egregious because, famously, Alan Kay drew his original dynabook sketch immediately after seeing a demo of rudimentary plasma displays

                                      Sure, but the Dynabook may as well have been science fiction at the time. It’s still science fiction: Alan Kay’s vision was for it to have near-infinite battery life. Getting from science fiction to working technology takes, and took, enormous technical innovation. If you handed Alan Kay a working iPhone the minute after he saw the first barely-working two-letter plasma display, he would have been flabbergasted.

                                      1. 1

                                        Incremental improvement is useful. Sure. I said as much in the essay.

                                        But incremental improvement is not the same thing as a big advance, and cannot be substituted for one. In order to have big shifts, you need to do a lot of apparently-useless research; if you don’t, then you will only ever invent the things you intended to invent in the first place & will never expand the scope of the imaginable.

                                  1. 32

                                    Well written article and an enjoyable read. The only part I disagree with is your stance on “early exit”; it turns out that this is the tiny hill I’m willing to die on, one I was unaware I cared about until now. I think this is primarily because, if I read code that has a return within an if block, anything after that block is an inferred else.

                                    I could become pedantic and retort that all control flow is a goto at the end of the day, but I won’t, because that would be silly and this was a genuinely good read. Thank you for sharing.

                                    1. 17

                                      I also was surprised how much I disagreed about the early exit.

                                      When I originally learned programming, I was told that multiple returns were bad and you should restructure your code to only return once, at the end. After learning go (which has a strong culture of return early and use returns for error handling), I tend to favor early returns even in languages that don’t use returns for error handling.

                                      The thought process I’ve adopted is that any if/return pair is adding invariants, e.g. if I’m at this point in the program, these previous statements must be true (or the function would have exited early). If you squint at it, you’re partway to pre/post-conditions like in Eiffel/Design by Contract.
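
                                      A minimal sketch of that idea in Go (hypothetical names; deliver stands in for the function’s real work):

                                      package main

                                      import "errors"

                                      // deliver is a hypothetical stand-in for the real work.
                                      func deliver(name, msg string) error { return nil }

                                      // send: each if/return guard establishes an invariant that all
                                      // the code below it can rely on.
                                      func send(name, msg string) error {
                                        if name == "" {
                                          return errors.New("missing recipient")
                                        }
                                        // invariant from here on: name != ""
                                        if msg == "" {
                                          return errors.New("empty message")
                                        }
                                        // invariant from here on: name != "" && msg != ""
                                        return deliver(name, msg)
                                      }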

                                      1. 3

                                        When I originally learned programming, I was told that multiple returns were bad and you should restructure your code to only return once, at the end.

                                        Ah, so functional programming </sarcasm>

                                        1. 2

                                          Pure functional programming is all about early returns, if anything. There’s just no return keyword. When everything is an expression, you can’t store now and return later.

                                          1. 1

                                            In a pure functional language the whole function is a single expression – I fail to see how it is “all about early returns”? Certainly you can simulate an imperative return or raise using various tricks, but ultimately there is always just one expression and that is what gets returned; anything else is syntactic sugar.

                                            1. 3

                                              Conditionals and pattern matching are expressions. This means you’d have to put effort to avoid an early return.

                                              Consider a function that converts boolean values to string in the canonical structured style with a single return.

                                              function bool_to_string(x) {
                                                var res
                                                if(x) { res = "true" } else { res = "false" }
                                                return res
                                              }
                                              

                                              In a functional style it’s most naturally written like this:

                                              bool_to_string x = if x then "true" else "false"
                                              

                                              We could put in extra effort to store the return value of if x then "true" else "false", but it looks like obviously useless effort:

                                              bool_to_string x =
                                                let res = if x then "true" else "false" in
                                                res
                                              
                                        2. 3

                                        I had a similar experience, going from “only one return” to “return early”. And I think it depends on the domain and language you are using, too.

                                        One project I worked on was initially written in C, then moved to C++, and was started by people who mostly wrote Java. There is a common pattern in C to use a goto near the return statement to free memory when you exit (think of it as a defer in Go but written by hand), and since gotos are the hallmark of bad programmers and returning early was not an option, the devs came up with an ingenious pattern:

                                          int result = -1;
                                          do {
                                            if (!condition) {
                                              break;
                                            }
                                            result = 1;
                                        } while (false);
                                          
                                          return result;
                                          

                                        it took a while to decipher why it was there, but it then became commonplace, because promotions were heavily influenced by your “coding capability”.
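
                                        For contrast, a minimal sketch of the Go defer that this pattern emulates by hand: the cleanup runs on every exit path, so early returns stay safe.

                                        package main

                                        import "os"

                                        // load opens a file and relies on defer for cleanup, which is
                                        // what the do/while-break dance above imitates in C.
                                        func load(path string) error {
                                          f, err := os.Open(path)
                                          if err != nil {
                                            return err // nothing to clean up yet
                                          }
                                          defer f.Close() // runs on every return below this line
                                          // ... use f ...
                                          return nil
                                        }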

                                        3. 2

                                        I think early exit is OK in the sense of guards against invalid inputs, basically, when your language lacks the ability to express them in other ways; you know, C. (Probably the same for freeing resources at the end, since you don’t have finally or defer.)

                                          1. 2

                                            Strongly agree with you.

                                            I first came upon early returns as a recommended style in Go, under the name of “line of sight” (the article makes a good case for its benefits), and have since become a huge advocate, and use it in other languages as well.

                                            Knowing all your if-indented code represents a “non-standard” case makes the code really easy to scan and interpret. Aside from leading to flatter code, it leads to semantically uniform code.

                                            1. 1

                                              +1 on early exits. They’re clarifying: the first thing I do in many functions is check various corner cases and then early exit from them, freeing up my thought process for everything afterwards. Knowing that those corner cases no longer apply means I don’t have to worry that someone — and “someone” might even just be me, in the future — adds some code that doesn’t handle the corner cases well after the terminating } of the else block (or de-indent, or insert syntax here for closing an else block).

                                            1. 11

                                              This seems like a kind of arbitrary list that skips, among other things, iOS and Android, and that compares a list of technologies invented over ~40 years to a list that’s in its twenties.

                                              1. 7

                                                I noticed that Go was mentioned as a post-1996 technology but Rust was not, which strikes me as rather a big oversight! Granted at least some of the innovations that Rust made available to mainstream programmers predate 1996, but not all of them, and in any case productizing and making existing theoretical innovations mainstream is valuable work in and of itself.

                                                In general I agree that this is a pretty arbitrary list of computing-related technologies and there doesn’t seem to be anything special about the 1996 date. I don’t think this essay makes a good case that there is a great software stagnation to begin with (and for that matter, I happened to be reading this twitter thread earlier today, arguing that the broader great stagnation this essay alludes to is itself fake, an artifact of the same sort of refusal to consider as relevant all the ways in which technology has improved in the recent past).

                                                1. 2

                                                  It’s also worth noting that Go is the third or fourth attempt at similar ideas by an overlapping set of authors.

                                                  1. 1

                                                    The author may have edited their post since you read it. Rust is there now in the post-1996 list.

                                                  2. 3

                                                    I find this kind of casual dismissal that constantly gets voted up on this site really disappointing.

                                                    1. 2

                                                      It’s unclear to me how adding iOS or Android to the list would make much of a change to the author’s point.

                                                      1. 3

                                                        Considering “Windows” was on the list of pre-1996 tech, I think iOS/Android/touch-based interfaces in general would be a pretty fair inclusion of post-1996 tech. My point is that this seems like an arbitrary grab bag of things to include vs not include, and 1996 seems like a pretty arbitrary dividing line.

                                                        1. 2

                                                          I don’t think the list of specific technologies has much of anything to do with the point of how the technologies themselves illustrate bigger ideas. The article is interesting because it makes this point, although I would have much rather seen a deeper dive into the topic since it would have made the point more strongly.

                                                          What I get from it, and having followed the topic for a while, is that around 1996 it became feasible to implement many of the big ideas dreamed up before due to advancements in hardware. Touch-based interfaces, for example, had been tried in the 60s but couldn’t actually be consumer devices. When you can’t actually build your ideas (except in very small instances) you start to build on the idea itself and not the implementation. This frees you from worrying about the details you can’t foresee anyway.

                                                          Ideas freed from implementation and maintenance breed more ideas. So there were a lot of them from the 60s into the 80s. Once home computing really took off with the Internet and hardware got pretty fast and cheap, the burden of actually rolling out some of these ideas caught up with them. Are they cool and useful? In many cases, yes. They also come with side effects and details not really foreseen, which is expected. Keeping them going is also a lot work.

                                                          So maybe this is why it feels like more radical ideas (like, say, not equating programming environments with terminals) don’t get a lot of attention or work. But if you study the ideas implemented in the last 25 years, you see much less ambition than you do before that.

                                                          1. 2

                                                            I think the Twitter thread @Hail_Spacecake posted pretty much sums up my reaction to this idea.

                                                        2. 2

                                                          I think a lot of people are getting woosh’d by it. I get the impression he’s talking from a CS perspective. No new paradigms.

                                                          1. 3

                                                            Most innovation in constraint programming languages and all innovation in SMT are after 1996. By his own standards, he should be counting things like peer-to-peer and graph databases. What else? Quantum computing. Hololens. Zig. Unison.

                                                            1. 2

                                                              Jonathan is a really jaded guy with interesting research ideas. This post got me thinking a lot but I do wish that he would write a more thorough exploration of his point. I think he is really only getting at programming environments and concepts (it’s his focus) but listing the technologies isn’t the best way to get that across. I doubt he sees SMT solvers or quantum computing as something that is particularly innovative with respect to making programming easier and accessible. Unfortunately that is only (sort of) clear from his “human programming” remark.

                                                          2. 2

                                                        It would strengthen it. PDAs, with touchscreens, handwriting recognition (whatever happened to that?), etc., were around in the 90s too.

                                                            Speaking as someone who only reluctantly gave up his Palm Pilot and Treo, they were in some ways superior, too. Much more obsessive focus on UI latency - especially on Palm - and far less fragile. I can’t remember ever breaking a Palm device, and I have destroyed countless glass screened smartphones.

                                                            1. 3

                                                              The Palm Pilot launched in 1996, the year the author claims software “stalled.” It was also created by a startup, which the article blames as the reason for the stall: “There is no room for technology invention in startups.”

                                                              They also didn’t use touch UIs, they used styluses: no gestures, no multitouch. They weren’t networked, at least not in 1996. They didn’t have cameras (and good digital cameras didn’t exist, and the ML techniques that phones use now to take good pictures hadn’t even been conceived of yet). They couldn’t play music, or videos. Everything was stored in plaintext, rather than encrypted. The “stall” argument, as if everything stopped advancing in 1996, just doesn’t really hold much water to me.

                                                              1. 1

                                                                The Palm is basically a simplified version of what already existed at the time, to make it more feasible to implement properly.

                                                        1. 26

                                                          Pro tip: this applies to you if you’re a business too. Kubernetes is a problem as much as it is a solution.

                                                            Uptime is achieved by having more understanding of and control over the deployment environment, but Kubernetes takes that away. It attracts middle managers and CTOs because it seems like a silver bullet that doesn’t require getting your hands dirty, but in reality it introduces so much chaos and indirection into your stack that you end up worse off than before, and all the while you’re emptying your pockets for the experience.

                                                          Just run your shit on a computer like normal, it’ll work fine.

                                                          1. 9

                                                            This is true, but let’s not forget that Kubernetes also has some benefits.

                                                            Self-healing. That’s what I miss the most with a pure NixOS deployment. If the VM goes down, it requires manual intervention to be restored. I haven’t seen good solutions proposed for that yet. Maybe uptimerobot triggering the CI when the host goes down is enough. Then the CI can run terraform apply or some other provisioning script.

                                                            Zero-downtime deployment. This is not super necessary for personal infrastructures but is quite important for production environments.

                                                            Per pod IP. It’s quite nice not to have to worry about port clashes between services. I think this can be solved by using IPv6 as each host automatically gets a range of IPs to play with.

                                                            Auto-scaling. Again not super necessary for personal infrastructure but it’s nice to be able to scale beyond one host, and not to have to worry on which host one service lives.
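
                                                              A minimal sketch of how the first two of those map onto a Kubernetes Deployment (the name and image are hypothetical):

                                                              apiVersion: apps/v1
                                                              kind: Deployment
                                                              metadata:
                                                                name: myapp
                                                              spec:
                                                                replicas: 3 # self-healing: lost pods are recreated
                                                                strategy:
                                                                  type: RollingUpdate
                                                                  rollingUpdate:
                                                                    maxUnavailable: 0 # zero-downtime: old pods stay up
                                                                    maxSurge: 1       # until replacements are ready
                                                                selector:
                                                                  matchLabels:
                                                                    app: myapp
                                                                template:
                                                                  metadata:
                                                                    labels:
                                                                      app: myapp
                                                                  spec:
                                                                    containers:
                                                                      - name: myapp
                                                                        image: example/myapp:1.0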

                                                            1. 6

                                                                Has anyone tried using Nomad for personal projects? It has self-healing, and with the raw exec driver one can run executables directly on NixOS without needing any containers. I have not tried it myself (yet), but I would be keen to hear about others’ experiences.

                                                              1. 3

                                                                  I am experimenting with the HashiCorp stack while off for the holidays. I just brought up a Vagrant box (1GB RAM) with Consul, Docker, and Nomad running (no jobs yet), and the overhead looks okay:

                                                                              total        used        free      shared  buff/cache   available
                                                                Mem:          981Mi       225Mi       132Mi       0.0Ki       622Mi       604Mi
                                                                Swap:         1.9Gi       7.0Mi       1.9Gi
                                                                

                                                                  but probably too high to fit Postgres, Traefik or Fabio, and a Rails app into it as well; 2GB will probably be lots (I am kind of cheap, so the fewer resources the better).

                                                                  I have a side project running in ‘prod’ using Docker (for Postgres and my Rails app) along with Caddy running as a systemd service, but it’s kind of a one-off machine, so I’d like to move towards something like Terraform (next up on the list to get running) for bring-up, and Nomad for the reasons you mention.

                                                                  But… the question that does keep running through the back of my head: do I even need Nomad/Docker? For a prod env? Yes, it’s probably worth the extra complexity and overhead, but for personal stuff? Probably not… Netlify, Heroku, etc. are pretty easy and offer free tiers.

                                                                1. 1

                                                                  I was thinking about doing this but I haven’t done due diligence on it yet. Mostly because I only have 2 droplets right now and nobody depends on what’s running on them.

                                                                2. 1

                                                                  If you’re willing to go the Amazon route, EC2 has offered most of that for years. Rather than using the container as an abstraction, treat the VM as a container: run one main process per VM. And you then get autoscaling, zero downtime deploys, self-healing, and per-VM IPs.

                                                                  TBH I think K8s is a step backwards for most orgs compared to just using cloud VMs, assuming you’re also running K8s in a cloud environment.

                                                                  1. 2

                                                                    That’s a good point. And if you don’t care about uptime too much, autoscaling + spot instances is a pretty good fit.

                                                                    The main downside is that a load balancer is already ~$15/month if I remember correctly. And the cost can explode quite quickly on AWS. It takes quite a bit of planning and effort to keep the cost super low.

                                                                3. 5

                                                                  IMO, Kubernetes’ main advantage isn’t in that it “manages services”. From that POV, everything you say is 100% spot-on. It simply moves complexity around, rather than reducing it.

                                                                  The reason I like Kubernetes is something entirely different: It more or less forces a new, more robust application design.

                                                                  Of course, many people try to shoe-horn their legacy applications into Kubernetes (the author running git in K8s appears to be one example), and this just adds more pain.

                                                                  Use K8s for the right reasons, and for the right applications, and I think it’s appropriate. It gets a lot of negative press for people who try to use it for “everything”, and wonder why it’s not the panacea they were expecting.

                                                                  1. 5

                                                                    I disagree that k8s forces more robust application design; fewer moving parts are usually a strong indicator of reliability.

                                                                  Additionally, I think k8s removes some of the pain of microservices (in the same way that a local anesthetic makes it easier to keep your hand in boiling water) that would normally help people reconsider their use.

                                                                  2. 5

                                                                And overhead. Those monster YAML files are absurd on so many levels.

                                                                    1. 2

                                                                      Just run your shit on a computer like normal, it’ll work fine.

                                                                      I think that’s an over-simplification. @zimbatm’s comment makes good points about self-healing and zero-downtime deployment. True, Kubernetes isn’t necessary for those things; an EC2 auto-scaling group would be another option. But one does need something more than just running a service on a single, fixed computer.

                                                                      1. 3

                                                                        But one does need something more than just running a service on a single, fixed computer.

                                                                        I respectfully disagree…worked at a place which made millions over a few years with a single comically overloaded DO droplet.

                                                                        We eventually made it a little happier by moving to hosted services for Mongo and giving it a slightly beefier machine, but otherwise it was fine.

                                                                        The single machine design made things a lot easier to reason about, fix, and made CI/CD simpler to implement as well.

                                                                        Servers with the right provider can stay up pretty well.

                                                                        1. 2

                                                                          Servers with the right provider can stay up pretty well.

                                                                      I was one of the victims of the DDoS that hit Linode on Christmas Day (edit: in 2015; didn’t mean to omit that). DO and Vultr haven’t had perfect uptime either. So I’d rather not rely on single, static server deployments any more than I have to.

                                                                          1. 2

                                                                            I don’t see how your situation/solution negates the statement.

                                                                            You’ve simply traded one “something” (Kubernetes) with another (“the right provider”, and all that entails–probably redundant power supplies, network connections, hot-swappable hard drives, etc, etc).

                                                                            The complexity still exists, just at a different layer of abstraction. I’ll grant you that it does make reasoning about the application simpler, but it makes reasoning about the hardware platform, and peripheral concerns, much more complex. Of course that can be appropriate, but it isn’t always.

                                                                            I’m also unsure how a company’s profit margin figures into a discussion about service architectures…

                                                                            1. 5

                                                                              I’m also unsure how a company’s profit margin figures into a discussion about service architectures…

                                                                              There is no engineering without dollar signs in the equation. The only reason we’re being paid to play with shiny computers is to deliver business value–and while I’m sure a lot of “engineers” are happy to ignore the profit-motive of their host, it is very unwise to do so.

                                                                              I’ll grant you that it does make reasoning about the application simpler, but it makes reasoning about the hardware platform, and peripheral concerns, much more complex.

                                                                              That engineering still has to be done, if you’re going to do it at all. If you decide to reason about it, do you want to be able to shell into a box and lay hands on it immediately, or hope that your k8s setup hasn’t lost its damn mind in addition to whatever could be wrong with the app?

                                                                              You’ve simply traded one “something” (Kubernetes) for another (“the right provider”, and all that entails–probably redundant power supplies, network connections, hot-swappable hard drives, etc, etc).

                                                                              The complexity of picking which hosting provider you want to use (ignoring colocation issues) is orders of magnitude less than learning and handling k8s. Hosting is basically a commodity at this point, and barring the occasional amazingly stupid thing among the common names there’s a baseline of competency you can count on.

                                                                              People have been sold this idea that hosting a simple server means racking it and all the craziness of datacenters and whatnot, when really it’s just a ten spot and an ssh key and you’re like 50% of the way there. It isn’t rocket surgery.

                                                                            2. 1

                                                                              can you share more details about this?

                                                                              I’ve always been impressed by teams/companies maintaining a very small fleet of servers but I’ve never heard of any successful company running a single VM.

                                                                              1. 4

                                                                                It was a boring little Ubuntu server if I recall correctly, I think like a 40USD general purpose instance. The second team had hacked together an impressive if somewhat janky system using the BEAM ecosystem, the first team had built the original platform in Meteor, both ran on the same box along with Mongo and supporting software. The system held under load (mostly, more about that in a second), and worked fine for its role in e-commerce stuff. S3 was used (as one does), and eventually as I said we moved to hosted options for database stuff…things that are worth paying for. Cloudflare for static assets, eventually.

                                                                                What was the business environment?

                                                                                Second CTO and fourth engineering team (when I was hired) had the mandate to ship some features and put out a bunch of fires. Third CTO and fifth engineering team (who were an amazing bunch and we’re still tight) shifted more to features and cleaning up technical debt. CEO (who grudgingly has my respect after other stupid things I’ve seen in other orgs) was very stingy about money, but also paid well. We were smart and well-compensated (well, basically) developers told to make do with little operational budget, and while the poor little server was pegged in the red for most of its brutish life, it wasn’t drowned in bullshit. CEO kept us super lean and focused on making the money funnel happy, and didn’t give a shit about technical features unless there was a dollar amount attached. This initially was vexing, but after a while the wisdom of the approach became apparent: we weathered changes in market conditions better without a bunch of outstanding bills, we had more independence from investors (for better or worse), and honestly the work was just a hell of a lot more interesting due in no small part to the limitations we worked under. This is key.

                                                                                What problems did we have?

                                                                                Support could be annoying, and I learned a lot about monitoring on that job during a week where the third CTO showed me how to set up Datadog and similar tooling to help figure out why we had intermittent outages–the eventual solution was a cronjob to kill off a bloated process before it became too poorly behaved and brought down the box. The thing is, though, we had a good enough customer success team that I don’t think we even lost that much revenue, possibly none. That week did literally have a day or two of us watching graphs and manually kicking over stuff just in time, which was a bit stressful, but I’d take a month of that over sitting in meetings and fighting matrix management to get something deployed with Jenkins onto a half-baked k8s platform and fighting with Prometheus and Grafana and all that other bullshit…as a purely random example, of course. >:|

                                                                                The sore spots we had were basically just solved by moving particular resource-hungry things (database mainly) to hosting–the real value of which was having nice tooling around backups and monitoring, and which moving to k8s or similar wouldn’t have helped with. And again, it was only after a few years of profitable growth that traffic hit a point where that migration even seemed reasonable.

                                                                                I think we eventually moved off of the droplet and onto an Amazon EC2 instance to make storage tweaks easier, but we weren’t using them in any way different than we’d use any other barebones hosting provider.

                                                                                1. 4

                                                                                  Did that one instance ever go completely down (becoming unreachable due to a networking issue also counts), either due to an unforeseen problem or scheduled maintenance by the hosting provider? If so, did the company have a procedure for bringing a replacement online in a timely fashion? If not, then I’d say you all just got very lucky.

                                                                                  1. 1

                                                                                    Yes, and yes–the restart procedure became a lot simpler once we’d switched over to EC2 and had a hot spare available…but again, nothing terribly complicated and we had runbooks for everything because of the team dynamics (notice the five generations of engineering teams over the course of about as many years?). As a bonus, in the final generation I was around for we were able to hire a bunch of juniors and actually teach them enough to level them up.

                                                                                    About this “got very lucky” part…

                                                                                    I’ve worked on systems that had to have all of the 9s (healthcare). I’ve worked on systems, like this, that frankly had a pretty normal (9-5, M-F) operating window. Most developers I know are a little too precious about downtime–nobody’s gonna die if they can’t get to their stupid online app. Most customers–if you’re delivering value at a price point they need and you aren’t specifically competing on reliability–will put up with inconvenience if your customer success people treat them well.

                                                                                    Everybody is scared that their stupid Uber-for-birdwatching or whatever app might be down for a whole hour once a month. Who the fuck cares? Most of these apps aren’t even monetizing their users properly (notice I didn’t say customers), so the odd duck that gets left in the lurch gets a hug and a coupon and you know what–the world keeps turning!

                                                                                    Ours is meant to be a boring profession with simple tools and innovation tokens spent wisely on real business problems–and if there aren’t real business problems, they should be spent making developers’ lives easier and lowering business costs. I have yet to see k8s deliver on any of this for systems that don’t require lots of servers.

                                                                                    (Oh, and speaking of…is it cheaper to fuck around with k8s and all of that, or just to pay Heroku to do it all for you? People are positively baffling in what they decide to spend money on.)

                                                                                  2. 1

                                                                                    eventual solution was a cronjob to kill off a bloated process before it became too poorly behaved and brought down the box … That week did literally have a day or two of us watching graphs and manually kicking over stuff just in time, which was a bit stressful,…

                                                                                    It sounds like you were acting like human OOM killers, or more generally speaking manual resource limiters of those badly-behaved processes. Would it be fair to say that sort of thing would be done today by systemd through its cgroups resource management functionality?

                                                                                    1. 1

                                                                                      We probably could’ve solved it through systemd with Limit* settings–we had that available at the time. For us, we had some other things (features on fire, some other stuff) that took priority, so just leaving a dashboard open and checking it every hour or two wasn’t too bad until somebody had the spare cycles to do the full fix.
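                                                                                      For anyone hitting the same problem today, a minimal sketch of the systemd route (the unit name and numbers are invented for illustration):

                                                                                          # /etc/systemd/system/app.service (hypothetical unit)
                                                                                          [Service]
                                                                                          ExecStart=/usr/local/bin/app
                                                                                          Restart=on-failure
                                                                                          # cgroups-based resource management, the tidy version of a cron OOM killer:
                                                                                          MemoryMax=2G       # hard cap; the kernel kills the unit's processes past this
                                                                                          CPUQuota=150%      # at most 1.5 cores
                                                                                          # the classic setrlimit-style Limit* settings still exist alongside these:
                                                                                          LimitNOFILE=65536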

                                                                          1. 3

                                                                            This is my own bias speaking, but when I see “AI,” I usually expect a morass of machine learning. With the problem statement — “make a convenient checkout flow” — I almost didn’t read the rest of the article, since using ML for that seemed like trying to swat a fly with a nuclear submarine.

                                                                            But it’s not! It’s using a clever, elegant old technique that I was only familiar with being used in videogame programming, and that has nothing to do with ML. I was surprised and delighted.

                                                                            1. 5

                                                                              Today, the heart of operating system security on most PCs lives in a chip separate from the CPU, called the Trusted Platform Module […] The Pluton design removes the potential for that communication channel to be attacked by building security directly into the CPU. Windows PCs using the Pluton architecture will first emulate a TPM

                                                                              Waaaait a second. The TPM on most PCs in the last few years is not a discrete chip. fTPM is much more common at this point.

                                                                              Now, it does seem like they want a more “hard” TPM rather than just in-firmware:

                                                                              technology that helps ensure keys are never exposed outside of the protected hardware, even to the Pluton firmware itself

                                                                              ..cool? I guess?


                                                                              Amazing yet suspicious that all the CPU vendors are on board with just adding a Microsoft-designed part into their chips.

                                                                              1. 2

                                                                                Amazing yet suspicious that all the CPU vendors are on board with just adding a Microsoft-designed part into their chips.

                                                                                Why is that? Windows is probably the target OS for a majority of CPUs marketed towards computers.

                                                                                1. 1

                                                                                  Yeah — AMD and Intel certainly aren’t pinning their hopes on macOS, at the very least.

                                                                                  Qualcomm probably cares more currently about Android, but sees this as a growth opportunity into the PC market.

                                                                              1. 4

                                                                                Mentions Facebook as Thrift’s only notable user, but certainly Twitter is another one. Not quite sure if Pinterest still has Thrift-speaking services, but they used to a few years ago.

                                                                                1. 4

                                                                                  Airbnb also uses Thrift. Claiming no one outside of FB uses Thrift seems pretty uninformed…

                                                                                  1. 2

                                                                                    Thanks to you both, added!

                                                                                  2. 1

                                                                                    My company has legacy services in Thrift. The main cons vs. gRPC/Protobuffers are:

                                                                                    • no streaming RPC; messages have to be read fully into memory on the server before processing can begin
                                                                                    • no authentication built into the protocol, no good way to do access control or access logging

                                                                                    The main advantage over Protobuffers is that in Thrift, exceptions are part of the typed RPC IDL, and then form part of the API. This is nicer than setting and checking gRPC context status codes and string details.
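                                                                                    For illustration, a rough sketch of what that looks like in Thrift IDL (all names here are invented):

                                                                                        // Exceptions are declared, versioned types just like structs...
                                                                                        exception InsufficientFunds {
                                                                                          1: string message,
                                                                                          2: i64 shortfallCents,
                                                                                        }

                                                                                        struct ChargeRequest {
                                                                                          1: string accountId,
                                                                                          2: i64 amountCents,
                                                                                        }

                                                                                        struct Receipt {
                                                                                          1: string receiptId,
                                                                                        }

                                                                                        // ...and they are part of the method signature, so generated client code
                                                                                        // surfaces them as typed exceptions rather than status codes plus strings.
                                                                                        service PaymentService {
                                                                                          Receipt charge(1: ChargeRequest req) throws (1: InsufficientFunds insufficient),
                                                                                        }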

                                                                                    1. 2

                                                                                      no streaming RPC; messages have to be read fully into memory on the server before processing can begin

                                                                                      no authentication built into the protocol, no good way to do access control or access logging

                                                                                      The section about Thrift gets one major thing right: “Apache is the tragic junkyard of open source projects”.

                                                                                      Both of the issues you point out above are solved problems for Thrift at FB, either integral to the protocol (e.g. streaming) or via the environment (e.g. mutual TLS authentication for point-to-point and CATs for end-to-end, contextual authentication).

                                                                                  1. 4

                                                                                    Be careful: if you connect to public WiFi, turn off public access of your X session!

                                                                                    MS is apparently working on a fix so that you don’t need to do this (and is also working on a custom Wayland compositor that will integrate with Windows’ native shell, so long-term you won’t even need to run a separate X server). But for now it’s a bit of a risk, because you can forget to turn it off if/when you connect to public networks. Personally I stopped using an X server with WSL after they released WSL2 (since that’s when you started needing to enable public access) and switched to running everything in the Windows Terminal (which is excellent), because I don’t quite trust myself. That being said I’m fairly comfortable with running everything in terminals instead of using GUIs so YMMV on that front; ironically, the only thing I was using X for on Windows was running urxvt, and the Windows Terminal has gotten good enough that I don’t feel the need to do that anymore.

                                                                                    1. 1

                                                                                      Be careful: if you connect to public WiFi, turn off public access of your X session!

                                                                                      Yeah, that’s a fair point. My Windows computer is a desktop, though, so that’s not really a concern for me.

                                                                                      MS is apparently working on a fix so that you don’t need to do this (and is also working on a custom Wayland compositor that will integrate with Windows’ native shell, so long-term you won’t even need to run a separate X server).

                                                                                      I can only hope that’s true!

                                                                                      Personally I stopped using an X server with WSL after they released WSL2 (since that’s when you started needing to enable public access) and switched to running everything in the Windows Terminal (which is excellent), because I don’t quite trust myself.

                                                                                      I’ve considered this myself, but when I saw how good Emacs looked over X410 I reconsidered. Still, I have to say I’m impressed with the progress MS have made with respect to the Terminal - now all it needs is the ability to use it in dropdown mode bound to a global hotkey. :-)

                                                                                      1. 2

                                                                                        Ah, yeah if you’re on a desktop then it’s not much of a concern. I use a Surface Book laptop, so I was more nervous.

                                                                                        I am pretty stunned at how much progress has happened on WSL2 overall. Really never expected to go back to Windows…

                                                                                        I can only hope that’s true!

                                                                                        It is! Here’s a blog post from two days ago where MS says that Insider builds should get access within “the next couple of months” — https://devblogs.microsoft.com/commandline/whats-new-in-the-windows-subsystem-for-linux-september-2020/#gui-apps

                                                                                        1. 1

                                                                                          Thanks for sharing!

                                                                                          I am pretty stunned at how much progress has happened on WSL2 overall. Really never expected to go back to Windows…

                                                                                          Same here!

                                                                                    1. 2

                                                                                      when the user is resizing the window […] the app, which gets a notification of the size change […] the window manager, which is tasked with rendering the “chrome” around the window, drop shadows, and so on

                                                                                      This is why GTK/GNOME were 200% right to go all-in on client-side decorations and relegate server-side decorations to “legacy compatibility junk” status.

                                                                                      the compositor could be run to race the beam

                                                                                      Wouldn’t that just be Weston’s repaint scheduling algorithm that was even linked to in the article?

                                                                                      I’m not sure how literal “race the beam” would make sense in a compositor context. It’s not like the GPU gives feedback (and opportunity to stop everything) per scanline?

                                                                                      1. 6

                                                                                        This is why GTK/GNOME were 200% right to go all-in on client-side decorations and relegate server-side decorations to “legacy compatibility junk” status.

                                                                                        In short, just no. Regardless of their motivation, that specific case doesn’t factor in when you make that design decision. There is minor justification for the topbar contents, though it could still be solved better with a hybrid approach. Border etc. can be made trivially and predictably faster on the server side. The aversion in the GNOME case comes from having no practical experience with actual SSDs, just PTS (post-traumatic or presentation-time? stress) from the insanity that you have to do in X to achieve a somewhat similar effect.

                                                                                        At higher refresh rates and bitdepths, SSDs win out, as shadow + border can be done entirely in a shader on the GPU during final composition, while the shadow stage alone can cost milliseconds elsewhere in the chain. Heck, if the client stops decorating you can even compensate for the client not keeping up by keeping the decorations reacting smoothly and estimating the contents in the crop or expand area.

                                                                                        From the Wayland angle, the dance you have to do with synchronized subsurfaces to achieve the drag-resize effect with CSDs (which applies to clients with mixed-origin contents where the contents are already costly) is expensive, complicated, error-prone, and nobody has gotten it right and efficient for all invariants.

                                                                                        Wouldn’t that just be Weston’s repaint scheduling algorithm that was even linked to in the article?

                                                                                        That is specifically what I referred to in the post above as shooting yourself in the foot, and I have the same bullet holes from long ago. Look closely at the “for lols” case here for instance (where also, there are no resize sync issues with SSDs; it worked on a Raspberry Pi and that was 2013..) https://youtu.be/3O40cPUqLbU?t=94 - that overlay in the talespin nes emulation window…

                                                                                        1. The extra sleep to give the client some time to react only works for cheap clients; with the wrong fonts and higher-end densities today, even terminal emulators fail to pass as ‘cheap’.
                                                                                        2. The tactic blows up if both consumer and producer utilise the same strategy, as you then converge on jittering around the deadline.
                                                                                        3. Swap-with-tear-control sort-of avoids 2, but now frames are even less perfect.

                                                                                        Edit (missed this):

                                                                                        I’m not sure how literal “race the beam” would make sense in a compositor context. It’s not like the GPU gives feedback (and opportunity to stop everything) per scanline?

                                                                                        There is a fun book on Atari graphics called, fittingly, Racing the Beam that is recommended reading. Also, per-scanline effects were common in a bunch of graphics systems; see how the Amiga implemented its “drag statusbar to reveal client” effect for instance. Or even see it happen: https://www.youtube.com/watch?v=RagLKuQBlsw

                                                                                        Regardless, the timing constraints to actually update per scanline are quite lenient as long as you start at the right times (lines / refresh-rate); the contents just need to be more deterministic, not asynchronous lies as anything GPU-synthesised is. If it is just about composition, X can do it still. It’s when multiple producers work on the same buffer that you get a ticket to the tear fest. There are prototype branches in Arcan that do it for shm-like contents still, and it’ll come back for text-only clients specifically.

                                                                                        1. 2

                                                                                          SSDs wins out as shadow + border can be done entirely in shader on GPU during the final composition

                                                                                          Well it could also be done on GPU in the client, together with the rest of the window contents, as GTK4 does.

                                                                                          Yeah, yeah, current gtk3 apps use CPU and would benefit from compositor side shadows, but the big GTK4 transition is coming :D

                                                                                          clients with mixed-origin contents where the contents are already costly

                                                                                          Hmm, I don’t know any situation where CSD would be the only thing forcing the usage of subsurfaces.

                                                                                          Even in a simple video player, where you would be happy to just present a VAAPI-decoded surface and let the compositor decorate it… whoops, you wouldn’t want to leave the player without any UI controls, right? So you either have to overlay the UI with subsurfaces (and sync its resizing!) or composite it with the video yourself in GL. And then you handle CSD the same way.

                                                                                          The extra sleep to give the client some time to react only works for cheap clients

                                                                                          Depends on how much time in the frame budget you reserve for the compositor. A simple one like Sway should always render within 1-2ms – then “cheap” is “not pushing against the limit of the frame budget”. This is much harder for us in Wayfire since we like to wobble our windows and whatnot :)

                                                                                          High refresh rate monitors make this harder – 1ms out of 8 is a more significant chunk than out of 16 – but they also kinda reduce the whole need for this kind of trick since the latency between the frames themselves is lower.

                                                                                          not asynchronous lies as anything GPU synthesised is

                                                                                          But that’s what I’m talking about – the fact that everything is async GPU stuff these days!

                                                                                          1. 3

                                                                                            Well it could also be done on GPU in the client, together with the rest of the window contents, as GTK4 does.

                                                                                            Yeah, yeah, current gtk3 apps use CPU and would benefit from compositor side shadows, but the big GTK4 transition is coming :D

                                                                                            Except you are still wasting memory bandwidth on the extra buffer space, you are drawing shadows that can be occluded or clipped at composition, and you have to take care that the size of the shadow + border does not push the contents outside of tile boundaries. Not to mention (since we can talk wayland) the wl_region annotation for marking the region as translucent, forcing the compositor side to slice it up into smaller quads so you don’t draw the entire region with alpha blending state enabled. Then we come to the impact of all this on compression. Lastly, since visual flair boundaries should be pushed, the dream of raytraced 2D radiosity lighting dies…

                                                                                            Regardless, I will keep on doing this to gtk: https://videos.files.wordpress.com/vSNj5b5R/impostors_dvd.mp4 - though practically the only toolkit application I still use is binary ninja/ida and that’s thankfully Qt. Though having that also lets me do this https://gfycat.com/angelicjointamericanrobin so hey..

                                                                                            Hmm, I don’t know any situation where CSD would be the only thing forcing the usage of subsurfaces.

                                                                                            Even in a simple video player, where you would be happy to just present a VAAPI-decoded surface and let the compositor decorate it… whoops, you wouldn’t want to leave the player without any UI controls, right? So you either have to overlay the UI with subsurfaces (and sync its resizing!) or composite it with the video yourself in GL. And then you handle CSD the same way.

                                                                                            No, sure, you have things like the GTK3 abuse of that for their pseudo-popups as well. I have a beef with subsurfaces as the way out for CSDs specifically (well, and with how much complexity it adds to the wayland implementation itself), not for clipped embedding (though hey, subsurfaces aren’t supposed to be clipped according to spec..).

                                                                                            Depends on how much time in the frame budget you reserve for the compositor. A simple one like Sway should always render within 1-2ms – then “cheap” is “not pushing against the limit of the frame budget”. This is much harder for us in Wayfire since we like to wobble our windows and whatnot :)

                                                                                            I see your https://github.com/WayfireWM/wayfire/blob/master/plugins/wobbly/wobbly.cpp and raise with https://github.com/letoram/durden/blob/master/durden/tools/flair/cloth.lua#L17 - clothy windows are obviously superior to wobbly. https://videos.files.wordpress.com/zmiBKUyQ/snow_dvd.mp4 - also note how they behave differently based on whether they are decorated or not because hey, SSDs. But that was years ago; wait until you see the next thing I have in the pipeline (which relies even more on SSDs to even be functional)..

                                                                                            High refresh rate monitors make this harder – 1ms out of 8 is a more significant chunk than out of 16 – but they also kinda reduce the whole need for this kind of trick since the latency between the frames themselves is lower.

                                                                                            No, what I mean is specifically the quality of these operations, as the jitter becomes even more noticeable when animations go from smooth to blergh. It is more likely to happen in the drag-resize case rather than steady state, from the storm of reallocation, cache/mipmap invalidation, … There’s a lot more to that scenario when you factor in wayland but the wall of text is already there.

                                                                                            But that’s what I’m talking about – the fact that everything is async GPU stuff these days!

                                                                                            SVG and text rendering want a word; NV_Path_Trace won’t happen, remember? That the processing is async doesn’t mean that the rest of your display system is allowed to just throw its hands into the air; you need to get much more clever with scheduling to not lag further behind, and implicit sync has poisoned so much here. Fences should’ve been the norm 7+ years ago and they still aren’t generally possible.

                                                                                          2. 1

                                                                                            Does “racing the beam” work on non-CRT displays? My understanding of “racing the beam” is that you’d race a single, physical electron gun that was quickly firing at the screen back and forth in a series of lines (the “scanlines”). But LED and OLED displays, for example, don’t have the same notion of scanlines. Is there an equivalent deterministic race you can play with them? From my (limited) understanding of LED refresh tech it seems hard to pull off, especially since LED screens have all sorts of magic happening behind the scenes that varies by manufacturer, e.g. frame interpolation or black frame insertion, that could potentially mess with the timing… But I’m not a graphics engineer so I don’t know a whole lot about this topic.

                                                                                            1. 2

                                                                                              Not in its traditional form, no; you are racing the buffer that is being scanned out rather than the display itself, so it is more ‘drawing to the front buffer’ and hoping that whatever scanout engine reads it does so linearly (and some don’t; certain Samsung phones come to mind).

                                                                                              1. 2

                                                                                                Sort of, yes. Instead of racing the electron beam, you’re racing the HDMI/DisplayPort stream, rendering pixels just in time to send them down the wire.

                                                                                          1. 2

                                                                                            Check out all the great small apps like Little Snitch, Magnet, Flycut, Little Ipsum, etc. People have good lists online. Then mentally prepare yourself for your keyboard to break.

                                                                                            1. 1

                                                                                              The 2020 MBP should have the old, pre-butterfly keyboard switches again, which were very reliable for me back when I used them. Luckily the days of “prepare yourself for your keyboard to break” should be over!

                                                                                              1. 1

                                                                                                Good news

                                                                                                1. 1

                                                                                                  I’ve got a 2019, and they really did fix the keyboards. Feels much better than my pre-butterfly Mac as well. I’d call it a new generation rather than reviving the old gen.

                                                                                              1. 2

                                                                                                Going with Ruby for this, since it’s the most-OO language that I’m familiar with.

                                                                                                Like: the Ruby Pathname class. It’s just syntactically so much nicer to write path_a.join(path_b) than to awkwardly write out File.join(path_a, path_b) every time. Ditto with pretty much any operation you want to perform on paths (e.g. convert relative to absolute). Treating paths as distinct objects just ends up feeling more ergonomic than treating them as raw strings or bytes. It also helps that none of the Pathname methods mutate state; they all return new Pathnames rather than modifying in place.

                                                                                                Dislike: the File/FileUtils separation. It’s needlessly awkward; for example, you can create and delete files with File, but to move or copy a file you need to use FileUtils. There is duplicated functionality, e.g. chown existing in both. The split feels fairly meaningless and it’s annoying in practice.

                                                                                                1. 3

                                                                                                  path_a / path_b is even nicer and is portable too.

                                                                                                  1. 2

                                                                                                    If we’re doing Ruby, to_enum is weird and wonderful and fantastic.

                                                                                                    Bends one’s mind a little before you grok it… but once you do…

                                                                                                  1. 3

                                                                                                    I find the Windows Terminal pretty unusable because of the bug where it doesn’t support scroll events with trackpads; I have a Surface Book, so my “mouse” is just the built in trackpad. Sadly it’s tagged as the lowest possible priority to fix (as well as “Help Wanted,” which I assume means they hope an open source contributor will fix it instead of assigning a full-time MS employee), so I don’t have high hopes of being able to use the Windows Terminal any time soon.

                                                                                                    Seems nice for desktop users though; it’s quite fast.

                                                                                                    1. 5

                                                                                                      Do we have a lot of Lobsters on Windows? I keep debating doing a “Windows is Not Unix” guide for people who are moving to Windows + WSL2, but I keep convincing myself there’s no interest. The fact this is so high on the front page makes me wonder if I’m wrong.

                                                                                                      1. 4

                                                                                                        I use a Surface Book pretty regularly and run WSL2 on it. Although I also have a Linux desktop and a work-issued Macbook Pro.

                                                                                                        1. 2

                                                                                                          I use Windows at work.

                                                                                                          1. 2

                                                                                                            Ditto.

                                                                                                          2. 2

                                                                                                            I switched full-time to Windows about a year ago. I avoid WSL as much as possible, even going so far as to send PRs to open source projects that assume a Linux environment. I actually find it quite rewarding!

                                                                                                            1. 2

                                                                                                              I do that but for BSD instead of Windows ;)

                                                                                                            2. 2

                                                                                                              I’m interested. I use Linux as my primary desktop computing environment, but some things I do some of the time (e.g. building games for Unity) go better on Windows. And every time I try to adopt Windows for a few days, I’m a bit stymied. I feel like so many people use this, there must be something wrong with my workflow that’s making it feel so wrong to me.

                                                                                                              This article is helpful and kind of sniffs around the edges. But I’d really be interested to learn more about what a really “Windows Native” workflow looks like and what people who use that all the time consider appealing about it. So if you post it, I hope it gets some attention here. Because even if I won’t ever be a “Windows person” I think I’m making myself suffer needlessly when I do use it, and I’d like to be educated about how to be more effective on it.

                                                                                                              1. 1

                                                                                                                For what it’s worth, I think your experience is very common (which is why ‘Windows’ gets followed immediately by ‘WSL’.)

                                                                                                                As far as I can tell - and for no good reason - Linux users have a good command line and tend to use it, and Windows developers end up with a good IDE and tend to use it, and in both cases the thing not being used is neglected. So switching platforms is really about switching cultures and embracing a very different workflow. People going in both directions see the new platform as “inferior” because they’re judging the quality of tools used to implement their previous workflow.

                                                                                                                As someone who really loves the command line, Windows has often felt like it required me to “suffer needlessly.” Personally I ended up writing my own command processor to incorporate a few things that the Linux command line environment “got right” and it ended up being extended with many interactive things. I haven’t felt like I’m suffering or missing out by using Windows after that, although clearly there’s a personal bias since those tools are designed for my workflow. If you don’t like mine, then it pays to learn Powershell, which is also a big cultural adjustment since the basic idioms are very different to textual Posix shells.

                                                                                                                To me the most “appealing” part of being a Windows developer is WinDbg, which also has a steep learning curve but it’s far better than anything I’ve used on Linux and is much more capable than Visual Studio for systems level debugging. It’s one of those “best kept secret” type things that’s easy to get - once you know it exists and that it makes serious development much easier.

                                                                                                              2. 1

                                                                                                                I’ve recently committed Linux apostasy :) partly on account of WSL2, and I’d definitely be interested in that. I haven’t used Windows, except to run putty to log in to a remote server, in a very, very long time (in fact, save for a brief period between 2011 and 2012, when I worked on a cross-platform tool, I haven’t really used it in 15+ years, the last version I used regularly was Windows 2000). I’m slowly discovering what’s changed since then but 15 years is a lot of time…

                                                                                                                1. 1

                                                                                                                  I’m on Windows and routinely browse lobsters by looking at the ‘windows’ tag.

                                                                                                                  That said, I really don’t understand the point of moving to Windows to then run WSL2. Using WSL2 is using a Linux VM, and for the most part, the issues there are not about Windows. If you want a Linux VM, you can run one on any platform…

                                                                                                                  1. 4

                                                                                                                    Well, it’s a damn convenient VM :). I’ve tried it as an experiment for a month or so – I’m now on an “extended trial” of sorts, I guess, for 6 months (backups over the network, all Linux-readable – if I ever want to go back I just have to install Linux again). I use it for two reasons:

                                                                                                                    • It’s a good crutch – I’m sure everything I could do under Linux can just as easily be done under Windows, but 20 years of muscle memory don’t get erased overnight. It lets me do things I haven’t had time to figure out how to do efficiently under Windows in a manner that I’m familiar with. I could do all that with a VM, too, but it’s way better when it works out of the box and boots basically instantaneously.
                                                                                                                    • It gets me all the good parts of Linux – from mutt to a bunch of network/security tools and from ksh to my bag of scripts – without any of the bad parts. Of course, that can also be done with a VM – but like I said, this one works out of the box and boots instantaneously :).

                                                                                                                    I don’t use it too much – in the last two weeks I don’t think it’s seen six hours of use. I fire it up to read some mailing lists (haven’t yet found anything that handles those as well as mutt – even Pegasus Mail isn’t as cool as I remember it…), and I fiddle with one project inside it, mostly because it’s all new to me, and I want to work on it in a familiar environment (learning two things at a time never goes well).

                                                                                                                    I still haven’t made up my mind on keeping it, we’ll see about that in six months, but I’m pretty sure I’d have never even considered the experiment without WSL2.

                                                                                                                  2. 1

                                                                                                                    My monitor + video card setup is aimed at gaming and is almost entirely unusable on Linux period. I’m looking at replacing the graphics card in the future, but until that happens, Windows is the only tolerable OS.

                                                                                                                    1. 1

                                                                                                                      I like to play video games, which almost always means “you need windows”.

                                                                                                                      I used to dual boot Debian but nowadays I tend to just ssh into a cloudish instance thing that I use as a kind of remote workstation. That combined with vscode’s wonderful remote support means outside of work most of my personal-hacking-stuff still takes place through the window of… well, windows.

                                                                                                                      1. 1

                                                                                                                        My work/play machine at home is Windows. Always has been. (Because games, obviously, and I don’t like dual boot.)

                                                                                                                        I detest working on Windows though and WSL hasn’t improved anything for me yet, that’s why I still do most of my work via PuTTY on a linux box if possible. (work means any ops/programming work I do in my free time).

                                                                                                                        Work machine is Linux and I’m actually glad I couldn’t work on Windows so no one can push me :P

                                                                                                                        1. 1

                                                                                                                          I reluctantly use Windows, but I try to avoid it. There’s rarely anything I want to use that is only available on Windows with no viable alternative. Still, there are times I have to use it. I haven’t bothered with WSL2 but I would still read something like this if it existed.

                                                                                                                          1. 1

                                                                                                                          I use Windows at work (though work is Microsoft Research, so that’s not very surprising). I use WSL1 and a FreeBSD VM. I don’t think WSL2 yet has the nice PTY and pipe integration that WSL1 has. With WSL, I can run cmd.exe in a *NIX shell (including something like Konsole in VcXsrv) and it works. More usefully, I can run the Visual Studio Command Prompt batch file to get a Windows build environment from my WSL environment. If a Windows program creates a named pipe in the WSL-owned part of the filesystem namespace, it is a Linux named pipe.

                                                                                                                          1. -1

                                                                                                                            Can anyone recommend some material describing concrete motivations for adding generics to Go? I’m aware of the abstract idea that you need generics in order to build data structures that work with different types, but is there a real-world setting where this is actually better than just copying code and doing s/type1/type2/g? My sense is that projects that use complex data structures almost always customize the data structures in a way that depends on the data type being stored.

                                                                                                                            1. 19

                                                                                                                              I hope it’s not that hard to imagine wanting different data structures than hash maps; maybe your problem fits better into a binary search tree for example.

                                                                                                                              Well, I for one usually don’t feel like implementing my own red-black tree, so I would like to just grab a library. That library will be much nicer to use if I can just make an RBTree<string, MyFoo>. I certainly wouldn’t want to copy/paste an int->string red-black tree into some file(s) and judiciously search/replace until I have a string->MyFoo tree (and then do the same when I need an int->float tree).
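                                                                                                                              For concreteness, here is a sketch using the type-parameters syntax that later shipped in Go 1.18 (a plain, unbalanced BST stands in for the red-black tree to keep it short, and every name is invented):

                                                                                                                                  package main

                                                                                                                                  import (
                                                                                                                                      "cmp"
                                                                                                                                      "fmt"
                                                                                                                                  )

                                                                                                                                  // node and Tree are generic over any ordered key type K and any value type V.
                                                                                                                                  type node[K cmp.Ordered, V any] struct {
                                                                                                                                      key         K
                                                                                                                                      val         V
                                                                                                                                      left, right *node[K, V]
                                                                                                                                  }

                                                                                                                                  type Tree[K cmp.Ordered, V any] struct{ root *node[K, V] }

                                                                                                                                  // Insert walks down to the right leaf position; a real red-black tree would
                                                                                                                                  // rebalance afterwards, but the generic signatures would be identical.
                                                                                                                                  func (t *Tree[K, V]) Insert(k K, v V) {
                                                                                                                                      p := &t.root
                                                                                                                                      for *p != nil {
                                                                                                                                          switch {
                                                                                                                                          case k < (*p).key:
                                                                                                                                              p = &(*p).left
                                                                                                                                          case k > (*p).key:
                                                                                                                                              p = &(*p).right
                                                                                                                                          default:
                                                                                                                                              (*p).val = v // key already present: overwrite
                                                                                                                                              return
                                                                                                                                          }
                                                                                                                                      }
                                                                                                                                      *p = &node[K, V]{key: k, val: v}
                                                                                                                                  }

                                                                                                                                  func (t *Tree[K, V]) Get(k K) (V, bool) {
                                                                                                                                      for n := t.root; n != nil; {
                                                                                                                                          switch {
                                                                                                                                          case k < n.key:
                                                                                                                                              n = n.left
                                                                                                                                          case k > n.key:
                                                                                                                                              n = n.right
                                                                                                                                          default:
                                                                                                                                              return n.val, true
                                                                                                                                          }
                                                                                                                                      }
                                                                                                                                      var zero V
                                                                                                                                      return zero, false
                                                                                                                                  }

                                                                                                                                  func main() {
                                                                                                                                      var t Tree[string, float64] // one definition; no search/replace for new types
                                                                                                                                      t.Insert("pi", 3.14159)
                                                                                                                                      fmt.Println(t.Get("pi")) // 3.14159 true
                                                                                                                                  }

                                                                                                                              The point is the instantiation in main: the same container, compiled from one source, serves a string→float64 tree today and an int→MyFoo tree tomorrow, with the type checker keeping everything honest.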

                                                                                                                              1. 0

                                                                                                                                That makes sense, but I am still looking for some grounding in actual programming practice. Is there a use of a red-black tree that would not warrant customizing it for the storage type? Or one where it would make sense to add a library dependency rather than copying the RB tree code?

                                                                                                                                1. 8

                                                                                                                                  How do you write a library that provides a Red-Black tree that can in principle work with many different client types without generics? This isn’t a rhetorical question, I don’t know Go and I genuinely don’t know how you would implement this kind of library in Go without generics.

                                                                                                                                  1. 6

                                                                                                                                    Go’s sync.Map (concurrent hashmap) is an actual real world example of this, and it uses interface{}, akin to Java’s Object.

                                                                                                                                    1. 25

                                                                                                                                      Right, that’s a great example. Because it uses interface{}, it (see the sketch after this list):

                                                                                                                                      • Requires all keys and values to be heap allocated, leading to worse performance, worse memory usage, and worse memory fragmentation. Requiring two heap-allocated ints to store one value in an int->int concurrent hash map is unacceptable for many uses.
                                                                                                                                      • Is less ergonomic, requiring a cast every time you want to use a value.
                                                                                                                                      • Provides no type safety. (I imagine this one will be the least convincing to Go programmers, since Go generally expects the programmer to just not make mistakes)
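For a concrete sense of the ergonomics cost, this is roughly what using sync.Map looks like today; the value is boxed into an interface{} on the way in and needs a type assertion on the way out:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var m sync.Map
        m.Store("answer", 42) // 42 is boxed into an interface{}

        if v, ok := m.Load("answer"); ok {
            n := v.(int) // assertion required; v.(string) here would panic at runtime
            fmt.Println(n + 1)
        }
    }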
                                                                                                                                      1. 4

This takes me back to C++Builder 3 in the 90s. To use a list, you had to derive your class from a kind of TItem class to be able to store things. Why anyone would want to go back to that in production code boggles my mind.

                                                                                                                                        1. 1

I’m using a sync.Map (for its concurrency support: I have many goroutines writing map entries, and another goroutine periodically ranging over the entire map to dump it to a JSON file).

However, I know the types I write to the map; I have no need for interface{}.

Am I better off with a real typed map + sync.RWMutex/sync.Mutex/etc. used directly (in a custom struct)? Performance-wise.

                                                                                                                                          1. 1

                                                                                                                                            I don’t know, you would have to benchmark or measure CPU or memory usage. The sync.Map documentation suggests that using a regular map + mutexes could be better though: https://golang.org/pkg/sync/#Map

                                                                                                                                            The Map type is specialized. Most code should use a plain Go map instead, with separate locking or coordination, for better type safety and to make it easier to maintain other invariants along with the map content.

                                                                                                                                            The Map type is optimized for two common use cases: (1) when the entry for a given key is only ever written once but read many times, as in caches that only grow, or (2) when multiple goroutines read, write, and overwrite entries for disjoint sets of keys. In these two cases, use of a Map may significantly reduce lock contention compared to a Go map paired with a separate Mutex or RWMutex.

                                                                                                                                            If your usage falls outside of the two use cases which sync.Map is optimized for, it would absolutely be worth looking into replacing your sync.Map with a regular map and a mutex.

                                                                                                                                            I suppose it becomes a question of which has the biggest performance penalty for you, heap allocation + indirection with sync.Map or lock contention with regular map + mutex?

                                                                                                                                            (Also, in most cases, this probably doesn’t matter; make sure you’re not spending a long time improving performance in a part of your code which isn’t actually a performance issue :p)
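For reference, the typed alternative being discussed is a small amount of code: a plain map guarded by a sync.RWMutex inside a custom struct. A minimal sketch, assuming import "sync" (the type and method names here are hypothetical):

    type counters struct {
        mu sync.RWMutex
        m  map[string]int // concrete types: no interface{}, no assertions
    }

    func (c *counters) store(k string, v int) {
        c.mu.Lock()
        defer c.mu.Unlock()
        if c.m == nil {
            c.m = make(map[string]int)
        }
        c.m[k] = v
    }

    func (c *counters) load(k string) (int, bool) {
        c.mu.RLock()
        defer c.mu.RUnlock()
        v, ok := c.m[k]
        return v, ok
    }

Note that the periodic JSON dump would hold the read lock while it ranges over the whole map, which is exactly the kind of contention the sync.Map docs are warning about.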

                                                                                                                                            1. 1

                                                                                                                                              Right - the code “just works(TM)” and it takes around 0.5 seconds to render the JSON file every minute (which I track with metrics just to be safe) so it should be fine to keep as is. This is just a for-fun conversation.

                                                                                                                                              or (2) when multiple goroutines read, write, and overwrite entries for disjoint sets of keys. In these two cases, use of a Map may significantly reduce lock contention compared to a Go map paired with a separate Mutex or RWMutex.

                                                                                                                                              I definitely remember reading this sentence and it made me choose sync.Map because it sounds like my usecase. But like you say if I don’t measure it’ll be hard to tell.

                                                                                                                                      2. -1

                                                                                                                                        I don’t know and I didn’t think you could. I’m asking for an example use of an RB tree where using a library would make sense.

                                                                                                                                        1. 6

Here is a popular Go RB tree implementation, https://github.com/emirpasic/gods/, which uses interface{} for the key and value types. Just search GitHub for uses of it… With generics, users of this library would get greater type safety.

                                                                                                                                          https://github.com/search?q=%22github.com%2Femirpasic%2Fgods%2Ftrees%2Fredblacktree%22&type=Code

                                                                                                                                          1. -2

Okay, except I don’t know how to search GitHub for uses of it, and your search link brings me to a login page :(

                                                                                                                                            1. 5

                                                                                                                                              To short-circuit this:

                                                                                                                                              At a previous job, I worked on a tool that started various services. The services had different dependencies, each of which needed to be started before the service. We wanted to be able to bring them up with as much parallelism as possible, or have a flag to launch them serially.

                                                                                                                                              A simple approach to doing this correctly is modeling the dependencies as an acyclic graph (if it’s a cyclic graph, you’ve got a problem — you can never bring the services up, because they all depend on each other). To launch them in parallel, launch each one that has its dependencies met. To launch them serially, topologically sort the graph into an array/list/whatever and launch them one by one.

                                                                                                                                              A generic graph implementation would be very useful, as would a topological sort that worked on generic graphs. With Go, you can’t have one that’s type-safe.

                                                                                                                                              Another great use case for graphs: visualizing dependency graphs! You can have an open source graph visualization library, build a graph of whatever it is you’re trying to visualize, and pass it to the library and get a nice visualization of the data.

                                                                                                                                              Graph data structures can be quite useful. Supporting generics makes them type-safe, so you catch errors at compile time instead of runtime. Some other examples of the usefulness of graphs:

                                                                                                                                              • Graphs of friends at a social network (I currently work at one, and we use generic graph data structures all over the place — graphs of people to people, graphs connecting people and photos they’re tagged in, etc)
                                                                                                                                              • Network topology graphs
                                                                                                                                              • Graphs of links between documents

                                                                                                                                              etc.

                                                                                                                                              And it’s not just graphs. How do you write a type-safe function that takes in a list of possibly-null items, and returns a new list with the nulls stripped out? How about a function that takes a map and returns the list of its keys? In Golang, the answer is always copy-paste or give up type safety. In languages with generics, you can trivially write these functions yourself if they’re not in the standard library.
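Under the draft design, those last two functions are each a few lines. A sketch, with names of my own choosing (the exact syntax may still change before the proposal is finalized):

    // Compact returns a copy of in with the nil pointers stripped out.
    func Compact[T any](in []*T) []*T {
        out := make([]*T, 0, len(in))
        for _, p := range in {
            if p != nil {
                out = append(out, p)
            }
        }
        return out
    }

    // Keys returns the keys of m, in unspecified order.
    func Keys[K comparable, V any](m map[K]V) []K {
        ks := make([]K, 0, len(m))
        for k := range m {
            ks = append(ks, k)
        }
        return ks
    }

Both are fully type-safe: passing a map[string]int to Keys gives you a []string, checked at compile time.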

                                                                                                                                              1. 1

                                                                                                                                                thanks, this is a good motivating example.

                                                                                                                                              2. 1

                                                                                                                                                Huh. It had not occurred to me that github search would require a login.

                                                                                                                                      3. 11

To turn the question around, why would you want to manually copy/paste code all over the place when the compiler can do it for you? And while I personally think “DRY” can be overdone, not having the same (or very similar) code copy/pasted all over the place seems like a big practical win.

                                                                                                                                        As far as customizing specific data structure and type combinations, most languages with generics have a way to do that, and I’d bet the Go designers thought of it.

                                                                                                                                        1. 2

Copy/paste has its own problems, but it lets you avoid a ton of complexity in the toolchain.

Toolchain development is all about tradeoffs. For instance, I use TypeScript; the reference implementation is featureful but slow to boot, so it keeps a background process alive to cache the heavy lifting, and that process accumulates state and introduces subtle confusions (e.g. type errors that don’t exist) until it’s restarted.

                                                                                                                                          For some problem spaces, the problems introduced by copy/paste pale in comparison to the problems introduced by slow, stateful compilers.

                                                                                                                                          1. 7

                                                                                                                                            Copy/paste vs generics is unrelated to compiler bugginess.

                                                                                                                                            If you carefully pick TypeScript as the comparison point, you can make the case that a buggy toolchain is bad (not that most users care, they just restart the compile process when it starts to go bad).

                                                                                                                                            But if you were to pick say ReasonML for comparison, you could say that it’s possible to have a solid generics implementation (much less copy-pasting) and a super-fast, accurate compiler.

                                                                                                                                            I.e. you can have both buggy and non-buggy compilers supporting generics. Hence, unrelated.

                                                                                                                                            1. 2

                                                                                                                                              ReasonML is great!

                                                                                                                                              That said, while the relationship is indirect, it’s there. Adding complexity is never free. It didn’t cost ReasonML speed or reliability, but it costs maintainers time and makes every other feature more difficult to add in an orthogonal way.

                                                                                                                                              1. 2

                                                                                                                                                I think these comparisons are a bit unfair: isn’t Typescript self hosted, whereas ReasonML is written in OCaml? It seems like Typescript would have a very hard time competing.

                                                                                                                                                1. 3

                                                                                                                                                  Being able to use lots of existing OCaml bits is a huge advantage.

                                                                                                                                                  Typescript has been able to compete due to the sheer number of contributors - MS pays quite a large team to work on it (and related stuff like the excellent Language Server Protocol, VScode integration).

                                                                                                                                                  However, large teams tend to produce more complex software (IMO due to the added communications overhead - it becomes easier to add a new thing than find out what existing thing solves the same problem).

                                                                                                                                                  1. 1

I should clarify: my comment was more about comparing the performance of the two languages.

OCaml is a well-optimized language that targets native machine code, so tooling built in OCaml should be more performant than tooling built in TypeScript. As a result, it’s hard to compare the complexity of either tool by its performance. It’s apples and oranges.

                                                                                                                                                  2. 2

                                                                                                                                                    isn’t Typescript self hosted, whereas ReasonML is written in OCaml? It seems like Typescript would have a very hard time competing.

That’s a strange argument. If it were very hard for them to compete, why would they not use OCaml as well, especially since its contemporary alternative, Flow, was written in OCaml? Or why not make TypeScript as good a language for writing TypeScript in as OCaml is?

                                                                                                                                                    1. 1

My comment was more about performance, but I wasn’t very clear: it’s hard for tooling written in TypeScript, which is compiled to JavaScript and then interpreted/JITed, to be as fast as tooling built in a language that compiles to optimized native code.

Given that TypeScript is self-hosted, it has the advantage that community involvement is more seamless, and I don’t want to downplay the power that brings.

                                                                                                                                                  3. 2

In the scheme of things, is it more important to have a super-simple compiler codebase, or is it more important to put more power and expressiveness in the hands of users? Note that every mainstream language that started without generics has since added them.

                                                                                                                                                    1. 1

                                                                                                                                                      IMO, there’s such a thing as a right time to do it.

                                                                                                                                                      In the early years it’s more important to keep the simplicity - there aren’t that many users and you’re still figuring out what you want the language to be (not every feature is compatible with every approach to generics).

Once you’re ready to add generics, you need to answer questions like: do you want monomorphisation, or dictionary passing via lookup tables? Is boxing an acceptable overhead for the improved debugging ergonomics?

                                                                                                                                                      1. 1

                                                                                                                                                        It seems like Go has been going through exactly the process you’re describing.

                                                                                                                                                  4. 0

                                                                                                                                                    But compilers that support generics are more likely to be buggy. That’s a relation.

                                                                                                                                                    1. 2

                                                                                                                                                      Any source for this rather surprising assertion?

                                                                                                                                                      1. 0

Generics are a feature that requires code to implement; code can contain bugs.

                                                                                                                                                        1. 1

                                                                                                                                                          But a self-hosting compiler with generics is likely to be less verbose (because generics) than one without, so it should be less buggy.

                                                                                                                                                          1. 1

I guess you can’t prove it either way, but IME the complexity of algorithms is more likely to cause bugs than verbosity is.

                                                                                                                                                  5. 5

                                                                                                                                                    I think Typescript is a straw man. Does this Go implementation of generics slow down the compiler a noticeable amount? There’s nothing inherent to generics that would make compiling them slow.

On the other hand, copy/pasted code is an ever-increasing burden on both developer time and compile time.

                                                                                                                                                  6. -2

                                                                                                                                                    You are imagining a code base where the same complex data structure is instantiated with two different types. Is that realistic?

                                                                                                                                                    1. 6

                                                                                                                                                      You are imagining a code base where the same complex data structure is instantiated with two different types. Is that realistic?

                                                                                                                                                      Realistic enough that the Linux kernel developers went through the hassle of developing generic associative arrays, circular buffers, and other generic data structures using void*.

And so did GNOME with GLib, which provides generic lists, hash tables, and trees, along with several other structures, also using void*.

                                                                                                                                                      And the standard libraries of most modern languages include reusable and generic sequence and associative data types, and some times significantly more than that.

For most data structures, though, focusing on a single code base gives too narrow a view. Generics allow libraries of data structures to be created, so even if a single code base only uses one R* tree (or whatever), that R* tree library can be used as-is by any number of projects.

                                                                                                                                                  7. 8

                                                                                                                                                    The Abstract and Background sections of the draft design doc touch on the motivations. Additionally, each section describing a dimension of the design usually mentions, at least briefly, the motivation for that feature.

                                                                                                                                                    1. 8

Here is an example that I’ve wanted forever and can finally write: higher-order combinators that leverage first-class functions!

Generic map, in Go
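For those who don’t want to click through, the combinator is roughly this shape, sketched in the draft syntax (not necessarily identical to the linked code):

    // Map applies f to each element of xs and returns the results.
    func Map[T, U any](xs []T, f func(T) U) []U {
        out := make([]U, 0, len(xs))
        for _, x := range xs {
            out = append(out, f(x))
        }
        return out
    }

    // At a call site, type inference fills in T and U:
    //   doubled := Map([]int{1, 2, 3}, func(i int) int { return i * 2 })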

                                                                                                                                                      1. 1

                                                                                                                                                        That’s the type of thing I have seen as a justification, but I don’t get why that’s so important. Can’t you just use a for loop?

                                                                                                                                                        1. 22

                                                                                                                                                          “Can’t you just …” goes forever. “Can’t you just write your for loop with labels and jumps in assembly?”^^

                                                                                                                                                          For me, it’s all about abstraction. Having low level combinators, like this, that I can compose to build higher level abstractions in a generic way is wonderful.

                                                                                                                                                          ^^: See also whataboutism.

                                                                                                                                                          1. 3

I’m not sure that composing from higher-level abstractions is always such a good idea. I like both Go (hobby projects) and Rust (work!), but I still feel that most of the time I prefer this level of abstraction:

                                                                                                                                                            type Server struct {
                                                                                                                                                            ...
                                                                                                                                                                Handler Handler // handler to invoke, http.DefaultServeMux if nil
                                                                                                                                                            ...
                                                                                                                                                            }
                                                                                                                                                            type Handler interface {
                                                                                                                                                                ServeHTTP(ResponseWriter, *Request)
                                                                                                                                                            }
                                                                                                                                                            

                                                                                                                                                            from this:

                                                                                                                                                             pub fn serve<S, B>(self, new_service: S) -> Server<I, S, E>
                                                                                                                                                                where
                                                                                                                                                                    I: Accept,
                                                                                                                                                                    I::Error: Into<Box<dyn StdError + Send + Sync>>,
                                                                                                                                                                    I::Conn: AsyncRead + AsyncWrite + Unpin + Send + 'static,
                                                                                                                                                                    S: MakeServiceRef<I::Conn, Body, ResBody = B>,
                                                                                                                                                                    S::Error: Into<Box<dyn StdError + Send + Sync>>,
                                                                                                                                                                    B: HttpBody + 'static,
                                                                                                                                                                    B::Error: Into<Box<dyn StdError + Send + Sync>>,
                                                                                                                                                                    E: NewSvcExec<I::Conn, S::Future, S::Service, E, NoopWatcher>,
                                                                                                                                                                    E: H2Exec<<S::Service as HttpService<Body>>::Future, B>,
                                                                                                                                                                {
                                                                                                                                                                ...
                                                                                                                                                            

Don’t get me wrong, I like type-level guarantees and I can see the flexibility here, but my experience with C++, Rust, and Haskell is that generic programming often ends up complicating things to a degree that I personally don’t like.

                                                                                                                                                            1. 1

I think this is going to be a balance that the community has to find. I don’t regularly program in Rust, but I’d be quite surprised if it weren’t possible to get something close to the Go http API in it; the example you pasted seems complicated for the sake of being complicated. In theory, the Go community has been drilled into thinking in terms of the smallest abstraction that’ll work, which maybe makes it possible to generally avoid APIs that don’t actually need to be generic?

                                                                                                                                                            2. 3

                                                                                                                                                              “Can’t you just” does not go forever. It is a simpler way to say that the alternative is not significantly harder than what’s proposed. Is there some type of task that would be doable using a generic map but unreasonably hard using for loops?

I feel like Go was designed from the ground up to be written in an imperative style, and composing functions like map is more of a functional style of coding. If I understand correctly, without generics you would nest for loops rather than compose map functions, which is no more difficult to understand or write.

                                                                                                                                                              I don’t follow the connection to whataboutism.

                                                                                                                                                              1. 2

                                                                                                                                                                I think it’s fine for your style of writing code to be to use loops and conditionals instead of map and filter. I think it’s a fine way to code that makes more sense in an imperative language. Straight for loops and while loops with if statements inside them is just better, more easily understandable code in an imperative language, in my opinion, than .map(...).filter(...).map(...) etc.

                                                                                                                                                                1. -1

                                                                                                                                                                  Incidentally there is a repo wherein Rob Pike expresses his attitude towards this style of coding:

                                                                                                                                                                  https://github.com/robpike/filter/

                                                                                                                                                                  I wanted to see how hard it was to implement this sort of thing in Go, with as nice an API as I could manage. It wasn’t hard.

                                                                                                                                                                  Having written it a couple of years ago, I haven’t had occasion to use it once. Instead, I just use “for” loops.

                                                                                                                                                                  You shouldn’t use it either.

                                                                                                                                                                  1. 2

                                                                                                                                                                    I mean… that’s like … one man’s opinion… man. See also.

Generics are going to create a divide in the Go community, and it’s going to be popcorn-worthy. There’s no point in adding generics to the language if this filter thing “shouldn’t be used” and the community rejects the abstractions that generics provide.

                                                                                                                                                                    This divide is easily already seen in the community as it relates to test helpers. On the one hand, there’s a set of developers that say “stdlib testing is more than enough.” On the other hand, there are people who want the full testing facilities of junit, with matchers, lots of assert style helpers, etc. Who is right? They all are, because those things work for their respective teams and projects.

                                                                                                                                                                    This general dogmatic approach to language idioms is why I call it “idiotmatic” Go.

                                                                                                                                                                    1. -1

                                                                                                                                                                      I suppose if Ken and Rob wanted generics they would’ve put them in the original language, and there wouldn’t be this controversy. Time to go back to learning Erlang which seems old and dusty enough to not have big language changes and drama.

                                                                                                                                                                2. 16

                                                                                                                                                                  You can’t pass a for loop to anything, you can only write it where you need it. Sure, toy examples look like toy examples, but the fact remains that Go has first-class functions, which should be a nice thing, but it doesn’t actually have a type system rich enough to express 90% of the things that make first-class functions worth having.

                                                                                                                                                                  1. -1

                                                                                                                                                                    You can’t pass a for loop to anything, you can only write it where you need it.

Right, so the example code could be done with a for loop, no problem. Is there a more motivating example?

                                                                                                                                                                    it doesn’t actually have a type system rich enough to express 90% of the things that make first-class functions worth having.

                                                                                                                                                                    how do you mean?

                                                                                                                                                                  2. 3

                                                                                                                                                                    Consider composing multiple transformations and filters together. With multiple for loops you have to iterate over the array each time, while by composing maps you only need to iterate once.

                                                                                                                                                                    1. 3

                                                                                                                                                                      Just compose the operations inside the loop.

for _, x := range y {
    ...f(g(x))...
}
                                                                                                                                                                      
                                                                                                                                                                      1. 2

That works in some cases, but it’s pretty easy to find a counterexample, too.

                                                                                                                                                                3. 7

In terms of collections, the truth is that most of the time a map or slice is a good option. Here are my top two favorite use cases for generics in Go:

1. Result<T> and functions that compose over them.
2. Type-safe versions of sync.Map, sync.Pool, and atomic.Value, and even a Rust-like Mutex (sketched below).
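On the last point, here is a rough sketch of what a Rust-style “mutex owns the value” wrapper might look like with type parameters, assuming import "sync" (this is a hypothetical API, not an existing package):

    type Mutex[T any] struct {
        mu  sync.Mutex
        val T
    }

    // With runs f while holding the lock; the value is only ever
    // reachable through the pointer handed to f.
    func (m *Mutex[T]) With(f func(*T)) {
        m.mu.Lock()
        defer m.mu.Unlock()
        f(&m.val)
    }

Go can’t stop f from leaking that pointer the way Rust’s borrow checker can, but it does make “forgot to take the lock” much harder to write.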
                                                                                                                                                                  1. 5

Oh man. I hadn’t even considered a better way to do error handling, e.g. a Result type. People are gonna get so mad.

                                                                                                                                                                    1. 5

                                                                                                                                                                      Generics isn’t enough to do what people want to do with error handling 99.99% of the time, which is to return early. For that, you either need a macro, such as the aborted try proposal, or syntactic sugar for chaining such “functions that compose over them” (like Haskell’s do notation).

                                                                                                                                                                      Otherwise you end up with callback hell à la JavaScript, and I think nobody wants that in Go.

                                                                                                                                                                      1. 4

I was more thinking of something where the if err pattern is enforced via the type system. You’re still not getting there 100%, but you could get reasonably close with a generic Result type that panics when the wrong thing is accessed, forcing you to always check or risk a panic.

r := thing()
if r.IsErr() {
    handleError(r.Err())
} else {
    handleSuccess(r.Val())
}
                                                                                                                                                                        

And of course it’s easy to question why this is interesting until you chain things together and get a full-on, type-safe Result monad.

                                                                                                                                                                        r := thing().andThen(func(i int) { ... }).andThen(func(i int) { ... })
                                                                                                                                                                        if r.IsErr() {
                                                                                                                                                                           handleErrForWholeComputation(r.Err())
                                                                                                                                                                        } else {
                                                                                                                                                                           handleSuccessForWholeComputation(r.Val())
                                                                                                                                                                        }
                                                                                                                                                                        

                                                                                                                                                                        The alternative can be seen in things like this where you skirt around the fact that you can’t generically accept a value in one of those called functions. This is also why I said people are going to get so mad. These things are confusing to people who haven’t dealt with them before, and will make Go much more expressive, but less easy to grok without effort.
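To ground this, here is a rough sketch of such a Result type under the draft design (all names are hypothetical, and the chaining method is spelled AndThen below). One caveat: Go methods can’t introduce new type parameters, so a method like this can only chain T to T; changing the value type mid-chain would need a top-level function instead.

    type Result[T any] struct {
        val T
        err error
    }

    func Ok[T any](v T) Result[T]         { return Result[T]{val: v} }
    func Fail[T any](err error) Result[T] { return Result[T]{err: err} }

    func (r Result[T]) IsErr() bool { return r.err != nil }
    func (r Result[T]) Err() error  { return r.err }

    // Val panics on an error Result, which is what forces callers
    // to check IsErr before using the value.
    func (r Result[T]) Val() T {
        if r.err != nil {
            panic("Val called on error Result")
        }
        return r.val
    }

    // AndThen short-circuits once an error has occurred.
    func (r Result[T]) AndThen(f func(T) Result[T]) Result[T] {
        if r.err != nil {
            return r
        }
        return f(r.val)
    }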

                                                                                                                                                                  2. 5

                                                                                                                                                                    but is there a real-world setting where this is actually better than just copying code and doing s/type1/type2/g

                                                                                                                                                                    All of them. Copying code manually is one of the worst things you can do in software development. If it weren’t, why even bother writing functions, ever?

                                                                                                                                                                    My sense is that projects that use complex data structures almost always customize the data structures in a way that depends on the data type being stored.

                                                                                                                                                                    The fact that libraries exist that don’t customise in such a way in languages with generics would disprove that notion.

                                                                                                                                                                    1. 6

                                                                                                                                                                      one of the worst things you can do in software development

For me that’s “making things unreadable for whoever comes after you”. And sometimes copying a bit of code is the optimal way to avoid that.

                                                                                                                                                                      1. 0

                                                                                                                                                                        but is there a real-world setting where this is actually better than just copying code and doing s/type1/type2/g

                                                                                                                                                                        All of them. Copying code manually is one of the worst things you can do in software development. If it weren’t, why even bother writing functions, ever?

                                                                                                                                                                        I disagree with your implication that the use of functions means code should never be copied. For example if you want to use strlcpy() in a portable C program, it makes more sense to put a copy in your source tree rather than relying on an external library. An extra dependency would add more headache than just copying the code.

                                                                                                                                                                        My sense is that projects that use complex data structures almost always customize the data structures in a way that depends on the data type being stored.

                                                                                                                                                                        The fact that libraries exist that don’t customise in such a way in languages with generics would disprove that notion.

                                                                                                                                                                        That’s why I said “almost always.” And remember that the existence of a library doesn’t mean it is used with any frequency.

                                                                                                                                                                      2. 3

Suppose you have a structure parametrizable by types T1 and T2. You’re writing it in Go, so you assume it’s OK if T1=string, T2=int. Also, in some places you were using int for a purpose unrelated to T2 (i.e. if T2=foo, then there are still ints left in the source). Another programmer wants to copy-paste the code and change some types. How do they do it?

                                                                                                                                                                        1. 2

                                                                                                                                                                          I think “this would make copy-pasting code harder” is not so compelling an argument. One of the major points of introducing generics is that it would eliminate much of the present need for copy/paste in Go.

                                                                                                                                                                          1. 2

                                                                                                                                                                            Yes it would be harder than a search-and-replace, but that is still abstract and unrelated to any real-world use case.

                                                                                                                                                                            Yes, I’m just counterpointing the parent commenter’s argument. I know the value of generic structures.

                                                                                                                                                                          2. -1

                                                                                                                                                                            Yes it would be harder than a search-and-replace, but that is still abstract and unrelated to any real-world use case.

                                                                                                                                                                        1. 13

                                                                                                                                                                          Is the world even a little bit better because of startups like Instagram, Uber, and Peloton?

                                                                                                                                                                          I don’t know about Instagram and Peloton, but Uber has definitely made life better for me and some of my friends. As a blind friend of mine put it:

                                                                                                                                                                          I cannot drive. I take Uber to and from work every day using an Uber pass. If my alternative were public transit, a bus to a train to another bus or train depending, it would take me 1.5 hours per trip, for a total of 3 hours a day. I would not work in the corporate world if this were the case. […] For blind people, these often-derided services are the difference between a full life where we can participate, and being marginalized outcasts who are constantly late, smelling like the bus, and totally inconvenienced when compared with the guy who can just hop in his car.

                                                                                                                                                                          Sure, the taxis that Uber supplanted should have filled this role, but in practice, they didn’t. So score one for startups.

                                                                                                                                                                          1. 23

                                                                                                                                                                            A big part of the reason the public transit system in most of the US is such a disaster is exactly the same forces that led to the rise of Uber. It doesn’t have to be that way.

                                                                                                                                                                            1. 5

                                                                                                                                                                              Maybe it doesn’t have to be that way. But right now, it is that way. So in the world as it actually is, Uber has done some good. That’s why it bothered me that the OP seemed dismissive about it.

                                                                                                                                                                              1. 10

                                                                                                                                                                                The world is not the US. In the rest of the world Uber is just a way to escape regulations and not get taxed. They don’t provide a better service than taxis, they are just slightly cheaper because they don’t play by the rules.

                                                                                                                                                                                1. 5

                                                                                                                                                                                  I live in the rest of the world, and they do definitely provide a better service than taxis.

                                                                                                                                                                                  Being unable to find the taxi because it arrived around the corner is no longer an issue.

                                                                                                                                                                                  Being unable to communicate with the driver when you don’t share a common language is no longer an issue.

                                                                                                                                                                                  Being driven to the wrong destination is no longer an issue.

                                                                                                                                                                                  The potential for the driver to defraud the customer by “taking the scenic route” is no longer an issue.

                                                                                                                                                                                  1. 4

                                                                                                                                                                                    Well, but these are not improvements brought by Uber, but by using an app to plan the ride, and taxi companies do this too. Clearly, in many places this kind of digitalization is lagging behind. I’m not saying that the rest of the world has better taxi companies and public transportation than the US; the problem is believing that these things are intrinsic to the product rather than dependent on the context, a context that in other places might be even worse than in the US.

                                                                                                                                                                                    1. 6

                                                                                                                                                                                      These are not improvements brought by Uber, but by using an app to plan the ride.

                                                                                                                                                                                      Uber pioneered this. I think separating the good things that Uber did — creating an app-based transportation service — from Uber-the-company and then leaving only their failures, isn’t particularly useful. Anything can be torn down that way: of course if you strip out the good parts, only the bad parts remain.

                                                                                                                                                                                      1. 4

                                                                                                                                                                                        Uber pioneered this

                                                                                                                                                                                        In the US, maybe. In Europe we have different apps that work with existing taxi networks and regulations, and they existed well before Uber spread to Europe, including in places where Uber is illegal. Hailo, myTaxi, and Clever Taxi were all founded when Uber wasn’t even available to the public, almost two years before it came to Europe. Just to name a few that were successful.

                                                                                                                                                                                        I mean, once smartphones became available to a larger population, these kinds of apps were quite obvious, much like apps for using public transportation without paper tickets. Uber became ginormous not because they were offering anything that dozens of other companies weren’t offering, but like every unicorn they grew exponentially because they were better at attracting investors and avoiding regulations.

                                                                                                                                                                                        1. 3

                                                                                                                                                                                          Uber was founded years before Clever Taxi or Hailo. And myTaxi was nothing like Uber — they didn’t even process payments — until they pivoted in 2012, long after Uber had proven the model.

                                                                                                                                                                                          Uber became ginormous not because they were offering anything that dozens of other companies weren’t offering, but like every unicorn they grew exponentially because they were better at attracting investors and avoiding regulations.

                                                                                                                                                                                          Uber succeeded because they offered a better product. Hailo and Uber eventually went head to head in NYC, and Hailo failed because they offered a significantly worse experience and couldn’t get enough taxi drivers to sign up for it to be worth using.

                                                                                                                                                                                          https://fortune.com/2014/10/14/hailo-taxi-app-funding-failure/

                                                                                                                                                                                      2. 2

                                                                                                                                                                                        Sure. I’m not defending or praising Uber specifically, but I doubt ride planning apps would have materialised without competition from Uber and similar.

                                                                                                                                                                                        In all the places I travel to in the world currently, the choices are either to call a local taxi company and suffer all the issues I listed, or just use the local Uber-like and suffer none of them.

                                                                                                                                                                                        1. 1

                                                                                                                                                                                          This is flatly false. I have friends at taxi companies that were approached by development shops to make an app years before Uber was around. The main difference is Uber had global ambitions and wanted to own the drivers. Claiming that there would be no apps like this without Uber is like saying we needed DoorDash for food delivery. It’s simply ahistorical.

                                                                                                                                                                                          1. 2

                                                                                                                                                                                            How is it false? Ok, there was an approach. What was the outcome? And even if that one taxi company decided to invest in that technology, how can you extrapolate that to the rest of the global market? Most taxi companies still don’t have anything like this!

                                                                                                                                                                                            And your analogy is quite bad. It’s not like saying we needed DoorDash to have food delivery. That would be analogous to me saying we needed Uber to have taxis, which I didn’t say.

                                                                                                                                                                                            1. 1

                                                                                                                                                                                              Let me pull back a bit. I don’t think a taxi-hailing app is Uber’s innovation; its innovation was striving to be a global taxi app, something no previous company even seems to have aspired to. You said ride-hailing apps wouldn’t have emerged without Uber, and that’s flatly false: I saw pitches for several, and used them, years before Uber appeared.

                                                                                                                                                                                              1. 1

                                                                                                                                                                                                Is there a meaningful difference to me as a consumer between ride hailing apps existing only in a few locations that I will never visit and ride hailing apps not existing at all?

                                                                                                                                                                                                That’s the point I am trying to make.

                                                                                                                                                                                                1. 1

                                                                                                                                                                                                  The only meaningful difference is if you live in those places. There are regional taxi apps that pay their drivers better and are made by local programmers. If you want to say that Uber is one of the first to achieve global reach, we have no disagreement, but that’s not what I took you to mean.

                                                                                                                                                                              2. 15

                                                                                                                                                                                I’m happy that your friend has found a greater quality of life by using Uber. I’d just like to mention another model.

                                                                                                                                                                                In Sweden (at least in Stockholms län), a disabled or blind person can apply for Färdtjänst - transport help. Basically,

                                                                                                                                                                                • access to public transport is free
                                                                                                                                                                                • one can get a cab ride anywhere in the greater Stockholm area at cost. Usually the cab ride is shared with other people.

                                                                                                                                                                                All of this is of course financed by taxes ;)

                                                                                                                                                                                Caveats:

                                                                                                                                                                                • just like with Uber, peak traffic times make it hard to get a fast ride
                                                                                                                                                                                • the authorities negotiate with cab companies and pay a fixed price, so it’s not always in a cab driver’s best interest to accept the ride. My wife has told me of surly or impatient drivers.
                                                                                                                                                                                1. 2

                                                                                                                                                                                  Replying to my own comment, as I cannot edit the previous one any longer.

                                                                                                                                                                                  I have been informed that this kind of system also exists in the USA, in Pennsylvania: https://myaccessride.com/

                                                                                                                                                                                2. 9

                                                                                                                                                                                  Here in Berlin there’s a service called Berlkönig (a wordplay combining Berlin with Erlkönig) which is run by the main public transport organization, and it runs shared rides which tend to be far cheaper than Uber. Uber will pick you up a few minutes faster but may cost at least twice as much. I tend to take Berlkönig once per month when I end up at a friend’s house late on a weekday night when the public transit is running less frequently.

                                                                                                                                                                                  When I lived in NYC, I was frequently struck by how it felt like an island of blind-friendliness in a country where public transit was so actively destroyed by the auto lobbies, etc. When I return to NYC, I am struck by a feeling that the public transit has gotten worse (I experienced it just before Hurricane Sandy and in the nice years afterward, when my subway line had had much of its equipment replaced). But I’m not sure if that’s true or just my reaction now that I’m used to a different system. Uber cost about 10–20x the public transit rate to get me home, and often took twice as long, but it was relaxing. I worked at a place that would reimburse Uber trips home, so I would take it when I felt exhausted, maybe once every two weeks.

                                                                                                                                                                                  Other systems exist and work well.

                                                                                                                                                                                  1. 1

                                                                                                                                                                                    Uber also provides shared rides in the US that are dramatically cheaper. If they don’t offer them in Germany… Maybe local regulation prohibits that? Not sure. The shared rides are also more profitable for Uber: they only have to pay one driver, for many more people.

                                                                                                                                                                                1. 12

                                                                                                                                                                                  Let me play some alternative history here…

                                                                                                                                                                                  Take something like a shopping cart. If you tried to do this before cookies, when people put a product into a shopping cart on the first page they visited, as soon as they click on anything else, the browser would think this was a completely new visit

                                                                                                                                                                                  You could create a server-side session on the first POST request that adds an item to the shopping cart, and append its generated id to every link. A user could even bookmark any of those links to return to their session… if only browsers hadn’t come to rely on cookies for remembering state, and bookmarking UI hadn’t been abandoned.
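
                                                                                                                                                                                  A minimal Go sketch of that cookie-less flow, assuming an in-memory store; the route and parameter names are illustrative, and a real server would need locking around the shared map:

                                                                                                                                                                                  ```go
                                                                                                                                                                                  package main

                                                                                                                                                                                  import (
                                                                                                                                                                                      "crypto/rand"
                                                                                                                                                                                      "encoding/hex"
                                                                                                                                                                                      "fmt"
                                                                                                                                                                                      "log"
                                                                                                                                                                                      "net/http"
                                                                                                                                                                                  )

                                                                                                                                                                                  // In-memory session store: id -> cart contents.
                                                                                                                                                                                  // (Sketch only: concurrent access needs a mutex.)
                                                                                                                                                                                  var carts = map[string][]string{}

                                                                                                                                                                                  func newSessionID() string {
                                                                                                                                                                                      b := make([]byte, 16)
                                                                                                                                                                                      if _, err := rand.Read(b); err != nil {
                                                                                                                                                                                          log.Fatal(err)
                                                                                                                                                                                      }
                                                                                                                                                                                      return hex.EncodeToString(b)
                                                                                                                                                                                  }

                                                                                                                                                                                  // The first POST that adds an item creates the session; every
                                                                                                                                                                                  // link rendered afterwards (and any bookmark of it) carries
                                                                                                                                                                                  // ?sid=..., so the server restores the cart without a cookie.
                                                                                                                                                                                  func addToCart(w http.ResponseWriter, r *http.Request) {
                                                                                                                                                                                      sid := r.URL.Query().Get("sid")
                                                                                                                                                                                      if _, ok := carts[sid]; !ok {
                                                                                                                                                                                          sid = newSessionID()
                                                                                                                                                                                      }
                                                                                                                                                                                      carts[sid] = append(carts[sid], r.FormValue("item"))
                                                                                                                                                                                      fmt.Fprintf(w, `<a href="/add?sid=%s&item=next">add another</a>`, sid)
                                                                                                                                                                                  }

                                                                                                                                                                                  func main() {
                                                                                                                                                                                      http.HandleFunc("/add", addToCart)
                                                                                                                                                                                      log.Fatal(http.ListenAndServe(":8080", nil))
                                                                                                                                                                                  }
                                                                                                                                                                                  ```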

                                                                                                                                                                                  Take subscriptions. Without cookies, we have to ask people to manually log in every single time they click on a link. Not just the first time, but between every page view.

                                                                                                                                                                                  HTTP has an extensible authentication mechanism built in. If browsers didn’t rely on sites abusing cookies for user sessions, we could have a universal, standard login/logout button that works on every site. Instead, HTTP authentication UI was abandoned and never progressed beyond incomprehensible modal windows.
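
                                                                                                                                                                                  For reference, a Go sketch of that built-in mechanism, with a hard-coded and purely illustrative credential check: the WWW-Authenticate challenge makes the browser, not the site, present the login UI, and the credentials then accompany every later request automatically:

                                                                                                                                                                                  ```go
                                                                                                                                                                                  package main

                                                                                                                                                                                  import (
                                                                                                                                                                                      "fmt"
                                                                                                                                                                                      "log"
                                                                                                                                                                                      "net/http"
                                                                                                                                                                                  )

                                                                                                                                                                                  func private(w http.ResponseWriter, r *http.Request) {
                                                                                                                                                                                      // r.BasicAuth parses the Authorization header the browser
                                                                                                                                                                                      // sends after the user fills in the browser-native prompt.
                                                                                                                                                                                      user, pass, ok := r.BasicAuth()
                                                                                                                                                                                      if !ok || user != "alice" || pass != "secret" { // illustrative only
                                                                                                                                                                                          w.Header().Set("WWW-Authenticate", `Basic realm="example"`)
                                                                                                                                                                                          http.Error(w, "unauthorized", http.StatusUnauthorized)
                                                                                                                                                                                          return
                                                                                                                                                                                      }
                                                                                                                                                                                      fmt.Fprintf(w, "hello, %s\n", user)
                                                                                                                                                                                  }

                                                                                                                                                                                  func main() {
                                                                                                                                                                                      http.HandleFunc("/", private)
                                                                                                                                                                                      log.Fatal(http.ListenAndServe(":8080", nil))
                                                                                                                                                                                  }
                                                                                                                                                                                  ```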

                                                                                                                                                                                  Etc., etc. What I’m saying is, REST (and HTTP/1.1 as its reference implementation) was designed with all of these in mind (the often-misunderstood HATEOAS is about exactly that). But, for better or for worse, cookies happened and the technology went another way. That was a historical accident, not a technological limitation.

                                                                                                                                                                                  1. 7

                                                                                                                                                                                    I think this has many more downsides than first-party cookies. URLs are designed to be shareable; encoding session state into the URL is asking for users to get their accounts taken over through social engineering. “Send me a link” sounds much less malicious than “open your settings page and go to the Advanced section and find your cookies and copy the cookie for this website and send it to me.” It would probably even happen by accident on sites like this one (and Reddit, HN, Twitter, Facebook, etc).

                                                                                                                                                                                    Not to mention how simple it would make various web attacks, e.g. abusing the Referer header to steal credentials. All you need to do is be able to modify an href attribute and you have an account takeover — that’s not much defense-in-depth.

                                                                                                                                                                                    IMO first-party cookies, enforced over SSL with the Secure directive, and made inaccessible to JavaScript via the HttpOnly directive, are actually a fairly good mechanism for handling session state. It’s third-party cookies that create tracking issues.

                                                                                                                                                                                    (Personally I wish cookies were inaccessible to JS by default, and allowing JS cookie access was opt-in, but alas. I also wish sites served over SSL automatically defaulted to Secure cookies. Small but probably meaningful nits.)
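
                                                                                                                                                                                    A Go sketch of the cookie configuration being recommended here, with a placeholder value: Secure keeps the cookie off plaintext HTTP and HttpOnly keeps it away from JavaScript. (SameSite is an additional hardening attribute not discussed above.)

                                                                                                                                                                                    ```go
                                                                                                                                                                                    package main

                                                                                                                                                                                    import (
                                                                                                                                                                                        "log"
                                                                                                                                                                                        "net/http"
                                                                                                                                                                                    )

                                                                                                                                                                                    func login(w http.ResponseWriter, r *http.Request) {
                                                                                                                                                                                        http.SetCookie(w, &http.Cookie{
                                                                                                                                                                                            Name:     "session",
                                                                                                                                                                                            Value:    "opaque-random-id", // illustrative; generate randomly
                                                                                                                                                                                            Path:     "/",
                                                                                                                                                                                            Secure:   true, // only ever sent over TLS
                                                                                                                                                                                            HttpOnly: true, // invisible to document.cookie
                                                                                                                                                                                            SameSite: http.SameSiteLaxMode, // extra hardening
                                                                                                                                                                                        })
                                                                                                                                                                                        w.Write([]byte("logged in\n"))
                                                                                                                                                                                    }

                                                                                                                                                                                    func main() {
                                                                                                                                                                                        http.HandleFunc("/login", login)
                                                                                                                                                                                        log.Fatal(http.ListenAndServe(":8080", nil))
                                                                                                                                                                                    }
                                                                                                                                                                                    ```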

                                                                                                                                                                                    1. 1

                                                                                                                                                                                      State in the URL already happens out in the world. One way around your issue would be, when you load the session on the server, to check whether the IP address has changed, how long since it was last seen, and so on. If something changed, chances are it’s not the same user, and you can re-prompt for auth to verify them again.

                                                                                                                                                                                      I don’t disagree that first-party, TLS-only, HttpOnly cookies are also an OK way to handle this.
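
                                                                                                                                                                                      Sketching that heuristic in Go with illustrative names and in-memory state: re-prompt for authentication whenever the client IP changes or the session has gone idle. As the reply below notes, these checks are weak heuristics rather than a real security boundary.

                                                                                                                                                                                      ```go
                                                                                                                                                                                      package main

                                                                                                                                                                                      import (
                                                                                                                                                                                          "net"
                                                                                                                                                                                          "net/http"
                                                                                                                                                                                          "time"
                                                                                                                                                                                      )

                                                                                                                                                                                      type session struct {
                                                                                                                                                                                          ip       string
                                                                                                                                                                                          lastSeen time.Time
                                                                                                                                                                                      }

                                                                                                                                                                                      var sessions = map[string]*session{} // sid -> state

                                                                                                                                                                                      // stillValid applies the heuristic: same source IP and recent
                                                                                                                                                                                      // activity. Anything else forces a fresh login. (Shared NATs
                                                                                                                                                                                      // and mobile networks make both checks unreliable in practice.)
                                                                                                                                                                                      func stillValid(r *http.Request, sid string) bool {
                                                                                                                                                                                          s, ok := sessions[sid]
                                                                                                                                                                                          if !ok {
                                                                                                                                                                                              return false
                                                                                                                                                                                          }
                                                                                                                                                                                          ip, _, err := net.SplitHostPort(r.RemoteAddr)
                                                                                                                                                                                          if err != nil || ip != s.ip || time.Since(s.lastSeen) > 30*time.Minute {
                                                                                                                                                                                              return false // stale or moved: re-prompt for auth
                                                                                                                                                                                          }
                                                                                                                                                                                          s.lastSeen = time.Now()
                                                                                                                                                                                          return true
                                                                                                                                                                                      }

                                                                                                                                                                                      func main() {} // sketch only; wire stillValid into real handlers
                                                                                                                                                                                      ```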

                                                                                                                                                                                      1. 3

                                                                                                                                                                                        One way around your issue would be, when you load the session on the server, to check whether the IP address has changed, how long since it was last seen, and so on. If something changed, chances are it’s not the same user, and you can re-prompt for auth to verify them again.

                                                                                                                                                                                        This is complex and error-prone. It also puts the burden on every application developer to understand how to make sessions in URLs secure. Too many applications still struggle with basic, solved issues like SQL injection and XSS; I don’t think we need yet another common attack vector for web applications.

                                                                                                                                                                                        1. 1

                                                                                                                                                                                          I don’t disagree with your point, but I’ll just add that nobody can get cookies right either, so cookies have the same issues, just in different ways.

                                                                                                                                                                                          1. 2

                                                                                                                                                                                            It still seems to me that HttpOnly Secure first-party cookies are better than encoding sessions into URLs. The mitigation factors you describe with URLs are heuristics that can be worked around; for example, if you’re on public WiFi, you and your attacker may share an IP address. Similarly, timing checks are not a strong guarantee.

                                                                                                                                                                                            People do manage to mess up cookie security as well. But it’s much easier to get cookies right than getting sessions-in-URLs right, and when you get them right you get strong guarantees instead of heuristics. And when you get them wrong, insecure cookies are harder to exploit than insecure URLs: insecure cookies need an attacker who is monitoring traffic or otherwise actively attempting an exploit, whereas anyone could exploit an insecure URL posted to Reddit.

                                                                                                                                                                                      2. 1

                                                                                                                                                                                        Fair point, yes. I hadn’t thought about that.

                                                                                                                                                                                      3. 4

                                                                                                                                                                                        Your comment brings back memories of ?PHPSESSID. One fairly serious drawback is link sharing: browsing a shop you’re logged into, you want to share a link to an item with a friend, and you inadvertently send them a way into your session.