1. 12

    Is the world even a little bit better because of startups like Instagram, Uber, and Peloton?

    I don’t know about Instagram and Peloton, but Uber has definitely made life better for me and some of my friends. As a blind friend of mine put it:

    I cannot drive. I take Uber to and from work every day using an Uber pass. If my alternative were public transit, a bus to a train to another bus or train depending, it would take me 1.5 hours per trip, for a total of 3 hours a day. I would not work in the corporate world if this were the case. […] For blind people, these often-derided services are the difference between a full life where we can participate, and being marginalized outcasts who are constantly late, smelling like the bus, and totally inconvenienced when compared with the guy who can just hop in his car.

    Sure, the taxis that Uber supplanted should have filled this role, but in practice, they didn’t. So score one for startups.

    1. 23

      A big part of the reason public transit in most of the US is such a disaster is the very same set of forces that led to the rise of Uber. It doesn’t have to be that way.

      1. 5

        Maybe it doesn’t have to be that way. But right now, it is that way. So in the world as it actually is, Uber has done some good. That’s why it bothered me that the OP seemed dismissive about it.

        1. 10

          The world is not the US. In the rest of the world, Uber is just a way to escape regulations and avoid taxes. They don’t provide a better service than taxis; they are just slightly cheaper because they don’t play by the rules.

          1. 4

            I live in the rest of the world, and they do definitely provide a better service than taxis.

            Being unable to find the taxi because it arrived around the corner is no longer an issue.

            Being able to communicate with the driver despite not sharing a common language is no longer an issue.

            Not arriving at the correct destination is no longer an issue.

            The potential for the driver to defraud the customer by “taking the scenic route” is no longer an issue.

            1. 4

              Well, these are not improvements brought by Uber, but by using an app to plan the ride, which taxi companies do too. Clearly, in many places this kind of digitalization is lagging behind. And to be clear, I’m not saying the rest of the world has better taxi companies and public transportation than the US; the problem is believing these things are intrinsic to the product rather than dependent on the context, a context that in other places might be even worse than in the US.

              1. 5

                These are not improvements brought by Uber, but by using an app to plan the ride.

                Uber pioneered this. I think separating the good things that Uber did — creating an app-based transportation service — from Uber-the-company, and then leaving only their failures, isn’t particularly useful. Anything can be torn down that way: of course if you strip out the good parts, only the bad parts remain.

                1. 4

                  Uber pioneered this

                  In the US, maybe. In Europe we have various apps that work with existing taxi networks and regulations; they existed well before Uber spread to Europe, and they operate in places where Uber is illegal. Hailo, myTaxi, and Clever Taxi were all founded when Uber wasn’t even available to the public, almost two years before it came to Europe. Just to name a few that were successful.

                  I mean, once smartphones became available to a larger population, these kinds of apps were quite obvious, just like apps for using public transportation without paper tickets. Uber became ginormous not because they were offering anything that dozens of other companies weren’t offering, but like every unicorn they grew exponentially because they were better at attracting investors and avoiding regulations.

                  1. 2

                    Uber was founded years before Clever Taxi or Hailo. And myTaxi was nothing like Uber — they didn’t even process payments — until they pivoted in 2012, long after Uber had proven the model.

                    Uber became ginormous not because they were offering anything that dozens of other companies weren’t offering, but like every unicorn they grew exponentially because they were better at attracting investors and avoiding regulations.

                    Uber succeeded because they offered a better product. Hailo and Uber eventually went head to head in NYC, and Hailo failed because they offered a significantly worse experience and couldn’t get enough taxi drivers to sign up for it to be worth using.

                    https://fortune.com/2014/10/14/hailo-taxi-app-funding-failure/

                2. 2

                  Sure. I’m not defending or praising Uber specifically, but I doubt ride-planning apps would have materialised without competition from Uber and similar services.

                  In all the places I travel to in the world currently, the choices are either to call a local taxi company and suffer all the issues I listed, or just use the local Uber-like and suffer none of them.

                  1. 1

                    This is flatly false. I have friends at taxi companies that were approached by development shops to make an app years before Uber was around. The main difference is Uber had global ambitions and wanted to own the drivers. Claiming that there would be no apps like this without Uber is like saying we needed DoorDash for food delivery. It’s simply ahistorical.

                    1. 2

                      How is it false? Ok, there was an approach. What was the outcome? And even if that one taxi company decided to invest in that technology, how can you extrapolate that to the rest of the global market? Most taxi companies still don’t have anything like this!

                      And your analogy is quite bad. It’s not like saying we needed DoorDash to have food delivery. That would be analogous to me saying we needed Uber to have taxis, which I didn’t say.

                      1. 1

                        Let me pull back a bit. I don’t think a taxi app is Uber’s innovation; I think its innovation is striving to be a global taxi app, something that no previous company seems to even have aspired to be. You said ride-sharing apps wouldn’t have emerged without Uber, and that’s flatly false: I saw pitches for, and used, several such apps years before Uber appeared.

                        1. 1

                          Is there a meaningful difference to me as a consumer between ride hailing apps existing only in a few locations that I will never visit and ride hailing apps not existing at all?

                          That’s the point I am trying to make.

                          1. 1

                            The only meaningful difference is if you live in those places. There are regional taxi apps that pay their drivers better and are made by local programmers. If you want to say that Uber is one of the first that got global reach, we have no disagreement, but that’s not what I took you to mean.

        2. 15

          I’m happy that your friend has found a greater quality of life by using Uber. I’d just like to mention another model.

          In Sweden (at least in Stockholms län), a disabled or blind person can apply for Färdtjänst - transport help. Basically,

          • access to public transport is free
          • one can get a cab ride anywhere in the greater Stockholm area at cost. Usually the cab ride is shared with other people.

          All of this is of course financed by taxes ;)

          Caveats:

          • just like with Uber, peak traffic times make it hard to get a fast ride
          • the authorities negotiate with cab companies and pay a fixed price, so it’s not always in a cab driver’s best interest to accept the ride. My wife has told me of surly or impatient drivers
          1. 2

            Replying to my own comment, as I cannot edit the previous one any longer.

            I have been informed that this kind of system also exists in the USA, in Pennsylvania: https://myaccessride.com/

          2. 9

            Here in Berlin there’s a service called Berlkönig (a wordplay combining Berlin with Erlkönig) which is run by the main public transport organization, and it runs shared rides which tend to be far cheaper than Uber. Uber will pick you up a few minutes faster but may cost at least twice as much. I tend to take Berlkönig once per month when I end up at a friend’s house late on a weekday night when the public transit is running less frequently.

            When I lived in NYC, I was frequently struck by how it felt like kind of an island of blind-friendliness in a country where public transit was so actively destroyed by the auto lobbies etc… When I return to NYC I am struck by a bit of a feeling that the public transit has gotten worse (I experienced it just before Hurricane Sandy and the nice years afterward, when my subway line had had much of its equipment replaced). But I’m not sure if that’s true or just my reaction now that I’m used to a different system. Uber cost about 10-20x the public transit rate to get me home, and often took twice as long, but it was relaxing. I worked at a place that would reimburse Uber trips home, so I would take it when I felt exhausted, maybe once every two weeks.

            Other systems exist and work well.

            1. 1

              Uber also provides shared rides in the US that are dramatically cheaper. If they don’t offer them in Germany… Maybe local regulation prohibits that? Not sure. The shared rides are also more profitable for Uber: they only have to pay one driver, for many more people.

          1. 12

            Let me play some alternative history here…

            Take something like a shopping cart. If you tried to do this before cookies, when people put a product into a shopping cart on the first page they visited, as soon as they clicked on anything else, the site would think this was a completely new visit.

            You could create a server-side session on the first POST request that adds an item to the shopping cart, and append its generated id to all links. A user could even bookmark any of those links to return to their session… if only browsers didn’t rely on cookies for remembering state, and bookmarking UI hadn’t been abandoned.
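
            To make that concrete, here’s a rough sketch of the idea (a hypothetical Flask app; the names are illustrative and the in-memory dict stands in for a real session store):

                import secrets
                from flask import Flask, redirect, request, url_for

                app = Flask(__name__)
                SESSIONS = {}  # sid -> {"cart": [...]}; a real app would persist this

                @app.post("/cart/add")
                def add_to_cart():
                    sid = request.args.get("sid")
                    if sid not in SESSIONS:
                        # First POST creates the session; no cookie involved.
                        sid = secrets.token_urlsafe(16)
                        SESSIONS[sid] = {"cart": []}
                    SESSIONS[sid]["cart"].append(request.form["item"])
                    # Every link we render from here on carries the sid, so
                    # following or bookmarking any of them returns to this session.
                    return redirect(url_for("view_cart", sid=sid))

                @app.get("/cart")
                def view_cart():
                    session = SESSIONS.get(request.args.get("sid"), {"cart": []})
                    return {"cart": session["cart"]}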

            Take subscriptions. Without cookies, we have to ask people to manually log in every single time they click on a link. Not just the first time, but between every page view.

            HTTP has an extensible authentication mechanism built in. If browsers didn’t rely on sites abusing cookies for user sessions, we could have a universal, standard login/logout button working on every site. Instead, HTTP authentication UI was abandoned and never progressed beyond incomprehensible modal windows.
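
            For instance, the whole challenge/response flow fits in a few lines (a toy http.server sketch of RFC 7617 Basic auth; the hard-coded credentials are purely for illustration):

                import base64
                from http.server import BaseHTTPRequestHandler, HTTPServer

                def credentials_ok(header: str) -> bool:
                    # header is "Basic base64(user:password)"; toy check, illustration only
                    try:
                        decoded = base64.b64decode(header.split(" ", 1)[1]).decode()
                        return tuple(decoded.split(":", 1)) == ("alice", "secret")
                    except Exception:
                        return False

                class Handler(BaseHTTPRequestHandler):
                    def do_GET(self):
                        if not credentials_ok(self.headers.get("Authorization", "")):
                            # The challenge: the browser itself renders the login UI
                            # and then resends credentials on every request.
                            self.send_response(401)
                            self.send_header("WWW-Authenticate", 'Basic realm="example"')
                            self.end_headers()
                            return
                        self.send_response(200)
                        self.end_headers()
                        self.wfile.write(b"hello, authenticated user\n")

                HTTPServer(("", 8000), Handler).serve_forever()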

            Etc., etc. What I’m saying is, REST (with HTTP/1.1 as a reference implementation) was designed with all of these in mind (the often-misunderstood HATEOAS is about exactly that). But, for better or for worse, cookies happened and technology went another way. That was a historical accident, though, not a technological limitation.

            1. 7

              I think this has many more downsides than first-party cookies. URLs are designed to be shareable; encoding session state into the URL is asking for users to get their accounts taken over through social engineering. “Send me a link” sounds much less malicious than “open your settings page and go to the Advanced section and find your cookies and copy the cookie for this website and send it to me.” It would probably even happen by accident on sites like this one (and Reddit, HN, Twitter, Facebook, etc).

              Not to mention how simple it would make various web attacks, e.g. abusing the Referer header to steal credentials. All you need to do is be able to modify an href attribute and you have an account takeover — that’s not much defense-in-depth.

              IMO first-party cookies, enforced over SSL with the Secure directive, and made inaccessible to JavaScript via the HttpOnly directive, are actually a fairly good mechanism for handling session state. It’s third-party cookies that create tracking issues.

              (Personally I wish cookies were inaccessible to JS by default, and allowing JS cookie access was opt-in, but alas. I also wish sites served over SSL automatically defaulted to Secure cookies. Small but probably meaningful nits.)
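
              For reference, the setup described above is just a couple of attributes on the cookie (a minimal sketch, assuming Flask; Secure, HttpOnly, and SameSite are the actual cookie attributes):

                  import secrets
                  from flask import Flask, make_response

                  app = Flask(__name__)

                  @app.post("/login")
                  def login():
                      resp = make_response("logged in")
                      resp.set_cookie(
                          "session",
                          secrets.token_urlsafe(32),
                          secure=True,    # never sent over plaintext HTTP
                          httponly=True,  # invisible to document.cookie
                          samesite="Lax", # blunts CSRF from cross-site navigation
                      )
                      return resp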

              1. 1

                The URL holding state already happens out in the world. One way around your issue: when you load the state on the server, check whether the IP address changed, plus the time since last seen, etc. If something changed, chances are it’s not the same user, and you can re-prompt for auth, just to verify them again.
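
                Concretely, something like this sketch (all names hypothetical):

                    import time

                    MAX_IDLE_SECONDS = 15 * 60  # arbitrary cutoff for this sketch

                    def session_still_trusted(session: dict, request_ip: str) -> bool:
                        if request_ip != session["last_ip"]:
                            return False  # network changed: re-prompt for auth
                        if time.time() - session["last_seen"] > MAX_IDLE_SECONDS:
                            return False  # gone stale: re-prompt for auth
                        session["last_seen"] = time.time()
                        return True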

                I don’t disagree that first-party, TLS-only, HttpOnly cookies are also an OK way to handle this.

                1. 3

                  One way around your issue: when you load the state on the server, check whether the IP address changed, plus the time since last seen, etc. If something changed, chances are it’s not the same user, and you can re-prompt for auth, just to verify them again.

                  This is complex and error-prone. It also puts the burden on every application developer to understand how to make sessions-in-URLs secure. Too many applications still struggle with basic, solved issues like SQL injection and XSS; we don’t need yet another common attack vector for web applications.

                  1. 1

                    I don’t disagree with your point, but I’ll just add that nobody can get cookies right either, so it’s the same issue(s) for cookies, just in different ways.

                    1. 2

                      It still seems to me that HttpOnly Secure first-party cookies are better than encoding sessions into URLs. The mitigation factors you describe with URLs are heuristics that can be worked around; for example, if you’re on public WiFi, you and your attacker may share an IP address. Similarly, timing checks are not a strong guarantee.

                      People do manage to mess up cookie security as well. But it’s much easier to get cookies right than to get sessions-in-URLs right, and when you get them right you get strong guarantees instead of heuristics. And when you get them wrong, insecure cookies are harder to exploit than insecure URLs: insecure cookies need an attacker who is monitoring traffic or otherwise actively attempting an exploit, whereas anyone could exploit an insecure URL posted to Reddit.

                2. 1

                  Fair point, yes. I didn’t think about it.

                3. 4

                  Your comment is bringing back memories of ?PHPSESSID. One fairly serious drawback is link sharing: browsing a shop you’re logged into, you want to share a link to an item with a friend, and you inadvertently send them a way into your session.

                1. 7

                  I think the author of this post is correct in surmising that the proliferation of feature-rich graphical editors such as Visual Studio Code, Atom, and Sublime Text has a direct correlation to the downturn in Emacs usage in recent years. This might seem a little simplistic, but I think the primary reason for most people not even considering Emacs as their editor comes from the fact that the aforementioned programs are customizable using a language that they are already familiar with, either JS or Python. Choosing between the top two interpreted languages for your editor’s scripting system is going to attract more people than choosing a dialect of Lisp. The fact that Emacs Lisp is one of the most widely-used Lisp dialects tells you something about how popular Lisp is for normal day-to-day programming. It’s not something that most are familiar with, so the learning curve to configuring Emacs is high. Meanwhile, VS Code and Atom let you configure the program with JSON and JavaScript, which is something I believe most developers in the world are familiar with at least on a surface level. If you can get the same features from an editor that is written in a familiar language, why would you choose an editor that requires you to learn something entirely different?

                  I use Emacs, but only for Org-Mode, and I can tell you from experience that editing the configs takes a bit of getting used to. I mostly use Vim and haven’t really compared it to Emacs here, because I don’t feel like the two are easily comparable. Although Vim’s esoteric “VimL” scripting language suffers from the same problems as Emacs Lisp, the fact that Vim can be started up and used with relatively minimal configuration means that a lot of users won’t ever have to write a line of VimL in their lives.

                  1. 13

                    I might be mistaken, but I don’t think most users of “feature-rich, graphical editors” customize their editor using “JS or Python”, or at least not in the same way one would customize Emacs. Emacs is changed by being programmed: your init.el or .emacs is an elisp program that initializes the system (setting the customize system aside). From what I’ve seen of Atom, VS Code, and the like, you get JSON and perhaps a prettier interface. An Emacs user should be encouraged to write their own commands; that’s why the *scratch* buffer is created. It might just be the audience, but I don’t hear of VS Code users writing their own JavaScript commands to program their environment.

                    It’s unusual from the outside, I guess. And it’s a confusion that’s reflected in the choice of words. People say “Emacs has a lot of plugins”, as that’s what they are used to from other editors. Eclipse, Atom, etc. offer an interface to extend the “core”. The difference is reflected in the sharp divide between users and (plugin) developers. Compare that to Emacs, where you “customize” by extending the environment. For that reason the difference between “users” and “developers” is more of a gradient, or that’s at least how I see it. And ultimately, Lisp plays a big part in this.

                    It was through Emacs that I learned to value Free Software, not as in “someone can inspect the code” or “developers can fork it”, but as in “I can control my user environment”, even with its warts. Maybe it’s not too popular, or maybe there are just more easy alternatives nowadays, but I know that I won’t compromise on this. That’s also probably why we’re dying :/

                    1. 12

                      Good defaults help. People like to tweak, but they don’t want to tweak just to get started. There’s also how daunting it can appear. I know with Vim I can get started on any system, and my preferred set of tweaks is less than five lines of simple config statements (well, Vim is terse and baroque, but it’s basically just setting variables, not anything fancy). With Emacs, there’s a lot to deal with, and a lot has to be done by basically monkey-patching - not very friendly to start with when all you want is, say, “keep dired from opening multiple buffers”.

                      Also, elisp isn’t even a very good Lisp, so even the people who’d be more in tune with it could be turned off.

                      1. 3

                        Also, elisp isn’t even a very good Lisp, so even the people who’d be more in tune with it could be turned off.

                        I agree on the defaults (not that I find vanilla Emacs unusable, either), but I don’t really agree with this. It seems to be a common meme that Elisp is a “bad lisp”, which I guess is not wrong when compared to some Scheme and CL implementations (insofar as one understands “bad” as “not as good as”). But it’s still a very enjoyable language, and perhaps it’s just me, but I have a lot more fun working with Elisp than with Python, Haskell, or whatever. For all its deficiencies it has the strong point of being extremely well integrated into Emacs – because the entire thing is built on top of it.

                        1. 1

                          I also have a lot more fun working with Elisp than most other languages, but I think in a lot of regards it really does fail. Startup being significantly slower than I feel it could or should be is my personal biggest gripe. These days, people like to talk about Lisp as a functional language, and I know that rms doesn’t subscribe to that, but the fact that by default I’m effectively blocked from writing recursive functions (no tail-call optimization, and a shallow default recursion limit) is quite frustrating.

                      2. 3

                        It’s true, Emacs offers a lot more power, but it requires a time investment in order to really make use of it. Compare that with an editor or IDE where you can get a comfortable environment with just a few clicks. Judging by the popularity of macOS vs Linux for desktop/workstation use, I would imagine the same can be said for editors. Most people want something that “just works” because they’re busy with other problems during the course of their day. These same people probably aren’t all that interested in learning the Emacs philosophy and getting to work within a Lisp machine, but there are definitely a good number of people who are. I don’t think Emacs is going anywhere, but it’s certainly not the best choice for most people anymore.

                        1. 7

                          I can’t find it now, but someone notes something along those lines in the thread, saying that Emacs doesn’t offer “instant gratification” but requires effort to get into. And at some point it’s just a philosophical discussion about what is better. I, having invested the time and effort, certainly think it is worth it, and believe that’s the case for many others too.

                          1. 7

                            Most people want something that “just works” because they’re busy with other problems during the course of their day.

                            This has been my experience. I learned to use Vim when I was in school and had lots of free time to goof around with stuff. I could just as easily have ended up using Emacs; I chose Vim more or less at random.

                            But these days I don’t even use Vim for programming (I still use Vimwiki for notes) because I simply don’t have time to mess around with my editor or remember what keyboard shortcuts the Python plugin uses versus the Rust plugin, or whatever. I use JetBrains IDEs with the Vim key bindings plugin, and that’s pretty much all the customization I do. Plus JB syncs my plugins and settings across different IDEs and even different machines, with no effort on my part.

                            So, in some sense, I “sold out” and I certainly sacrificed some freedom. But it was a calculated and conscious trade-off because I have work to do and (very) finite time in which to do it.

                            1. 3

                              IDEs are actually quite complicated and come with their own sets of quirks that people have to learn. I was very comfortable with VS Code because I’ve been using various Microsoft IDEs through the years, and the UI concepts have been quite consistent among them. But a new user still needs to internalize the project view, the editing view, the properties view, and the runtime view, just as I, as a new user of Emacs, had to internalize its mechanisms almost 30 years ago.

                              It’s “easier” now because of the proliferation of guides and tutorials, and also because GUI interfaces are probably inherently more explorable than console ones. That said, don’t underestimate the power of M-x apropos when trying to find some functionality in Emacs…

                            2. 2

                              Although I tend to use Vim, I actually have configured Atom with custom JS and CSS when I’ve used it (it’s not just JSON; you can easily write your own JS that runs in the same process space as the rest of the editor, similar to Elisp and Emacs). I don’t think the divide is as sharp as you might think; I think that Emacs users are more likely to want to configure their editors heavily than Atom or VSCode users (because, after all, Elisp configuration is really the main draw of Emacs — without Elisp, Emacs would just be an arcane, needlessly difficult to use text editor); since Atom and VSCode are also just plain old easy-to-use text editors out of the box, with easy built-in package management, many Atom/VSCode users don’t find the need to write much code, especially at first.

                              It’s quite easy to extend Atom and VSCode with JS/CSS, really. That was one of the selling points of Atom when it first launched: a modern hackable text editor. VSCode is similar, but appears to have become more popular by being more performant.

                              1. 2

                                Yeah, I use plugins in every editor, text or GUI. I’ve never written a plugin in my life, nor will I. I’m trying to achieve a goal, not yak-shave a plugin along the way.

                                1. 3

                                  I’m trying to achieve a goal, not yak-shave a plugin along the way.

                                  That’s my point. Emacs offers the possibility that extending the environment isn’t a detour but a method to achieve your goals.

                                  1. 4

                                    Writing a new major mode (or, hell, even a new minor mode) is absolutely a detour. I used emacs for the better part of a decade and did each several times.

                                    I eventually got tired of it, and just went to what had the better syntax support for my primary language (rust) at the time (vim). I already used evil so the switch was easy enough.

                                    I use VSCode with the neovim backend these days because the language server support is better (mostly: viewing docstrings from RLS is nicer than from a panel in neovim), and getting set up for a new language is easier than vim/emacs.

                                    1. 1

                                      It’s not too surprising to me that the line marking a detour gets drawn somewhere between automating a task by writing a command and starting an entirely new project. But even then, I think it’s not that clear. One might start by writing a few commands, and then bundle them together in a minor mode. That’s little more than creating a keymap and writing a bare-minimum define-minor-mode.

                                      In general, it’s just like any automation, imo. It can help you in the long term, but it can get out of hand.

                              2. 6

                                but I think the primary reason for most people not even considering Emacs as their editor comes from the fact that the aforementioned programs are customizable using a language that they are already familiar with, either JS or Python

                                I disagree.

                                I think most people care that there’s a healthy extension ecosystem that just works and is easy to tap into - they basically never want to have to create a plugin themselves. To achieve that, you need to attract people to create plugins, which is where your point comes in.

                                As a thought experiment, if I’m a developer who’s been using VS Code or some such for the longest time, where it’s trivial to add support for new languages through an almost one-click extension system, what’s the push that has me looking for new editors and new pastures?

                                I can see a few angles myself - emacs or vim are definitely snappier, for instance.

                                EDIT: I just spotted Hashicorp are officially contributing to the Terraform VS Code extension. At this point I wonder if VS Code’s extension library essentially has critical mass.

                                1. 3

                                  Right: VS Code and Sublime Text aren’t nearly as featureful as Emacs, and they change UIs without warning, making muscle memory a liability instead of an asset. Their popularity comes from marketing and visual flash, which Emacs currently doesn’t have; but Emacs is what you make of it, and it rewards experience.

                                1. 2

                                  It would be fun to know which one is Google, which is Facebook, Lyft, etc., as we know she’s worked at those places.

                                  Some of this just sounds incredibly broken.

                                  1. 2

                                    C is Google.

                                    1. 1

                                      D or E is Facebook — you can run Linux natively on your laptop at FB (you can choose a Thinkpad if you want native Linux support), but I’m not sure how well-supported virtualization on a Mac laptop is if you choose to go with a Mac.

                                      Things also may have changed since she worked at FB, since that was a while ago; maybe they didn’t support native Linux laptops back then, and only supported virtualized Linux.

                                      At least as of a few years ago Lyft was A, as far as I’ve heard (had a former coworker from Lyft). But things may have changed significantly since then.

                                      TBH most startups start with A and gradually move towards something else, typically when A becomes painful to work with. Dropbox was A for a while, Airbnb was A for a while, etc. I think this is mostly a function of tooling investment and maturity at a company.

                                    1. 1

                                      a proper solution for true fault isolation would have been one microservice per queue per customer, but that would have required over 10,000 microservices

                                      …Why would Segment create different, individual microservices for every customer?

                                      1. 1

                                        I suspect they really meant “one worker process/queue per customer” so one customer’s sudden influx of work doesn’t delay another customer’s work. If it’s a Rails monolith, you could conceivably start 10,000 Sidekiq processes to handle each customer’s workload.
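
                                        As a toy sketch of that isolation model (hypothetical Redis-backed queues via redis-py, not Segment’s actual stack):

                                            import json
                                            import redis

                                            r = redis.Redis()

                                            def handle(job: dict) -> None:
                                                ...  # application-specific work

                                            def enqueue(customer_id: str, job: dict) -> None:
                                                # One queue per customer: a burst from one customer
                                                # only backs up that customer's own queue.
                                                r.rpush(f"jobs:{customer_id}", json.dumps(job))

                                            def worker(customer_id: str) -> None:
                                                # Each worker process drains exactly one customer's queue.
                                                while True:
                                                    _, raw = r.blpop(f"jobs:{customer_id}")
                                                    handle(json.loads(raw))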

                                      1. 12

                                        This mostly seems like a reaction against Cloudflare promoting RPKI, from people/ISPs who don’t want to bother with RPKI. Since IMO RPKI is a good, valuable improvement to security, I’m a bit disinclined to give much credence to this.

                                        E.g. the claim that Cloudflare is supposedly stifling the combination of their services with other providers, because you have to… pay to do so instead of using their free plan? That doesn’t sound malicious to me. If you’ve never paid them a dime, why do you expect them to provide any free service you want? It’s not even expensive: it’s $5/month for all the features of their Business plan as long as you’re under 10MM requests/month (and fairly reasonable pricing over that, too). It’s not like the ISPs who are complaining about RPKI are giving away their services for free; why should they complain about Cloudflare charging reasonable prices for their services too?

                                        1. 1

                                          […] Cloudflare promoting RPKI, from people/ISPs who don’t want to bother with RPKI.

                                          More like a reaction against Cloudflare engaging in counterproductive behaviour, i.e. the public naming and shaming of those who don’t yet (fully) support RPKI.

                                          That being said, please look at the tag and the disclaimer/footer on the page:

                                          While this site is a parody, it may contain factual information. :)

                                          ;^)

                                        1. 27

                                          It’s worth linking to A&A’s (a British ISP) response to this: https://www.aa.net.uk/etc/news/bgp-and-rpki/

                                          1. 16

                                            Our (Cloudflare’s) director of networking responded to that on Twitter: https://twitter.com/Jerome_UZ/status/1251511454403969026

                                            there’s a lot of nonsense in this post. First, blocking our route statically to avoid receiving inquiries from customers is a terrible approach to the problem. Secondly, using the pandemic as an excuse to do nothing, when precisely the Internet needs to be more secure than ever. And finally, saying it’s too complicated when a much larger network than them like GTT is deploying RPKI on their customers sessions as we speak. I’m baffled.

                                            (And a long heated debate followed that.)

                                            A&A’s response on the one hand made sense - they might have fewer staff available - but on the other hand RPKI isn’t new and Cloudflare has been pushing carriers towards it for over a year, and route leaks still happen.

                                            Personally as an A&A customer I was disappointed by their response, and even more so by their GM and the official Twitter account “liking” some very inflammatory remarks (“cloudflare are knobs” was one, I believe). Very unprofessional.

                                            1. 15

                                              Hmm… I do appreciate the point that route signing means a court can order routes to be shut down, in a way that wouldn’t have been as easy to enforce without RPKI.

                                              I think it’s essentially true that this is CloudFlare pushing its own solution, which may not be the best. I admire the strategy of making a grassroots appeal, but I wonder how many people participating in it realize that it’s coming from a corporation which cannot be called a neutral party?

                                              I very much believe that some form of security enhancement to BGP is necessary, but I worry a lot about a trend I see towards the Internet becoming fragmented by country, and I’m not sure it’s in the best interests of humanity to build a technology that accelerates that trend. I would like to understand more about RPKI, what it implies for those concerns, and what alternatives might be possible. Something this important should be a matter of public debate; it shouldn’t just be decided by one company aggressively pushing its solution.

                                              1. 4

                                                This has been my problem with a few other instances of corporate messaging. Cloudflare and Google are giant players that control vast swathes of the internet, and they should be looked at with some suspicion when they pose as simply supporting consumers.

                                                1. 2

                                                  Yes, that is correct: trust needs to be earned. During the years I worked on privacy at Google, I liked to remind my colleagues of this. It’s easy to forget when you’re inside an organization like that, surrounded by people who share not only your background knowledge but also your biases.

                                              2. 9

                                                While the timing might not have been the best, I would overall be on Cloudflare’s side on this. When would the right time to release this be? If Cloudflare had waited another 6-12 months, I would expect a pretty much identical response from the ISPs then as well. And I seriously doubt that their actual actions and the associated risks would be any different.

                                                And as ISPs keep showing over and over, statements like “we do plan to implement RPKI, with caution, but have no ETA yet” all too often mean that nothing will ever happen without efforts like what Cloudflare is doing here.


                                                Additionally,

                                                If we simply filtered invalid routes that we get from transit it is too late and the route is blocked. This is marginally better than routing to somewhere else (some attacker) but it still means a black hole in the Internet. So we need our transit providers sending only valid routes, and if they are doing that we suddenly need to do very little.

                                                That is some really suspicious reasoning to me. I would say that black-hole routing the bogus networks is in every instance significantly, rather than marginally, better than just hoping that someone reports the problem to them so that they can resolve it manually.

                                                Their transit providers should certainly be better at this, but that doesn’t remove any responsibility from the ISPs. Mistakes will always happen, which is why we need defense in depth.

                                                1. 6

                                                  Their argument is a bit weak in my personal opinion. The reason in isolation makes sense: We want to uphold network reliability during a time when folks need internet access the most. I don’t think anyone can argue with that; we all want that!

                                                  However, they use it to excuse doing nothing, when they’re actually in a situation where both implementing RPKI and not implementing it can reduce network reliability.

                                                  If you DO NOT implement RPKI, you allow route leaks to continue happening and reduce the reliability of other networks and maybe yours.

                                                  If you DO implement RPKI, sure there is a risk that something goes wrong during the change/rollout of RPKI and network reliability suffers.

                                                  So, all things being equal, I would choose to implement RPKI, because at least with that option I have greater control over whether or not the network will be reliable. Whereas by not implementing it, you’re just subject to everyone else’s misconfigured routers.

                                                  Disclosure: current Cloudflare employee/engineer, but opinions are my own, not my employer’s. I’m also not a network engineer, so hopefully my comment doesn’t have any glaring ignorance.

                                                  1. 4

                                                    Agreed. A&A does have a point regarding Cloudflare’s argumentum in terrorem, especially the name-and-shame “strategy” via their website as well as Twitter. Personally, I think it is a dick move. This is the kind of stuff you get as a result:

                                                    This website shows that @VodafoneUK are still using a very old routing method called Border Gateway Protocol (BGP). Possible many other ISP’s in the UK are doing the same.

                                                    1. 1

                                                      I’m sure the team would be happy to take feedback on better wording.

                                                      The website is open sourced: https://github.com/cloudflare/isbgpsafeyet.com

                                                      1. 1

                                                        The website is open sourced: […]

                                                        There’s no open source license in sight so no, it is not open sourced. You, like many other people, confuse and/or conflate anything being made available on GitHub with being open source. This is not the case - without an associated license (and please don’t use a viral one - we’ve got enough of that already!), the code posted there doesn’t automatically become public domain. As it stands, we can see the code, and that’s that!

                                                        1. 7

                                                          There’s no open source license in sight so no, it is not open sourced.

                                                          This is probably a genuine mistake. We never make projects open until they’ve been vetted and appropriately licensed. I’ll raise that internally.

                                                          You, like many other people, confuse and/or conflate anything being made available on GitHub with being open source.

                                                          You are aggressively assuming malice or stupidity. Please don’t do that. I am quite sure this is just a mistake; nevertheless, I will ask internally.

                                                          1. 1

                                                            There’s no open source license in sight so no, it is not open sourced.

                                                            This is probably a genuine mistake. We never make projects open until they’ve been vetted and appropriately licensed.

                                                            I don’t care either way - not everything has to be open source everywhere, e.g. a website. I was merely stating a fact - nothing else.

                                                            You are aggressively […]

                                                            Not sure why you would assume that.

                                                            […] assuming malice or stupidity.

                                                            Neither - ignorance at most. Again, this is purely a statement of fact - no more, no less. Most people know very little about open source and/or nothing about licenses. Otherwise, GitHub would not have bothered creating https://choosealicense.com/ - which itself doesn’t help the situation much.

                                                          2. 1

                                                            It’s true that there’s no license so it’s not technically open-source. That being said I think @jamesog’s overall point is still valid: they do seem to be accepting pull requests, so they may well be happy to take feedback on the wording.

                                                            Edit: actually, it looks like they list the license as MIT in their package.json. Although given that there’s also a CloudFlare copyright embedded in the index.html, I’m not quite sure what to make of it.

                                                            1. -1

                                                              If part of your (dis)service is to publicly name and shame ISPs, then I very much doubt it.

                                                    2. 2

                                                      While I think that this is ultimately a shit response, I’d like to see a more well-wrought criticism of the centralized signing authority that they mention briefly in this article. I’m trying to find more, but I’m not entirely sure of the best places to look, given my relative naïveté about BGP.

                                                      1. 4

                                                        So as a short recap, IANA is the top-level organization that oversees the assignment of e.g. IP addresses. IANA then delegates large IP blocks to the five Regional Internet Registries: AFRINIC, APNIC, ARIN, LACNIC, and RIPE NCC. These RIRs then further assign IP blocks to LIRs, which in most cases are the “end users” of those IP blocks.

                                                        Each of those RIRs maintains an RPKI root certificate. These root certificates are then used to issue certificates to LIRs that specify which IPs and ASNs that LIR is allowed to manage routes for. Those LIR certificates are in turn used to sign statements (ROAs) that specify which ASNs are allowed to announce routes for the IPs that the LIR manages.
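
                                                        As a toy illustration of what a signed ROA lets a router decide (made-up documentation prefix and ASNs, not a real validator):

                                                            from ipaddress import ip_network

                                                            # One ROA: "AS64500 may originate 192.0.2.0/24, up to /24"
                                                            ROAS = [(ip_network("192.0.2.0/24"), 24, 64500)]

                                                            def origin_check(prefix: str, origin_asn: int) -> str:
                                                                net = ip_network(prefix)
                                                                covered = False
                                                                for roa_net, max_len, asn in ROAS:
                                                                    if net.subnet_of(roa_net):
                                                                        covered = True
                                                                        if asn == origin_asn and net.prefixlen <= max_len:
                                                                            return "valid"
                                                                # covered-but-no-match is the hijack-looking case routers can drop
                                                                return "invalid" if covered else "unknown"

                                                            print(origin_check("192.0.2.0/24", 64500))  # valid
                                                            print(origin_check("192.0.2.0/24", 64666))  # invalid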

                                                        So their stated worry is then that the government in the country in which the RIR is based might order the RIR to revoke a LIR’s RPKI certificate.


                                                        This might be a valid concern, but if it is actually plausible, wouldn’t that same government already be using the same strategy to get the RIR to just revoke the IP block assignment for the LIR, and then compel the relevant ISPs to black hole route it?

                                                        And if anything this feels even more likely to happen, and more legally viable, since it could target a specific IP assignment, whereas revoking the RPKI certificate would invalidate the ROAs of all of the LIR’s IP blocks.

                                                        1. 1

                                                          Thanks for the explanation! That helps a ton to clear things up for me, and I see how it’s not so much a valid concern.

                                                      2. 1

                                                        I get a ‘success’ message using AAISP - did something change?

                                                        1. 1

                                                          They are explicitly dropping the Cloudflare route that is being checked.

                                                      1. 21

                                                        This would read a lot better without the divisive “If you:” section.

                                                        1. 15

                                                          Yeah, I agree with this. I spend plenty of free time writing code — because I enjoy it. That’s also why I made it my job; I recognize that’s not true for everyone, and some people are working a job they aren’t interested in so that they can have a decent upper-middle class lifestyle. By all means feel free to do that, it’s understandable enough. But it feels aimlessly bitter to imply that having technical hobbies is a Machiavellian bid for power over your coworkers.

                                                          We recognize that your willingness to allow your employment to penetrate deeply into your personal life means that you will inevitably become our supervisor.

                                                          Maybe you liked writing code in the first place, and thus made it your employment? Even the inverse is possible: you could’ve made a financial decision to pursue software engineering first, and then afterwards discovered you truly found it interesting.

                                                        1. -1

                                                          The best SRE recommendation around Memcached is not to use it at all:

                                                          • it’s pretty much abandonware at this point
                                                          • there is no built-in clustering or any of the HA features that you need for reliability

                                                          Don’t use memcached, use redis instead.

                                                          (I do SRE and systems architecture)

                                                          1. 30

                                                             … there was literally a release yesterday, and the project is currently sponsored by a little company called… [checks notes]… Netflix.

                                                            Does it do everything Redis does? No. Sometimes having simpler services is a good thing.

                                                            1. 11

                                                              SRE here. Memcached is great. Redis is great too.

                                                               HA has a price (leader election, tested failover, etc.). It’s an antipattern to use HA for your cache.

                                                              1. 9

                                                                 Memcached is definitely not abandonware. It’s a mature project with a narrow scope, and it excels at what it does. It’s just not as feature-rich as something like Redis. The HA story is usually provided by smart proxies (twemproxy and others).

                                                                1. 8

                                                                   It’s designed to be a cache; it doesn’t need an HA story. You run many, many nodes of it and rely on consistent hashing to scale the cluster. For this, it’s unbelievably good and just works.

                                                                  1. 3

                                                                     Seems like Hazelcast is pitching itself as the successor to Memcached: https://hazelcast.com/use-cases/memcached-upgrade/

                                                                    1. 3

                                                                       I would put it with a little more nuance: if you already have Redis in production (which is quite common), there is little reason to add Memcached too and take on complexity and new software you may not have as much experience with.

                                                                      1. 1

                                                                        this comment is ridiculous

                                                                        1. 1

                                                                          it’s pretty much abandonware at this point

                                                                           I was under the impression that Facebook uses it extensively; I guess Redis it is.

                                                                          1. 10

                                                                            Many large tech companies, including Facebook, use Memcached. Some even use both Memcached and Redis: Memcached as a cache, and Redis for its complex data structures and persistence.

                                                                            Memcached is faster than Redis on a per-node basis, because Redis is single-threaded and Memcached isn’t. You also don’t need “built-in clustering” for Memcached; most languages have a consistent hashing library that makes running a cluster of Memcacheds relatively simple.
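
                                                                              A minimal sketch of that client-side approach (hypothetical host names; the virtual nodes are what keep the key distribution even):

                                                                                  import bisect
                                                                                  import hashlib

                                                                                  HOSTS = ["cache1:11211", "cache2:11211", "cache3:11211"]
                                                                                  VNODES = 100  # virtual nodes per host smooth out the distribution

                                                                                  def _hash(s: str) -> int:
                                                                                      return int(hashlib.md5(s.encode()).hexdigest(), 16)

                                                                                  ring = sorted((_hash(f"{host}-{i}"), host)
                                                                                                for host in HOSTS for i in range(VNODES))
                                                                                  points = [p for p, _ in ring]

                                                                                  def node_for(key: str) -> str:
                                                                                      # First ring point clockwise from the key's hash, wrapping at the end.
                                                                                      i = bisect.bisect(points, _hash(key)) % len(ring)
                                                                                      return ring[i][1]

                                                                                  # Adding/removing a host remaps only ~1/len(HOSTS) of the keys,
                                                                                  # unlike naive hash(key) % N, which reshuffles almost everything.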

                                                                            If you want a simple-to-operate, in-memory LRU cache, Memcached is the best there is. It has very few features, but for the features it has, they’re better than the competition.

                                                                            1. 1

                                                                               Just as an FYI, most folks run multiple Redis instances per node (CPU count minus one is pretty common), so the “single process” thing is probably moot.

                                                                              1. 5

                                                                                N-1 processes is better than nothing but it doesn’t usually compete with multithreading within a single process, since there can be overhead costs. I don’t have public benchmarks for Memcached vs Redis specifically, but at a previous employer we did internally benchmark the two (since we used both, and it would be in some senses simpler to just use Redis) and Redis had higher latency and lower throughput.

                                                                                1. 2

                                                                                   Yup, totally. I just didn’t want people to think that there are all of these idle CPUs sitting out there. It’s super easy to multiplex across them.

                                                                                   Once you start wanting to do more complex things, structures, or caching policies, then it may make sense to move to Redis.

                                                                                  1. 1

                                                                                     Yeah, agreed, and I don’t mean to hate on Redis — if you want to do operations on distributed data structures, Redis is quite good; it also has some degree of persistence, so cache warming stops being as much of a problem. And it’s still very fast compared to most things; it’s just hard to beat Memcached at the (comparatively few) operations it supports, since it’s so simple.

                                                                        1. 3

                                                                          Can any lobsters using HTTPie explain what drew them away from curl or what about curl pushed them to HTTPie?

                                                                          1. 8

                                                                            I haven’t been using it for long but for me the nicest thing so far is being able to see the whole response: headers, body, and all of it syntax-highlighted by default. The command-line UI is a little nicer as well, more clear and intuitive.

                                                                            It will probably not replace my use of curl in scripts for automation, nor will it replace my use of wget to fetch files.

                                                                             Now if someone took this and built an Insomnia-like HTTP client usable from a terminal window, then we’d really have something cool.

                                                                            1. 1

                                                                               I’m guessing you mean this Insomnia. Looks cool. Good example of an OSS product, too, given that most features people would want are in the free one.

                                                                            2. 4

                                                                              I use both depending on circumstance (more complex use cases are better suited for curl IMO), but the significantly simpler, shortened syntax for HTTPie as well as the pretty printing + colorization by default for JSON APIs is pretty nice.

                                                                              1. 3

                                                                                I wouldn’t say I’d been ‘pushed away’ from curl, I still use curl and wget regularly, but httpie’s simpler syntax for request data and automatic coloring and formatting of JSON responses makes it a great way to make quick API calls.

                                                                                1. 3

                                                                              I like the short :8080 syntax for localhost; e.g. http :8080/api is equivalent to http http://localhost:8080/api.

                                                                                  1. 3

                                                                                    It’s all in how you like to work. Personally I enjoy having an interactive CLI with help and the like, and the ability to build complex queries piecemeal in the interactive environment.

                                                                                    1. 3

                                                                                      Sensible defaults and configurability.

                                                                                      1. 2

                                                                                        I need a command line HTTP client rarely enough that I never managed to learn curl command line flags. I always have to check the manual page, and it always takes me a while to find what I want there. I can do basic operations with HTTPie without thinking twice and the bits I need a refresher on — usually the syntaxes for specifying query parameters, form fields or JSON object fields — are super fast to locate in http --help.

                                                                                        1. 1

                                                                                          curl is the gold standard for displaying almost anything, including TLS and cert negotiation. I use bat mostly now, though, for colored output and reasonable JSON support. https://github.com/astaxie/bat

                                                                                        1. 15

                                                                                            Main thing is that safe, portable software now runs fast enough to be a good default. Twenty years ago, just having bounds checks and GC made apps unbearably slow. Today that’s only true for performance-critical apps. Fortunately, the tech for finding or proving the absence of problems in unsafe code is also extremely good compared to then.

                                                                                          1. 7

                                                                                              I’m not sure about that. 20 years ago we also had Java, C#, Python, and PHP. Their roles and perceived performance haven’t changed that much. Even though their implementations and the hardware they run on have improved dramatically, we now expect more from them, and we’re throwing more code at them.

                                                                                            1. 6

                                                                                              All of those language runtimes have seen dramatic performance improvements, as has the hardware available to run them. In 2000, writing a 3D game in C# would’ve been insane; today it’s just Unity.

                                                                                              1. 2

                                                                                                  At one point, Java was about 10-15x slower than C or C++ apps. Well-written apps in native code were always faster, with fewer resources, than the others you mentioned. That’s true of both load and run time. I always noticed a difference on both a 200MHz PII w/ 64MB of RAM at home and the 400MHz PIII w/ 128MB RAM at another place. Hell, going from just Visual Studio 6 to .NET bogged things down on that PIII. Visual Basic 6 was about as fast as REPL development, with everything taking about a second.

                                                                                                  We do have a trend where faster hardware makes even slower software seem comparable or faster. The responsiveness of native apps was usually better unless they were of the bloated variety. Modern apps are often slower in general while using more resources, too. If they ran like the old ones, I could buy a much cheaper computer with fewer resources. Good news is, besides Firefox and VLC, I have plenty of lightweight apps to choose from on Linux that make my computer run snappy like it did 10-20 years ago. :)

                                                                                                1. 3

                                                                                                    A conjecture: people upgrade to the latest hardware to get there BEFORE the devs do. Then soon, because the devs have also upgraded, they write FOR these new machines. From a game-theoretic standpoint, the CPU vendors should give the Carmacks of the world the fastest systems they can muster.

                                                                                                  1. 1

                                                                                                    This was Kay’s idea, but processing hardware doesn’t improve at a fast enough rate to justify it anymore. Unless your project is mired deep in core development for a decade, customers won’t have a twice-as-fast machine by the time it’s released.

                                                                                                  2. 3

                                                                                                      Yeah, we’re sort of in a golden age wherein, when it comes to bloat, the easiest way for a developer to avoid it isn’t avoiding useful language features like garbage collection, but simply not being sloppy with time & space complexity.

                                                                                                      I get the impression that a lot of that is actually due to improvements in compiler tech (especially JITs) and GC design, rather than hardware. The division between scripting language, bytecode language, and compiled language is fuzzier now, because most scripting languages get bytecode-compiled and most bytecode languages get JIT-compiled. So you can take arbitrarily high-level code and throw out a lot of its runtime overhead, basically making high-level code run more like low-level code. And when you can do that, you can write higher-level code in your actual language implementation too, which can make it easier to write complicated optimizations and such.

                                                                                                    I’m not really familiar with what specific optimizations might have been introduced, though, and all I know about the advances in GC tech is that even people who think they know about GC tech are apparently generally 20 years out of date on it…

                                                                                                    It’s hard to imagine “python for data science” in 1999. It’s even harder to imagine something like Julia in 1999 – a high-level high-performance garbage-collected language with strong implicit types and a REPL, intended for distributed statistical computing. It’s not that such things were impossible in ’99, but it was very much limited to weird academic projects in lisp / forth / smalltalk / whatever.

                                                                                              1. 5

                                                                                                I think Gruber is basically right re: the traditional incentives of open source being misaligned with producing high-quality user interfaces:

                                                                                                Talented programmers who work long full-time hours crafting software need to be paid. That means selling software. Remember the old open source magic formula — that one could make money giving away software by selling “services and support”? That hasn’t happened — in terms of producing well-designed end user software — and it’s no wonder why. In Raymond’s own words, the goal is:

                                                                                                software that works so well, and is so discoverable to even novice users, that they don’t have to read documentation or spend time and mental effort to learn about it.

                                                                                                It’s pretty hard to sell “services and support” for software that fits that bill. The model that actually works is selling the software itself.

                                                                                                 That being said, I’ve started using Pop!_OS from System76 recently, and it feels very polished in a way that I’m not used to with traditional Linux distros, where often the choice has felt like:

                                                                                                • Build your own lightsaber from scratch, or
                                                                                                 • Use Ubuntu and get opted into whatever way Canonical is trying to monetize this week (e.g. shipping your searches to Amazon, running dynamic ads in the MOTD, etc.).

                                                                                                It seems like System76 taking the Apple approach — that is, making money by selling hardware — is part of the reason they’ve found their footing there.

                                                                                                1. 2

                                                                                                   Probably worth noting that 99% of Pop!_OS IS Ubuntu.

                                                                                                  1. 3

                                                                                                    I mean, System76 built their own UI and replaced Canonical’s, which is kind of to Gruber’s point.

                                                                                                    (And they ripped out the weirder Canonical stuff like ads + tracking.)

                                                                                                    And Ubuntu itself is based on Debian unstable… ;)

                                                                                                1. 4

                                                                                                  “State”

                                                                                                  1. 3

                                                                                                    To support this architecture pattern, your State store should have the following properties:

                                                                                                    • Consistent. Since you’re using it to dedupe and correctly order messages from your event queue.
                                                                                                    • Available. Since in this architecture pattern all requests hit the State store at least twice, and possibly N times due to fanout on your event queue, it’s critical that this single point of failure be highly available.
                                                                                                    • Partition-tolerant. Since you’re operating in a distributed environment, your serverless functions must be able to tolerate network partitions from the State store and still ensure application consistency without impacting availability.

                                                                                                    We call these “CAP databases” (for Consistent, Available, and Partition-tolerant databases), and it’s best-practice for your State store to support all three of these properties.

                                                                                                    :P

                                                                                                    1. 1

                                                                                                              Also, are they implicitly assuming, specifically with some of the phrasing around the third factor, that the CALM theorem always holds? Seems like their out-of-order requirements are a little limiting.

                                                                                                    1. 10

                                                                                                                It makes me uncomfortable that many people don’t even consider Git as standalone software. For many it’s just an implementation detail of their GitLab or other frontend of the month. I’m not sure what to think about it.

                                                                                                      1. 2

                                                                                                        That’s kind of what made me try to dig into Git.

                                                                                                        I also discovered annexes in Git, which are pretty neat (here), but which I deemed out of scope for this article.

                                                                                                        1. 2

                                                                                                          Thought it might be worth linking to the source :^)

                                                                                                          1. 2

                                                                                                            git-annex is quite cool, and an interesting alternative to Git LFS that is easier to maintain yourself (doesn’t require a custom server, and supports multiple servers rather than requiring a canonical one). That being said it makes sense you left it out of this article — it’s not actually “barebones” git, it’s a separate package that’s maintained separately (almost entirely by its original author, Joey Hess).

                                                                                                            1. 1

                                                                                                                        Annexes look pretty cool; I wouldn’t mind seeing an article on your workflow with them.

                                                                                                              1. 1

                                                                                                                Never seriously tried them, but I might do, and write something on that topic.

                                                                                                                          I’m already gonna write an article on man and on writing documentation pages.

                                                                                                          1. 13

                                                                                                                      I originally also suppressed this output on non-terminal devices, but then prog | less would still hang without a message, which is not great. I would encourage suppressing this output with a -q or -quiet flag instead.

                                                                                                            STDIN might be a terminal while STDOUT or STDERR are not – you have different FDs, it is not a single STDIO device.

                                                                                                            For example in C, you can test particular FDs this way:

                                                                                                                      #include <stdio.h>
                                                                                                                      #include <unistd.h>
                                                                                                                      
                                                                                                                      /* Report whether the given file descriptor refers to a terminal. */
                                                                                                                      void check(int fd) {
                                                                                                                      	if (isatty(fd))  printf("FD %d is a terminal\n", fd);
                                                                                                                      	else             printf("FD %d is a file or a pipe\n", fd);
                                                                                                                      }
                                                                                                                      
                                                                                                                      int main(void) {
                                                                                                                      	check(fileno(stdin));
                                                                                                                      	check(fileno(stdout));
                                                                                                                      	check(fileno(stderr));
                                                                                                                      	return 0;
                                                                                                                      }
                                                                                                            

                                                                                                            Output:

                                                                                                            $ make is-a-tty
                                                                                                            cc     is-a-tty.c   -o is-a-tty
                                                                                                            
                                                                                                            $ ./is-a-tty 
                                                                                                            FD 0 is a terminal
                                                                                                            FD 1 is a terminal
                                                                                                            FD 2 is a terminal
                                                                                                            
                                                                                                            $ ./is-a-tty | cat
                                                                                                            FD 0 is a terminal
                                                                                                            FD 1 is a file or a pipe
                                                                                                            FD 2 is a terminal
                                                                                                            
                                                                                                            $ echo | ./is-a-tty
                                                                                                            FD 0 is a file or a pipe
                                                                                                            FD 1 is a terminal
                                                                                                            FD 2 is a terminal
                                                                                                            
                                                                                                            $ echo | ./is-a-tty | cat
                                                                                                            FD 0 is a file or a pipe
                                                                                                            FD 1 is a file or a pipe
                                                                                                            FD 2 is a terminal
                                                                                                            
                                                                                                            $ ./is-a-tty 2>/dev/null 
                                                                                                            FD 0 is a terminal
                                                                                                            FD 1 is a terminal
                                                                                                            FD 2 is a file or a pipe
                                                                                                            

                                                                                                                      I would not recommend cluttering STDOUT/STDERR with superfluous messages if there is no error.

                                                                                                                      Indeed, there is the Rule of Silence:

                                                                                                                      When a program has nothing surprising to say, it should say nothing.

                                                                                                                      Waiting for input from a file or pipe is expected, non-surprising behavior. Only when waiting for input from the terminal could it make sense to print a prompt or guide the user on what to do.

                                                                                                            1. 2

                                                                                                                        I forgot you can run isatty() on stdin, too. Previously it did check this for stdout, but I removed that earlier (isTerm is the result of isatty(stdout)).

                                                                                                              I’ll update the program and article; thanks.

                                                                                                              1. 3

                                                                                                                isatty on stdin is good to test if your users made a mistake, and then isatty() again on stderr to make sure your users are reading your message!
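
                                                                                                                          A minimal sketch of that pattern in C (the message wording and the hint itself are just placeholders):

                                                                                                                          #include <stdio.h>
                                                                                                                          #include <unistd.h>

                                                                                                                          int main(void) {
                                                                                                                          	/* stdin is a terminal: the user probably forgot to pipe anything in.
                                                                                                                          	   Only print the hint if a human is actually watching stderr. */
                                                                                                                          	if (isatty(fileno(stdin)) && isatty(fileno(stderr)))
                                                                                                                          		fprintf(stderr, "reading from terminal; pipe in data or press Ctrl-D\n");
                                                                                                                          	/* ...then read stdin as usual... */
                                                                                                                          	return 0;
                                                                                                                          }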

                                                                                                              2. 1

                                                                                                                          Strictly speaking, this is POSIX, not C. isatty has been broken in the past on Windows with some distributions of GCC; I am unsure what the status is these days.

                                                                                                              1. 4

                                                                                                                Can one of these run linux?

                                                                                                                1. 9

                                                                                                                  Some older models can. Your best resource for running Linux on a Surface device is https://www.reddit.com/r/SurfaceLinux/

                                                                                                                    WSL works well enough for me, and because of that I never considered installing Linux on it. In my opinion there are other machines that will give you a much better experience running Linux than these ones.

                                                                                                                    The value of the Surface form factor is how it can go from a laptop-style machine with keyboard and landscape screen, into a drawing machine with a pen, and into a tablet in portrait orientation. This versatility requires an OS and apps that can accommodate these various paradigms, and I don’t think any desktop environment for Linux supports this type of usage. From what I’ve read, the touch screen basically becomes a mouse. Screen rotation sort of works, but some apps don’t respond well to it. Keyboard hotplug is not foolproof, and depending on the kernel it simply doesn’t work. Also, I don’t believe the built-in LTE is supported, but I haven’t checked any of this in ages.

                                                                                                                    If I were to run Linux I’d get either a ThinkPad or a System76 machine and would probably be happier than running it on a Surface. But that is just my opinion.

                                                                                                                  1. 2

                                                                                                                    +1 to WSL on Surface devices. I’ve been using a Surface Book as my primary personal laptop for a few years, and it pretty much exists to run Xming (an X11 server for Windows), urxvt, Chrome, and a handful of PC games. It helps that I do all my development with terminal-based tools like Neovim, rather than trying to mix Linux tooling in the WSL environment with native Windows stuff; when I’ve tried to do that it’s generally worked but felt clunky (and apparently there are some issues with having WSL write to the Windows-managed portion of the filesystem and vice versa, although I’ve never personally run into problems).

                                                                                                                    Feels like having a very, very lightweight VM running Ubuntu… And apparently that’s what WSL2 actually is.

                                                                                                                  2. 1

                                                                                                                      Surface Pro X will be able to run Linux on ARM in WSL 2. I’m not sure otherwise…

                                                                                                                    1. 1

                                                                                                                      That was my first question too. The author is using the WSL layer, which I guess could work well enough, but I’d still miss i3/tiling window managers.

                                                                                                                        I had a 2nd-generation Surface Pro dual-booting Windows/Linux way back in the day. But I never really liked the experience: it wasn’t really usable on my lap like a laptop, with its weird keyboard, and I ended up selling it on one of my own nomadic adventures.

                                                                                                                      1. 1

                                                                                                                        FWIW you could run a full-screen X desktop on your Windows desktop using Xming/X410/etc. with i3 running under WSL. Getting multiple desktops and keybindings to play nice might take some fiddling, and I have no idea how those X servers would perform under x86 emulation, but all the pieces are there.
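
                                                                                                                          A rough sketch of the moving parts, assuming a WSL shell and an X server already listening on display :0 (untested; details vary by X server configuration and WSL version):

                                                                                                                          # inside WSL, point X clients at the Windows-side X server
                                                                                                                          export DISPLAY=localhost:0
                                                                                                                          # then start i3 on that display
                                                                                                                          i3 &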

                                                                                                                    1. 2

                                                                                                                      Work:

                                                                                                                      • We’re partially code-frozen for the holidays, so only super critical stuff is allowed to go in before the new year. So, a few small fixes (pushing to get most of them out tonight, sigh), but otherwise mostly planning and prototyping improvements that won’t land until after the holidays.

                                                                                                                      Personal:

                                                                                                                        • I reformatted an old Ubuntu server I built at home, hooked it up to a monitor and keyboard, installed the latest version of Pop!_OS on it, and am using it as a desktop. Pop is really nice! But that kicked off a yak-shave project of rewriting my config-management setup on Linux (e.g. installing + configuring packages, not just symlinking dotfiles), so I’m still tinkering with that this week.
                                                                                                                      • Planning winter travel — I’m meeting a friend in Flores, Indonesia for about a week, and working remotely from Tokyo for a week as well. This is somewhat complicated by the fact that my sister is due to have a son sometime in the next couple weeks; most of my family is Orthodox Jewish (I’m an atheist), so they’re hoping I could make it to the bris.
                                                                                                                      1. 1

                                                                                                                        Personally I think diceware+password manager is the ideal approach:

                                                                                                                        • The password to unlock your password manager has to be memorable, since you don’t have access to your password manager’s passwords without it.
                                                                                                                        • For everything else, even with diceware I’m not going to remember unique passwords for each site/app I use, so I’d rather just use long, completely random strings.

                                                                                                                        Hopefully your password manager isn’t using SHA256.

                                                                                                                        I also use diceware for laptop/desktop logins (with disk encryption ofc), since a password manager is awkward to use to log in — you have to carefully transcribe from a second device — and I have few enough personal machines that I can remember diceware passwords for all of them.
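
                                                                                                                          If you’re curious what such a passphrase looks like, here’s a quick shell approximation (real diceware uses physical dice and a curated wordlist rather than the system dictionary):

                                                                                                                          # six random words from the system dictionary, space-separated
                                                                                                                          shuf -n 6 /usr/share/dict/words | tr '\n' ' '; echo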

                                                                                                                        1. 6

                                                                                                                          My PC is named Phoenix because it has basically been upgraded in bits and pieces, Ship of Theseus style, continuously since I originally built it in 2006. It is currently incarnated as a Ryzen R7 1700 with an ASRock motherboard, 16 GB of RAM, an NVidia GTX 660 GPU, and a 500 GB SSD. (Most of my actual data lives on a NAS.)

                                                                                                                            Wish I could do that with a laptop. Building your own PC is cheaper and better than buying off the shelf, IMO, and I still enjoy doing it even now.

                                                                                                                          1. 2

                                                                                                                              How good are AMD processors compared to Intel? And for gaming in general?

                                                                                                                            1. 2

                                                                                                                              AMD has really stepped up recently. They’ve been beating Intel on performance and cost across their most recent line of CPUs. https://www.youtube.com/watch?v=stM2CPF9YAY https://www.pcmag.com/review/371925/amd-ryzen-9-3950x

                                                                                                                              1. 8

                                                                                                                                It’s also quite nice that AMD isn’t constantly changing their CPU sockets; every Ryzen generation has kept the same socket so far, and so (assuming your motherboard manufacturer keeps its firmware up-to-date) you can theoretically just keep upgrading the CPU as better ones come out… Whereas Intel seems to change their sockets every 1-2 years, locking you into specific generations unless you’re willing to buy a new motherboard and basically reassemble the entire PC from scratch. With the Ryzen line so far a CPU upgrade has been about as easy as a GPU upgrade, whereas with Intel you’re often stuck with whatever processor generation you bought into.

                                                                                                                                That being said, apparently AMD’s going to do a socket change in 2020, so there’s not a lot of difference if you’re buying right now. If you’re willing to wait for the next socket to come out, they’ll probably support it for a lot longer than Intel, though.

                                                                                                                                Currently AMD has Intel pretty soundly beat on price for performance, and in some benchmarks (here’s the Anandtech benchmarking for the 3950x, similar to the PCMag one linked above, but I find Anandtech even more thorough, and they test specific games as well) they beat Intel even when ignoring price.

                                                                                                                                My current gaming PC is Intel; I built it a while ago though, before Ryzen came out. My Linux desktop is AMD, and I’d probably go with AMD if I were rebuilding the gaming PC today. (And at some point presumably I’ll do it, since I’ll have to replace the gaming PC’s motherboard anyway due to the socket changing.)

                                                                                                                              2. 2

                                                                                                                                Every benchmark I’ve seen has them quite a long way ahead on price/performance.

                                                                                                                                My current PC build is running ubuntu on the cheapest available Threadripper (24 cores) and cheapest RTX-class GPU, and came in at about 2/3rds the price of my $WORK macbook pro.

                                                                                                                                  It absolutely screams along (e.g. the $WORK test suite on my 15″ MacBook Pro took 48m; on the desktop it takes 9m, and the desktop remains responsive while it’s running).

                                                                                                                                I’ve never been super interested in current-gen AAA titles, but the few I’ve tried have all worked on high settings, and it drives my VR setup very easily.

                                                                                                                                1. 2

                                                                                                                                  In my experience, AMD’s Ryzen line of processors is basically 95% of the performance for 75% of the price of their Intel equivalents. This applies to gaming too. Their basic mid-range desktop CPU, the X600 (where X = 1, 2 or 3 depending on generation), is also 6 cores instead of the more typical 4, so for multithreaded workloads, like compiling code, it’s more like 110% of the performance for 75% of the price. I am a thorough fan.

                                                                                                                                    Because it’s a good story… In 2011 AMD released their then-next-gen CPU architecture, which turned out to be a giant architectural dead end. AMD CPUs were just tragically bad for a long, long time after that, as they had to develop an entirely new model of processor core without going broke in the process, and they’re a much smaller company than Intel that can’t afford to just eat development costs and get on with life. Intel naturally took the opportunity to become the de-facto standard in all but the lowest-performance markets, and made money hand over fist. In mid-2017 AMD finally managed to dig themselves out of their technological hole and released the Ryzen line of CPUs, which are once again competitive. And lo and behold, since they did that, Intel CPUs have quite suddenly gotten cheaper, more powerful, and more flexible as well.

                                                                                                                              1. 2

                                                                                                                                Presumably not fixed on macOS though, due to the ancient version of Bash it ships with? Although I guess they’ve switched to zsh by default with Catalina…