1. 2

    These structured shells are, in my humble opinion, more or less useless. If your script/idea is large or complex enough that it would benefit from using nushell instead of bash/POSIX sh, then you may as well write it in a “real” language like Python or Ruby. On the other hand, for interactive usage I personally only write short snippets of sh code at a time, so I wouldn’t really benefit from using a shell with structured data.

    1. 6

      I definitely disagree here, just from all of the cases I’ve seen in my career of shell script developers having to deal with unexpected spaces in output, and the cognitive overhead of dealing with spaces and the problems they cause. Some of it comes naturally to some folks, but people rarely work in a vacuum, so someone on your team working with your shell scripts likely does have to do some extra thinking to account for spaces (or figure out where they came from when debugging).

      I had the luxury of using PowerShell at an old job, and I still miss it. As an above post mentions, it’s very difficult for something like that to exist on UNIX-likes, since the real benefit of PowerShell is being able to use WMI objects. Honestly, I think nu suffers from its inability to boil the ocean in this regard.

      1. 4

        I agree with you on this, in general. However, I’ve seen people who were awk wizards and could crank out an amazing one-liner very quickly to do something super useful, but one-off. Structured data might make that kind of shell mastery a little easier to develop. Or not, I’m not sure, just thinking aloud.

        1. 2

          If your script/idea is large or complex enough that it would benefit from using nushell instead of bash/POSIX sh, then you may as well write it in a “real” language like Python or Ruby.

          ‘Interactive language, or language with good data structures?’ is a real tradeoff when you have only POSIX shell and Python/Ruby to choose from. POSIX shell is good for interactive usage; Python and Ruby have expressive, predictable, and easy-to-manipulate data structures, namely lists and dicts. But Nushell can make this a false tradeoff, because there is no reason a language could not be both interactive-friendly (shell-like) and built on structured data. At that point ‘shell or script’ becomes a question of ‘how complex is the problem?’ instead of ‘how hard does the language make it?’

          The gains lie wherever solutions are hard to write at the command line only because of POSIX shell’s incidental complexity. Those are the solutions we could dash off at the command line if only we had a better language. Nushell wants to make that possible.

          When your shell language has both interactivity and a powerful universal data structure, you’ll be able to solve so many more problems ad hoc in your shell, without needing to bail out to a scripting language.

        1. 11

          Other commenters have said much of what I was thinking reading this page.

          However, I do agree with this page that consolidation under Cloudflare is kind of a scary prospect. From a business perspective they’ve made their offerings very attractive, and they have done an excellent job marketing. But yes, all that consolidation is very alarming from a privacy perspective. Cloudflare does a good job promoting privacy around the internet and for traffic coming into Cloudflare, but like many companies, how that data is used internally is a black box.

          The other point that I wish were levied at more than just Cloudflare is the one about the VPN market. Cloudflare has adopted the tactics of the popular VPN providers, advertising services at folks not as familiar with tech and selling them what’s almost privacy snake oil. I’d like to see that point echoed far and wide, about all VPN providers, so that their potential customers actually understand what they’re buying.

          1. 27

            It’s worth linking to A&A’s (a British ISP) response to this: https://www.aa.net.uk/etc/news/bgp-and-rpki/

            1. 16

              Our (Cloudflare’s) director of networking responded to that on Twitter: https://twitter.com/Jerome_UZ/status/1251511454403969026

              there’s a lot of nonsense in this post. First, blocking our route statically to avoid receiving inquiries from customers is a terrible approach to the problem. Secondly, using the pandemic as an excuse to do nothing, when precisely the Internet needs to be more secure than ever. And finally, saying it’s too complicated when a much larger network than them like GTT is deploying RPKI on their customers sessions as we speak. I’m baffled.

              (And a long heated debate followed that.)

              A&A’s response on the one hand made sense - they might have fewer staff available - but on the other hand RPKI isn’t new and Cloudflare has been pushing carriers towards it for over a year, and route leaks still happen.

              Personally as an A&A customer I was disappointed by their response, and even more so by their GM and the official Twitter account “liking” some very inflammatory remarks (“cloudflare are knobs” was one, I believe). Very unprofessional.

              1. 15

                Hmm… I do appreciate the point that route signing means a court can order routes to be shut down, in a way that wouldn’t have been as easy to enforce without RPKI.

                I think it’s essentially true that this is Cloudflare pushing its own solution, which may not be the best one. I admire the strategy of making a grassroots appeal, but I wonder how many people participating in it realize that it’s coming from a corporation that cannot be called a neutral party?

                I very much believe that some form of security enhancement to BGP is necessary, but I worry a lot about a trend I see towards the Internet becoming fragmented by country, and I’m not sure it’s in the best interests of humanity to build a technology that accelerates that trend. I would like to understand more about RPKI, what it implies for those concerns, and what alternatives might be possible. Something this important should be a matter of public debate; it shouldn’t just be decided by one company aggressively pushing its solution.

                1. 4

                  This has been my problem with a few other instances of corporate messaging. Cloudflare and Google are giant players that control vast swathes of the internet, and they should be looked at with some suspicion when they pose as simply supporting consumers.

                  1. 2

                    Yes. That is correct, trust needs to be earned. During the years I worked on privacy at Google, I liked to remind my colleagues of this. It’s easy to forget it when you’re inside an organization like that, and surrounded by people who share not only your background knowledge but also your biases.

                2. 9

                  While the timing might not have been the best, I would overall be on Cloudflare’s side here. When would the right time to release this be? If Cloudflare had waited another 6-12 months, I would expect A&A to release a pretty much identical response then as well. And I seriously doubt that their actual actions and the associated risks would be any different.

                  And as ISPs keep showing over and over, statements like “we do plan to implement RPKI, with caution, but have no ETA yet” all too often mean that nothing will ever happen without efforts like what Cloudflare is doing here.


                  Additionally,

                  If we simply filtered invalid routes that we get from transit it is too late and the route is blocked. This is marginally better than routing to somewhere else (some attacker) but it still means a black hole in the Internet. So we need our transit providers sending only valid routes, and if they are doing that we suddenly need to do very little.

                  is some really suspicious reasoning to me. I would say that black-hole routing the bogus networks is in every instance significantly, rather than marginally, better than just hoping that someone reports the problem so that it can be resolved manually.

                  Their transit providers should certainly be better at this, but that doesn’t remove any responsibility from the ISPs. Mistakes will always happen, which is why we need defense in depth.

                  1. 6

                    Their argument is a bit weak in my personal opinion. The reason in isolation makes sense: We want to uphold network reliability during a time when folks need internet access the most. I don’t think anyone can argue with that; we all want that!

                    However, they use it to excuse doing nothing, when they are actually in a situation where both implementing and not implementing RPKI can reduce network reliability.

                    If you DO NOT implement RPKI, you allow route leaks to continue happening and reduce the reliability of other networks and maybe yours.

                    If you DO implement RPKI, sure there is a risk that something goes wrong during the change/rollout of RPKI and network reliability suffers.

                    So, all things being equal, I would choose to implement RPKI, because at least with that option I would have greater control over whether or not the network will be reliable. Whereas if you don’t implement it, you’re just subject to everyone else’s misconfigured routers.

                    Disclosure: current Cloudflare employee/engineer, but opinions are my own, not my employer’s; I’m also not a network engineer, so hopefully my comment doesn’t contain any glaring ignorance.

                    1. 4

                      Agreed. A&A does have a point regarding Cloudflare’s argumentum in terrorem, especially the name-and-shame “strategy” via their website as well as Twitter. Personally, I think it is a dick move. This is the kind of stuff you get as a result:

                      This website shows that @VodafoneUK are still using a very old routing method called Border Gateway Protocol (BGP). Possible many other ISP’s in the UK are doing the same.

                      1. 1

                        I’m sure the team would be happy to take feedback on better wording.

                        The website is open sourced: https://github.com/cloudflare/isbgpsafeyet.com

                        1. 1

                          The website is open sourced: […]

                          There’s no open source license in sight, so no, it is not open sourced. You, like many other people, confuse and/or conflate anything being made available on GitHub with being open source. This is not the case: without an associated license (and please don’t use a viral one - we’ve got enough of that already!), the code posted there doesn’t automatically become public domain. As it stands, we can see the code, and that’s that!

                          1. 7

                            There’s no open source license in sight, so no, it is not open sourced.

                            This is probably a genuine mistake. We never make projects open until they’ve been vetted and appropriately licensed. I’ll raise that internally.

                            You, like many other people, confuse and/or conflate anything being made available on GitHub with being open source.

                            You are aggressively assuming malice or stupidity. Please don’t do that. I am quite sure this is just a mistake; nevertheless, I will ask internally.

                            1. 1

                              There’s no open source license in sight, so no, it is not open sourced.

                              This is probably a genuine mistake. We never make projects open until they’ve been vetted and appropriately licensed.

                              I don’t care either way - not everything has to be open source everywhere, e.g. a website. I was merely stating a fact - nothing else.

                              You are aggressively […]

                              Not sure why you would assume that.

                              […] assuming malice or stupidity.

                              Neither - ignorance at most. Again, this is purely a statement of fact - no more, no less. Most people know very little about open source and/or nothing about licenses. Otherwise, GitHub would not have bothered creating https://choosealicense.com/ - which itself doesn’t help the situation much.

                            2. 1

                              It’s true that there’s no license so it’s not technically open-source. That being said I think @jamesog’s overall point is still valid: they do seem to be accepting pull requests, so they may well be happy to take feedback on the wording.

                              Edit: actually, it looks like they list the license as MIT in their package.json. Although given that there’s also a CloudFlare copyright embedded in the index.html, I’m not quite sure what to make of it.

                              1. -1

                                If part of your (dis)service is to publicly name and shame ISPs, then I very much doubt it.

                      2. 2

                        While I think that this is ultimately a shit response, I’d like to see a more well-wrought criticism of the centralized signing authority that they mentioned briefly in this article. I’m trying to find more, but I’m not entirely sure of the best places to look given my relative naïveté about BGP.

                        1. 4

                          So as a short recap: IANA is the top-level organization that oversees the assignment of e.g. IP addresses. IANA delegates large IP blocks to the five Regional Internet Registries: AFRINIC, APNIC, ARIN, LACNIC, and RIPE NCC. These RIRs then further assign IP blocks to LIRs, which in most cases are the “end users” of those blocks.

                          Each of those RIRs maintains an RPKI root certificate. These root certificates are used to issue certificates to LIRs that specify which IPs and ASNs that LIR is allowed to manage routes for. Those LIR certificates are in turn used to sign Route Origin Authorisations (ROAs), statements that specify which ASNs are allowed to announce routes for the IPs that the LIR manages.
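
                          To make that concrete, here is a rough sketch in Go of the origin-validation rule those ROAs feed into (in the spirit of RFC 6811; this is my own illustration rather than any real validator, and the prefixes/ASNs are documentation values):

                          package main

                          import (
                              "fmt"
                              "net/netip"
                          )

                          // ROA is a simplified Route Origin Authorisation: the holder of a prefix
                          // authorises one ASN to originate routes for it, up to a maximum length.
                          type ROA struct {
                              Prefix    netip.Prefix
                              ASN       uint32
                              MaxLength int
                          }

                          // Validate applies RFC 6811-style origin validation: "valid" if a covering
                          // ROA matches the origin ASN and length, "invalid" if covering ROAs exist
                          // but none match, and "unknown" if no ROA covers the prefix at all.
                          func Validate(announced netip.Prefix, origin uint32, roas []ROA) string {
                              covered := false
                              for _, roa := range roas {
                                  // A ROA covers the announcement if its prefix contains it.
                                  if !roa.Prefix.Overlaps(announced) || roa.Prefix.Bits() > announced.Bits() {
                                      continue
                                  }
                                  covered = true
                                  if roa.ASN == origin && announced.Bits() <= roa.MaxLength {
                                      return "valid"
                                  }
                              }
                              if covered {
                                  return "invalid" // covered, but wrong origin ASN or too specific
                              }
                              return "unknown" // no ROA at all
                          }

                          func main() {
                              roas := []ROA{{netip.MustParsePrefix("192.0.2.0/24"), 64500, 24}}
                              fmt.Println(Validate(netip.MustParsePrefix("192.0.2.0/24"), 64500, roas))    // valid
                              fmt.Println(Validate(netip.MustParsePrefix("192.0.2.0/24"), 64501, roas))    // invalid (a hijack)
                              fmt.Println(Validate(netip.MustParsePrefix("198.51.100.0/24"), 64501, roas)) // unknown
                          }

                          The important nuance is the “unknown” state: most routes on the Internet still have no ROA, so a filtering policy can only ever drop “invalid” announcements.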

                          So their stated worry is then that the government in the country in which the RIR is based might order the RIR to revoke a LIR’s RPKI certificate.


                          This might be a valid concern, but if it is actually plausible, wouldn’t that same government already be using the same strategy to get the RIR to just revoke the IP block assignment for the LIR, and then compel the relevant ISPs to black hole route it?

                          And if anything this feels even more likely to happen, and more legally viable, since it could target a specific IP assignment, whereas revoking the RPKI certificate would invalidate the ROAs of all of the LIR’s IP blocks.

                          1. 1

                            Thanks for the explanation! That helps a ton to clear things up for me, and I see how it’s not so much a valid concern.

                        2. 1

                          I get a ‘success’ message using AAISP - did something change?

                          1. 1

                            They are explicitly dropping the Cloudflare route that is being checked.

                        1. 14

                          “when the tests all pass, you’re done”

                          Every TDD advocate I have ever met has repeated this verbatim, with the same hollow-eyed conviction.

                           Citation strongly required. If it were something being repeated verbatim, it should be all over the internet. But when I search for the phrase in quotes I get… versions of this blog post.

                          In my experience the first thing TDD teaches is that testing is a process. Every time you want to make a change you write a test first. Where’s the “done” in that?!

                          Forget “every”, and forget “verbatim”, I’d like to see one person advocating for something resembling this thesis.

                          1. 9

                             I don’t know a good way to cite this, but I have had the same experience as the author with regard to TDD-minded folks. I also want to stress that I’m in a camp that likes having thorough unit tests, but disagrees with TDD for more or less the reasons that the article specified.

                            1. 1

                              Thank you. I definitely appreciate that experience can vary. As long as you’re writing automated tests the details are much less important.

                          1. 1

                             This is honestly an underrated post. I think they do a great job cutting to the core of language wars, the root of those conversations. I’m also slightly biased in that I agree very much with how they pick up and play with languages — they like to see if a new language’s quirks or oddities jibe with how they like to program (taken from the discussed Zig post).

                            1. 9

                              Another thing.

                              x, err := strconv.ParseInt("not a number", 10, 32)
                              // Forget to check err, no warning
                              doSomething(x)
                              

                              This comes up all the time in critiques of the language. Sure, it’s possible. But, like — I’ve never had to catch this in a code review. In practice it just isn’t something that happens. I dunno. I wish Go had a Result type, too. But this class of error is, in my experience, almost purely theoretical.

                              1. 3

                                 I’ve definitely seen this in the wild, both in FOSS projects and at my company. The upside is that popular Go linters do catch this, and depending on your editor, this type of check may be enabled by default.

                                 That said, I much prefer a language that disallows cases like this outright to one that depends on linters.
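
                                 To illustrate why this slips through: a never-used err won’t even compile on its own, so in practice it’s the reuse case that bites, and that one is exactly what linters are for (I believe ineffassign and staticcheck both flag it). A sketch:

                                 package main

                                 import (
                                     "fmt"
                                     "strconv"
                                 )

                                 func main() {
                                     a, err := strconv.ParseInt("42", 10, 32)
                                     if err != nil {
                                         fmt.Println("bad input:", err)
                                         return
                                     }

                                     // err is already in scope, so this compiles even though the result
                                     // is never checked: b is silently 0 on failure. This dead assignment
                                     // is what tools like ineffassign or staticcheck report.
                                     b, err := strconv.ParseInt("not a number", 10, 32)

                                     fmt.Println(a + b) // prints 42, quietly wrong
                                 }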

                              1. 3

                                 I’m surprised that, as an SRE guide, this doesn’t mention anything about cache warming, or the trade-offs of using memcached vs something else like Redis or Couchbase.

                                1. 28

                                   Have done this, can confirm it works. If you have trouble with the political side of things, introduce the strangler as a diagnostic tool or a happy path and start shipping new features under its guise instead of under the legacy system. Arguing for two concurrent systems with initially disjoint use cases is easier than arguing for a stop-the-world cutover.

                                  1. 3

                                    Strongly seconding. I’ve seen countless person-hours wasted on trying to replace legacy systems wholesale. IT systems are like cities: They grow organically and need to change organically if you want to avoid displacing whole swaths of the population and causing more harm than good.

                                    1. 1

                                      How do you handle downstream consumers of functionality as you strangle the original piece of code? Can’t always force everyone to move to a new API route or use a new class in a library.

                                      1. 8

                                        The best case is not to force them to change anything. In my case, we did exactly as the article mentioned, and slowly transitioned to a transparent proxy. Then slowly we turned on features where API requests were handled by the new code, rather than being proxied to the old code.
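
                                         In Go terms the shape of it is something like this (a minimal sketch; the hostname and route here are invented, and the real thing obviously had more to it):

                                         package main

                                         import (
                                             "fmt"
                                             "log"
                                             "net/http"
                                             "net/http/httputil"
                                             "net/url"
                                         )

                                         func main() {
                                             // Anything we haven't migrated yet is forwarded verbatim to the
                                             // legacy system, so its consumers see no change.
                                             legacy, err := url.Parse("http://legacy.internal:8080")
                                             if err != nil {
                                                 log.Fatal(err)
                                             }
                                             proxy := httputil.NewSingleHostReverseProxy(legacy)

                                             mux := http.NewServeMux()
                                             mux.Handle("/", proxy) // default: pass everything through to the old code

                                             // Strangle one endpoint at a time: register a new handler and the
                                             // proxy simply stops seeing that route.
                                             mux.HandleFunc("/api/v1/widgets", func(w http.ResponseWriter, r *http.Request) {
                                                 w.Header().Set("Content-Type", "application/json")
                                                 fmt.Fprint(w, `{"handled_by": "new code"}`)
                                             })

                                             log.Fatal(http.ListenAndServe(":8080", mux))
                                         }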

                                        1. 4

                                          It’s obviously harder if your API has multiple consumers (some of which you don’t control). One option is to have the proxy expose the same endpoints as the legacy system, though that’s not without its own complications (especially if the technologies are particularly divergent).

                                          1. 3

                                            That’s a political problem, not a technical one. You solve it by building political power in the organisation.

                                            1. 2

                                               Only if the consumers of your API are within your organisation…

                                              1. 3

                                                 For this you need a separate gateway service that hosts the API and then forwards to either the new service or the legacy service. It’s also generally appropriate to use the legacy service as the API gateway, and abstract the route through that to the new external service.

                                                Be mindful of added latencies for remote systems.

                                                1. 2

                                                  If people are paying you for a working API, I’d struggle to imagine a viable business case for rebuilding it differently and asking customers to change.

                                                  1. 3

                                                    It happens all the time. That’s one of the reasons that REST APIs are generally versioned.

                                                    1. 1

                                                      That still doesn’t solve the problem. The customers still need to transition to the newer version.

                                                      1. 3

                                                        I think our wires are crossed. I was using multiple versions of REST APIs as a counterpoint to the idea that there’s no “viable business case for rebuilding it differently and asking customers to change.”

                                                        That change may even be driven by customers asking for a more consistent/functional API.

                                              2. 3

                                                 I’ve normally handled this by making all consumers use a service discovery system to find the initial endpoint, and then using that system to shift traffic “transparently” from the old system to the new one.

                                                This is admittedly a lot easier if your consumers are already using service discovery, otherwise you still have a transition to force. But at least it becomes a one-time cost rather than every migration.
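
                                                 The traffic-shifting part is conceptually tiny. As a sketch (not any particular service-discovery system’s API; all the names and weights here are made up):

                                                 package main

                                                 import (
                                                     "fmt"
                                                     "math/rand"
                                                 )

                                                 // Backend is one routable deployment of the service.
                                                 type Backend struct {
                                                     Name   string
                                                     Addr   string
                                                     Weight int // relative share of traffic
                                                 }

                                                 // pick chooses a backend in proportion to its weight, so moving from
                                                 // the old system to the new one is just a weight change in the
                                                 // registry, invisible to consumers resolving through discovery.
                                                 func pick(backends []Backend, r *rand.Rand) Backend {
                                                     total := 0
                                                     for _, b := range backends {
                                                         total += b.Weight
                                                     }
                                                     n := r.Intn(total)
                                                     for _, b := range backends {
                                                         if n < b.Weight {
                                                             return b
                                                         }
                                                         n -= b.Weight
                                                     }
                                                     return backends[len(backends)-1] // unreachable with positive weights
                                                 }

                                                 func main() {
                                                     backends := []Backend{
                                                         {"legacy", "10.0.0.1:8080", 90},
                                                         {"new", "10.0.0.2:8080", 10}, // start the new system at 10%
                                                     }
                                                     r := rand.New(rand.NewSource(1))
                                                     counts := map[string]int{}
                                                     for i := 0; i < 1000; i++ {
                                                         counts[pick(backends, r).Name]++
                                                     }
                                                     fmt.Println(counts) // roughly a 90/10 split
                                                 }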

                                            1. 4

                                               Did this at my previous job, and it absolutely worked well. It did cause some incidents, but it was the best test to verify that our replacement could handle the load and all the weird quirks that the old system implemented. We still had political problems getting certain consumers to buy in, but I’m happy to say that we did indeed sunset the old system and avoided having to implement features twice.

                                              1. 4

                                                Lots of good things were originally unintended or semi-intended results of technical limitations. The /usr split is still a good idea today even if those technical limitations no longer exist. It’s not a matter of people not understanding history, or of people not realising the origins of things, but that things outgrow their history.

                                                 Rob’s email is, in my opinion, quite condescending: everyone else is just ignorantly cargo-culting their filesystem hierarchy. Or perhaps not? Perhaps people kept the split because it was useful? That seems a bit more likely to me.

                                                1. 19

                                                   I’m not sure it is still useful.
                                                   In fact, some Linux distributions have moved to a “unified usr/bin” structure, where /bin, /sbin, and /usr/sbin are all simply symlinks (for compatibility) to /usr/bin. Background on the archlinux change.

                                                  1. 2

                                                    I’m not sure it is still useful.

                                                    I think there’s a meaningful distinction there, but it’s a reasonable decision to say ‘there are tradeoffs for doing this but we’re happy with them’. What I’m not happy with is the condescending ‘there was never any good reason for doing this and anyone that supports it is just a cargo culting idiot’ which is the message I felt I was getting while reading that email.

                                                     In fact, some Linux distributions have moved to a “unified usr/bin” structure, where /bin, /sbin, and /usr/sbin are all simply symlinks (for compatibility) to /usr/bin. Background on the archlinux change.

                                                    I’m not quite sure why they chose to settle on /usr/bin as the one unified location instead of /bin.

                                                    1. 14

                                                       That wasn’t the argument, though. There was a good reason for the split (they filled up their hard drive), but that became a non-issue as hardware quickly advanced. Unless you were privy to these details of the development history of this OS, of course you would copy this filesystem hierarchy in your Unix clone. Cargo-culting doesn’t make you an idiot, especially when you lack design rationale documentation and source code.

                                                      1. 2

                                                        … it’s a reasonable decision to say ‘there are tradeoffs for doing this but we’re happy with them’. What I’m not happy with is the condescending ‘there was never any good reason for doing this and anyone that supports it is just a cargo culting idiot’ which is the message I felt I was getting while reading that email.

                                                        Ah. Gotcha. That seems like a much more nuanced position, and I would tend to agree with that.

                                                        I’m not quite sure why they chose to settle on /usr/bin as the one unified location instead of /bin

                                                         I’m not sure either. My guess is that since “other stuff” was sticking around in /usr, they might as well put everything in there. /usr being a single distinct mount point that could ostensibly be set read-only may have had some bearing too, but I’m not sure.
                                                        Personally, I think I would have used it as an opportunity to redo hier entirely into something that makes more sense, but I assume that would have devolved into endless bikeshedding, so maybe that is why they chose a simpler path.

                                                        1. 3

                                                           My guess is that since “other stuff” was sticking around in /usr, they might as well put everything in there. /usr being a single distinct mount point that could ostensibly be set read-only may have had some bearing too, but I’m not sure.

                                                          That was a point further into the discussion. I can’t find the archived devwiki entry for usrmerge, but I pulled up the important parts from Allan.

                                                          Personally, I think I would have used it as an opportunity to redo hier entirely into something that makes more sense, but I assume that would have devolved into endless bikeshedding, so maybe that is why they chose a simpler path.

                                                          Seems like we did contemplate /kernel and /linker at one point in the discussion.

                                                          What convinced me of putting all this in /usr rather than on / is that I can have a separate /usr partition that is mounted read only (unless I want to do an update). If everything from /usr gets moved to the root (a.k.a hurd style) this would require many partitions. (There is apparently also benefits in allowing /usr to be shared across multiple systems, but I do not care about such a setup and I am really not sure this would work at all with Arch.)

                                                          https://lists.archlinux.org/pipermail/arch-dev-public/2012-March/022629.html

                                                           Evidently, we also had a request to symlink /bin/awk to /usr/bin/awk for distro compatibility.

                                                          This actually will result in more cross-distro compatibility as there will not longer be differences about where files are located. To pick an example, /bin/awk will exist and /usr/bin/awk will exist, so either hardcoded path will work. Note this currently happens for our gawk package with symlinks, but only after a bug report asking for us to put both paths sat in our bug tracker for years…

                                                          https://lists.archlinux.org/pipermail/arch-dev-public/2012-March/022632.html

                                                           And the bug: https://bugs.archlinux.org/task/17312

                                                    2. 18

                                                      Sorry, I can’t tell from your post - why is it still useful today? This is a serious question, I don’t recall it ever being useful to me, and I can’t think of a reason it’d be useful.

                                                      1. 2

                                                        My understanding is that on macOS, an OS upgrade can result in the contents of /bin being overwritten, while the /usr/local directory is left untouched. For that reason, the most popular package manager for macOS (Homebrew) installs packages to /usr/local.

                                                        1. 1

                                                          I think there are cases where people want / and /usr split, but I don’t know why. There are probably also arguments that the initramfs/initrd is enough of a separate system/layer for unusual setups. Don’t know.

                                                          1. 2

                                                            It’s nice having /usr mounted nodev, whereas I can’t have / mounted nodev for obvious reasons. However, if an OS implements their /dev via something like devfs in FreeBSD, this becomes a non-issue.

                                                            1. 2

                                                               Isn’t /dev its own mountpoint anyway?

                                                              1. 1

                                                                It is on FreeBSD, which is why I mentioned devfs, but idk what the situation is on Linux, Solaris and AIX these days off the top of my head. On OpenBSD it isn’t.

                                                                1. 2

                                                                   Linux has devtmpfs mounted by default in the kernel.

                                                        2. 14

                                                          The complexity this introduced has far outweighed any perceived benefit.

                                                          1. 13

                                                            I dunno, hasn’t been useful to me in the last 20 years or so. Any problem that it solves has a better solution in 2020, and probably had a better solution in 1990.

                                                            1. 6

                                                              Perhaps people kept the split because it was useful? That seems a bit more likely to me.

                                                              Do you have a counter-example where the split is still useful?

                                                              1. 3

                                                                The BSDs do have the related /usr/local split which allows you to distinguish between the base system and ports/packages, which is useful since you may want to install different versions of things included in the base system (clang and OpenSSL for example). This is not really applicable to Linux of course, since there is no ‘base system’ to make distinct from installed software.

                                                                1. 3

                                                                  Doesn’t Linux have the same /usr/local split? It’s mentioned in the article.

                                                                  1. 5

                                                                     I tend to reach for /opt/my-own-prefix-here (or per-package) myself, mainly to make it clear what it is, and to avoid any risk of clobbering anything else in /usr/local (as can happen if it’s a BSD). It’s also in the FHS, so pedants can’t tell you you’re doing it wrong.

                                                                    1. 4

                                                                      It does - this is generally used for installing software outside the remit of the package manager (global npm packages, for example), and it’s designated so by the FHS which most distributions follow (as other users have noted in this thread), but it’s less prominent since most users on Linux install very little software not managed by the package manager. It’s definitely a lot more integral in BSD-land.

                                                                      1. 3

                                                                        […] since most users on Linux install very little software not managed by the package manager

                                                                         The Linux users around me still do heaps of ./configure && make install, but I see your point when contrasted against the rise of PPAs, Docker, and nodenv/rbenv/pyenv/…

                                                                        1. 3

                                                                           Yeah, I do tons of configure-make-install stuff, sometimes for things that are also in the distro, and this split of /usr/local is sometimes useful because it means that if I attempt a system update my custom stuff isn’t necessarily blasted.

                                                                          But the split between /bin and /usr/bin is meh.

                                                                    2. 1

                                                                      That sounds sensible. Seems like there could be a command that tells you the difference. Then, a versioning scheme that handles the rest. For example, OpenVMS had file versioning.

                                                                1. 4

                                                                  I really enjoyed the article! However, it looks like 2 images in the errors section (errors2.png and errors3.png) are 404’ing

                                                                  1. 2

                                                                    Thanks for the catch, fixed!

                                                                  1. 1

                                                                     I feel like a good VCS tool is now as scared as containerization and orchestration tools were in the past. It’s waiting for some good open source initiative that’s viable and scalable to become a “trend”. Then it stops being an enterprise closed-source tool and gets adopted widely.

                                                                    1. 1

                                                                      Scared?

                                                                      1. 1

                                                                        I think they mean “sacred”

                                                                        1. 1

                                                                          maybe a typo for “scarce”

                                                                      1. 17

                                                                         Maybe I’m an outlier, but I’m incredibly tired of this garbage of trying to transplant exploitative for-profit concepts into Linux distributions.

                                                                        I’m using free software because I neither want this, nor need this.

                                                                        (That Ubuntu is slowly trying to force people into using Snap is the straw that breaks the camel’s back. I’d rather move to another distribution than deal with this shit.)

                                                                        1. 6

                                                                          I don’t understand what you mean by “exploitative for-profit concepts”. Could you expand on that?

                                                                          1. 7

                                                                             If you look at the various “app stores”, they are pretty much full of this dystopian hellhole of apps that want full control over the device and your personal data (either immediately, or at some later point once you’ve put effort into using them) in order to sell your information and display ads.

                                                                            The app store vendors’ half-hearted attempt of cutting back on this surveillance capitalism with these feel-good permission info screens doesn’t solve the primary issue:

                                                                            These are untrustworthy applications built by untrustworthy companies – I wouldn’t want to run them even if I had the world’s best sandbox running on my device.

                                                                             Packages in Linux distributions simply don’t have this issue: the expectation is that the developers are trustworthy and have their users’ interests in mind.

                                                                             Sandboxing is simply not a useful tool for encouraging ethical behavior - it makes things users want either hard or impossible, while not being powerful enough to actually curtail harmful behavior where it exists.

                                                                            1. 3

                                                                               Packages in Linux distributions simply don’t have this issue: the expectation is that the developers are trustworthy and have their users’ interests in mind.

                                                                              And even in some cases where developers do send analytics data or put ads in applications (e.g. as Zeal used to do), some distributions patch the upstream code to remove this code.

                                                                              However, I still think there is a place for application bundles. Some distributions just move slowly or people are bound to LTS versions. Desktop applications tend to get stale in such a setup and it’s nice to be able to install applications out-of-band. macOS had a long history of application bundles before code signing, sandboxing, and app stores. However, when an application is not provided by a distribution, you probably want signed bundles (just as distributions GPG-sign package metadata), and possibly sandboxing (though I don’t think sandboxing is good enough yet to allow arbitrary untrusted applications).
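
                                                                               The signing half of that is cheap, at least; a toy sketch of the idea in Go (real systems sign hashes and metadata rather than whole bundles, and key distribution is the actual hard part):

                                                                               package main

                                                                               import (
                                                                                   "crypto/ed25519"
                                                                                   "crypto/rand"
                                                                                   "fmt"
                                                                               )

                                                                               func main() {
                                                                                   // The vendor publishes pub out of band (like a distro's keyring);
                                                                                   // bundle stands in for the application image being shipped.
                                                                                   pub, priv, err := ed25519.GenerateKey(rand.Reader)
                                                                                   if err != nil {
                                                                                       panic(err)
                                                                                   }

                                                                                   bundle := []byte("contents of the application bundle")
                                                                                   sig := ed25519.Sign(priv, bundle)

                                                                                   // The package tool refuses to install anything whose signature
                                                                                   // doesn't verify against the vendor's published key.
                                                                                   fmt.Println("signature ok:", ed25519.Verify(pub, bundle, sig))
                                                                               }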

                                                                              1. 4

                                                                                I think the approach of projects offering their own repositories with current versions of their software is a pretty good idea.

                                                                                1. 3

                                                                                  It is a lot of repeated work for each distribution.

                                                                                  1. 1

                                                                                    But if projects offer their own repositories, aren’t we back to one of the problems you cited with app stores?

                                                                                    These are untrustworthy applications built by untrustworthy companies – I wouldn’t want to run them even if I had the world’s best sandbox running on my device.

                                                                                     Packages in Linux distributions simply don’t have this issue: the expectation is that the developers are trustworthy and have their users’ interests in mind.

                                                                                    From what I understand, there are a few problems with using distro repos for such software:

                                                                                    1. They’re necessarily slower than the developer’s release cadence, and the developer of the upstream software has limited control over providing new versions to package maintainers
                                                                                    2. They’re beholden to the library versions provided by other package maintainers. If a dependency of your software isn’t up to date, that means that either the package maintainer, or you, need to adapt to the API of that older version for just that platform.
                                                                                    3. Maintainer-provided/applied patches make your software on that platform deviate from upstream or in the worst but rarest case, introduce new bugs specific to that platform.

                                                                                     I think providing software bundles for some applications does address these problems (while admittedly introducing others), and they are useful in the case of fast-moving desktop applications. I don’t think they obviate package repos at all, nor should distros move toward obviating package repos. They solve different problems.

                                                                                2. 1

                                                                                  Is snap/flatpak’s goal really to make running untrusted software less dangerous? I was under the impression that their only aim was to make distribution easier and to mitigate security issues the software itself may have.

                                                                              2. 4

                                                                                I’m using free software because I neither want this, nor need this.

                                                                                This is why we have community-driven distributions. In Debian, for example, maintainers are required to vet packages and also disable any minor privacy invading function.

                                                                                I also add external sandboxing to daemons as part of the packaging. Sadly some upstream developers strongly reject the idea of sandboxing.

                                                                                1. 2

                                                                                  Yeah, I’ll probably end up running Debian in the mid-term; considering that synaptic is a hard requirement for me.

                                                                                   I probably need to patch the font rendering though; it’s pretty annoying that Ubuntu still seems to be the only distribution that gets that right out of the box.

                                                                                  In the long-term I’m looking for an operating system that is a meaningful upgrade to the current repository system (like distri) that has a Rust-only userland (which enables me to actually understand and modify the code I’m running, unlike C/C++).

                                                                                  1. 0

                                                                                    I also add external sandboxing to daemons as part of the packaging. Sadly some upstream developers strongly reject the idea of sandboxing.

                                                                                     I understand this impulse, personally. It has nothing to do with the intent of the packager; it’s all about the odd fallout that the sandbox system in question can lead to. Chasing bugs from a user running your software in some sandbox you’ve never tried would be really aggravating, in part because the packager isn’t who the users complain to - it’s the author.

                                                                                  2. 2

                                                                                       If you are interested in packaging stuff and not control, I think AppImage is a better system anyway. It’s basically just an ELF file with a squashfs appended.

                                                                                    1. 8

                                                                                      I think normal package repositories have been perfectly fine for the last 2 decades and are mostly fine today.

                                                                                      For the future, I’m looking forward to something like distri.

                                                                                      1. 2

                                                                                        If I’m developing software I don’t want to have to generate packages for N different distributions.

                                                                                        Distributing my software in some way that doesn’t depend on the OS is appealing (though I prefer the Go approach of just statically compiling).
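
                                                                                         (For reference, a fully static build of a pure-Go program is usually just a matter of disabling cgo; the package path here is hypothetical:)

                                                                                         CGO_ENABLED=0 go build -trimpath -ldflags="-s -w" ./cmd/mytool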

                                                                                        1. 3

                                                                                          If I’m developing software I don’t want to have to generate packages for N different distributions.

                                                                                          So don’t. That’s the distributions’ problem, not yours.

                                                                                          1. 3

                                                                                            Distros don’t accept your software unless it’s already popular. Most Linux users prefer to install software from the distribution, so how does it get popular?

                                                                                            The simple answers are:

                                                                                            • Be part of an existing social connection so that demand can build up even before your software becomes widely-used, or, better yet, become a hard dependency of an already popular package. This is a “good old boy network,” and we should not stand for that.

                                                                                             • Package it yourself as distro packages. This requires a bunch of work to support even the popular distros, and leaves the unpopular ones out in the cold.

                                                                                             • Package it as a static binary or a hand-rolled installer. This has zero support for the security mitigations that motivated dynamic linking in the first place, though this, along with source packages annotated with dependency metadata in a standard format, seems to be the go-to for new languages.

                                                                                            • Distribute source code. The biggest downside of this approach is that there doesn’t exist a standard way to define the dependencies of a source tarball, so installation is a pain.

                                                                                            1. 1

                                                                                               That last one depends greatly on your development environment. requirements.txt works reasonably well for Python, and Go has a standard way to define its dependencies too.
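
                                                                                               For example, a minimal go.mod (module path and dependency invented for illustration):

                                                                                               module example.com/mytool

                                                                                               go 1.21

                                                                                               require golang.org/x/sys v0.15.0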

                                                                                               But there is a middle-ground answer to your question: while distros won’t accept and distribute your software till it’s popular, you can add PPAs, COPRs, or BSD ports on your own. These let you define dependencies in the ways that make sense for what you are targeting, and don’t require official approval.

                                                                                              1. 1

                                                                                                 But there is a middle-ground answer to your question: while distros won’t accept and distribute your software till it’s popular, you can add PPAs, COPRs, or BSD ports on your own.

                                                                                                I literally listed that as option 2.

                                                                                                1. 1

                                                                                                  Sorry. I didn’t read it that way. Obviously a mis-read.

                                                                                              2. 1

                                                                                                Distros don’t accept your software unless it’s already popular.

                                                                                                 None of this is generally true. Most distributions will accept any package for inclusion as long as it fits within their project guidelines and, much more importantly, someone has committed to maintaining the package going forward.

                                                                                                This has nothing to do with popularity, it has to do with whether or not any volunteer OS package maintainer feels it’s worth their time to support. If you can’t get an existing maintainer to take an interest in your software, then you might have to step up to be a maintainer yourself. If that sounds like too much work to be worth it, then your software is probably not as valuable as you think it is.

                                                                                              3. 2

                                                                                                But I may also have the competing desire to make my software easy to use and install.

                                                                                                Compiling my code statically solves this problem without me having to faff around with a bunch of different distros.

                                                                                                Now I mostly use Julia and there’s a nice project (Artifacts system, Yggdrasil) where more and more non-julia dependencies are being (mostly) statically compiled and made available as packages that the Julia package manager can deal with. In my opinion, this is a much better way of distributing software than the traditional linux package managers.

                                                                                                Nix and distri are similar to the Julia way of doing things.

                                                                                                Though the Julia system is also cross platform to Windows and BSD in many cases, where of course Nix and distri are not.

                                                                                                1. 3

                                                                                                  Compiling statically just replaces one problem with another: now there’s bloat and extra bundled libraries that won’t get security patches when their system copies get patched by the distribution.

                                                                                                  Honestly, just be content to relinquish control. Let distributions do whatever they want. And if they don’t want to package your software, maybe your software just isn’t that important, or maybe it’s not important to them to have the latest release packaged every day.

                                                                                                  1. 2

                                                                                                    If some other schmuck wants to distribute my software some other way, then more power to them, but it’s not unreasonable to want to be able to easily run my software on a bunch of different linuxes without faffing around.

                                                                                                    And I’m not convinced by the arguments about bloat and security (and I am not the only one: https://ro-che.info/articles/2016-09-09-static-binaries-scientific-computing)

                                                                                                  2. 2

                                                                                                    Yggdrasil

                                                                                                   You briefly made me think “Holy shit, that’s still around?!”

                                                                                                    The Yggdrasil I remember: https://en.wikipedia.org/wiki/Yggdrasil_Linux/GNU/X

                                                                                                    1. 2

                                                                                                      I hope (and kinda expect) that this new Yggdrasil will last a bit longer. The reproducible research effort aims for decades of reproducibility ;)

                                                                                                  3. 1

                                                                                                     It is your problem when distribution quality issues get sent to you, or when the distribution doesn’t update in a timely manner…

                                                                                                    1. 2

                                                                                                      My attitude when this happens with Octave is still the same: sorry, not my problem. Please talk to your distro managers instead.

                                                                                                      We have pretty good packagers, though.

                                                                                                  4. 2

                                                                                                    If I’m developing software I don’t want to have to generate packages for N different distributions.

                                                                                                    Isn’t that pretty much a solved issue with OBS and similar services? And if people want to use your software, they will start packaging it either way, so there isn’t that much effort required from our side.

                                                                                                    I prefer the Go approach of just statically compiling

                                                                                                    My approach is to avoid software written in Go (as well as PHP and Node.js) in general due to software quality/mindset issues. :-)

                                                                                                    1. 2

                                                                                                      OBS

                                                                                                      I’m not familiar with this, but it looks like I would still have to find out the names of all my dependencies on each distro and so on: https://openbuildservice.org/help/manuals/obs-user-guide/cha.obs.package_formats.html

                                                                                                      I’m not interested in doing that. But if I don’t do that, my software is typically harder to install on $linux than on windows.

                                                                                                      I just want to ship a binary and I don’t want to get any grief about Debian’s ancient libc or whatever. I want to give you an x86 binary and have it run on any linux with a compatible kernel ABI. It’s very annoying wanting to use some decade old software and having to fight to get things to run.

                                                                                                      avoid Go

                                                                                                      You do you. I also prefer to statically compile C and Rust code.

                                                                                                      1. 1

                                                                                                        And if people want to use your software, they will start packaging it either way

                                                                                                        This seems to be based on the assumption that your users are all programmers.

                                                                                                        1. 1

                                                                                                          My approach is too avoid software written in Go (as well as PHP and Node.js) in general due to software quality/mindset issues. :-)

                                                                                                          You’re missing out on quite a lot of high-quality software due to those mindset issues.

                                                                                                          1. 1

                                                                                                            Can’t think of any.

                                                                                                1. 1

                                                                                                  Interestingly, after the OpenBSD port migrates to making syscalls through libc, there will be more supported platforms calling syscalls through libc than through golang’s custom assembly stubs (AIX, Solaris, macOS and Windows already call through libc).

                                                                                                  There’s also another cost not covered in the article. Golang has its own calling conventions that it uses internally, so to make calls into C, you have to do some argument juggling in its assembly before actually making the function call.

                                                                                                  In addition, to create the machine threads, you now have to call into the system’s threading library rather than invoking the primitives directly. That raises some interesting questions about machine-thread stack size, which I haven’t found answers to, since each platform that implements this seems to pull a magic number for how large to make each thread’s stack.
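
                                                                                                  To make the libc-wrapper vs. raw-trap distinction concrete, here’s a minimal sketch using Python’s ctypes (standing in for what Go does in assembly), assuming Linux/x86-64, where write(2) is syscall number 1:

                                                                                                    import ctypes, ctypes.util

                                                                                                    # Load libc (on Linux, find_library("c") resolves to libc.so.6).
                                                                                                    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

                                                                                                    # Path 1: the libc wrapper -- what the OpenBSD port is moving to, and
                                                                                                    # what the AIX, Solaris, macOS and Windows ports already do.
                                                                                                    msg = b"via the libc wrapper\n"
                                                                                                    libc.write(1, msg, len(msg))

                                                                                                    # Path 2: a raw trap via syscall(2) -- morally what golang's custom
                                                                                                    # assembly stubs do on Linux, bypassing the wrapper entirely.
                                                                                                    SYS_write = 1  # Linux/x86-64 syscall number; an assumption of this sketch
                                                                                                    buf = b"raw syscall\n"
                                                                                                    libc.syscall(SYS_write, 1, buf, len(buf))

                                                                                                  The same trade shows up for the machine threads: pthread_create picks a default stack size for you, while a raw clone(2) makes the caller supply the stack, magic number and all.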

                                                                                                  1. 30

                                                                                                    I neither see nor agree with the assertion that blamelessness has anything to do with shallow analysis, not in theory and not in my experience in SRE.

                                                                                                    The best post-mortems do talk about culture problems, technical debt, or anything else that contributes to the problem that the post-mortem documents.

                                                                                                    For example:

                                                                                                    13:10 Engineer on Backend team introduces bug. Since this is a component in a newer programming language, nobody else on the team is able to provide meaningful code review.
                                                                                                    13:20 Engineer bypasses failing CI run, since CI has been failing on this component for months without any engineer investigating the failure.
                                                                                                    13:40 Canary deployed to prod.
                                                                                                    13:41 Canary promoted, since this component lacks meaningful metrics; metrics have been deprioritized in favor of meeting the feature launch next quarter.
                                                                                                    

                                                                                                    In my experience, I’ve seen timelines like that, which specifically raise the types of systemic issues that need to be discussed at the post-mortem.

                                                                                                    As far as naming folks that are consistently causing problems? I also think that’s a horrible practice. Post-mortems for major issues at a large company are often highly visible. Someone, even someone who has been careless, or bad at their job, shouldn’t be publicly shamed. Their manager is responsible for ensuring that they’re meeting the expectations for their position, and should take action separate from the post-mortem process. You don’t need people from other teams ostracizing them, or refusing to work with them, or bringing up firing them during a post-mortem.

                                                                                                    1. 4

                                                                                                      As far as naming folks that are consistently causing problems? I also think that’s a horrible practice. Post-mortems for major issues at a large company are often highly visible. Someone, even someone who has been careless, or bad at their job, shouldn’t be publicly shamed. Their manager is responsible for ensuring that they’re meeting the expectations for their position, and should take action separate from the post-mortem process. You don’t need people from other teams ostracizing them, or refusing to work with them, or bringing up firing them during a post-mortem.

                                                                                                      Totally agree. The post-mortem should be a tool that enables the team, as a social organism, to identify and act on systemic problems. Naming individuals has no place in that scheme, in my book.

                                                                                                      If someone is under-performing, that is a conversation to be had between the individual contributor, their manager, and maybe those who are in a position to help measure that person’s performance.

                                                                                                    1. 15

                                                                                                      Ultimately, it seems like the lesson here boils down to “Hard and fast rules about data and programming are bad. Fully understand your specific domain, and make the best decision for your requirements,” with time as the example. I think that’s a great lesson, honestly, and one that more developers need to learn, rather than parroting “rules” from blog articles.

                                                                                                      One thing, though: I don’t see as much value in OffsetDateTime, even for the use cases provided. Without the TimeZone, a sudden upcoming tzdata change means the date your users see will be invalid.
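
                                                                                                      To make that concrete, here’s a sketch in Python (stdlib zoneinfo, so 3.9+, and a tzdata newer than 2019). Brazil abolishing DST in April 2019 is a real case where fixed offsets stored for later that year went stale:

                                                                                                        from datetime import datetime, timedelta, timezone
                                                                                                        from zoneinfo import ZoneInfo

                                                                                                        # A 09:00 meeting in São Paulo, stored before the rule change. The fixed
                                                                                                        # offset (-02:00, the old DST offset) is frozen forever; the zoned value
                                                                                                        # re-resolves against whatever tzdata says today.
                                                                                                        as_offset = datetime(2019, 11, 15, 9, 0, tzinfo=timezone(timedelta(hours=-2)))
                                                                                                        as_zoned = datetime(2019, 11, 15, 9, 0, tzinfo=ZoneInfo("America/Sao_Paulo"))

                                                                                                        print(as_zoned.utcoffset() == as_offset.utcoffset())  # False: the offset is stale
                                                                                                        print(as_zoned - as_offset)                           # 1:00:00 -- an hour apart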

                                                                                                      1. 1

                                                                                                        I didn’t know about functools.singledispatch before! This is a really cool application of Python’s type annotations!
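
                                                                                                        For anyone else who hadn’t seen it, a minimal sketch (describe is a made-up example; since Python 3.7, register can infer the dispatch type from the annotation):

                                                                                                          from functools import singledispatch

                                                                                                          @singledispatch
                                                                                                          def describe(value):  # fallback for unregistered types
                                                                                                              return f"something else: {value!r}"

                                                                                                          @describe.register  # dispatch type is read from the annotation
                                                                                                          def _(value: int):
                                                                                                              return f"an int: {value}"

                                                                                                          @describe.register
                                                                                                          def _(value: list):
                                                                                                              return f"a list of {len(value)} items"

                                                                                                          print(describe(3))       # an int: 3
                                                                                                          print(describe([1, 2]))  # a list of 2 items
                                                                                                          print(describe("hi"))    # something else: 'hi'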

                                                                                                        1. 22

                                                                                                          I don’t see why I should trust my ISP more than Cloudflare. My ISP is using DNS domain blocking for country-wide censorship.

                                                                                                          My concern with Cloudflare is centralization. But fortunately, DoH has started gaining some traction, and more providers are starting to offer it.

                                                                                                          1. 13

                                                                                                            If you’re American, there’s probably no reason to trust your ISP more than cloudflare. However, most people are not American. For us, it’s kind of shitty if browsers start sending every query to an American megacorp which follows American privacy laws and is in a country where we have no legal power.

                                                                                                            I can expect my ISP to follow Norwegian privacy laws, which I’d bet are way better than American. If my ISP does something illegal according to our privacy laws, I have way more power to stop them than if an American company does the same.

                                                                                                            I know this will all be user configurable, but if it gets enabled by default, and it defaults to cloudflare, most people won’t change that. Most people won’t know DoH is a thing, much less the nuance regarding why they may or may not want to change the setting.

                                                                                                            1. 7

                                                                                                              Is Cloudflare going to be the default for the rest of the world? Mozilla is only rolling it out to the US, as this article even mentions. I haven’t seen any announcement what the plan is for the rest of the world, if there is any.

                                                                                                              1. 1

                                                                                                                If Mozilla is only rolling it out to the US, and never ends up rolling it out to the rest of the world, then yeah, my points don’t matter. However, I haven’t heard any statement that they won’t ever make DoH with cloudflare the default for the rest of the world, just that they’re not doing it yet.

                                                                                                                1. 2

                                                                                                                  They have talked about having different regional defaults, and including more preset options in the dropdown for configuring DoH. This hasn’t happened yet, though.

                                                                                                              2. 10

                                                                                                                I’m not an American either. In my country (Greece) authorities can order ISPs to block certain domains on a DNS level, without any due process. And ISPs comply. DoH is the most user-friendly way for many people to access these websites.

                                                                                                                1. 2

                                                                                                                  If cloudflare operates in that country, they would still have to comply with local laws, no? Just like all other services have to.

                                                                                                                  1. 1

                                                                                                                    Nope. Cloudflare is not considered an ISP in that country, so it doesn’t need to comply. The same applies to any other public DNS service (e.g. Google).

                                                                                                                  2. 1

                                                                                                                    If the blocking is only at the DNS level, it’s not much of a blocking method. It would have to redirect traffic based on IP addresses as well, which kind of defeats the purpose of the whole DoH endeavor.

                                                                                                                    1. 2

                                                                                                                      It’s a silly blocking method indeed, but it’s effective for the majority of the users who don’t know how to switch DNS settings on their systems. IP blocking is also ineffective, because IPs often change ownership.

                                                                                                              1. 4

                                                                                                                Perhaps I’m missing something obvious, but… why can’t we have DoH without Cloudflare? What’s to stop me from running my own DoH server?

                                                                                                                1. 5

                                                                                                                  Absolutely nothing. Cloudflare is a red herring.

                                                                                                                  1. 1

                                                                                                                    It’s true that the DoH configuration interface in the Nightly and Beta releases of Firefox already presents the option to use another provider.

                                                                                                                    But it’s … nearly disingenuous to consider this issue in the abstract…

                                                                                                                    May I rephrase the question?

                                                                                                                    Why won’t we have DoH without Cloudflare?

                                                                                                                    • Because only one in 10 thousand Firefox users ever change a single configuration setting, much less such an esoteric one.
                                                                                                                    • Because it’s not in Cloudflare’s interest, and they are powerful.
                                                                                                                    • Because it’s not in the interest of that subset of Mozillians who intend to run it more like a corporation.
                                                                                                                    • I am not aware of Firefox-compatible DoH providers. (Not saying there aren’t any, but.. see first bullet!)

                                                                                                                    Now for this one:

                                                                                                                    What’s to stop me from running my own DoH server?

                                                                                                                    The D in DNS stands for Distributed. You may operate a DNS server. By design, your DNS server will respond to queries when it knows the answer; it will speak to peer DNS servers to learn new answers; and it will speak to superior DNS servers when peers don’t know the answer. (Or something like that! I’m no DNS expert.)

                                                                                                                    There are many gratis, libre, proprietary, and commercial DNS server software options available for every OS and hardware combination that could conceivably become connected to a network.

                                                                                                                    But Firefox DoH is operated by Cloudflare. Full stop. It was designed to be operated by a single large entity from day one.

                                                                                                                    The authors’ presentation of their arguments may be flawed, but I believe that their position is correct in principle: A Firefox default-on DoH via Cloudflare will be bad for the Internet.

                                                                                                                    1. 5

                                                                                                                      The D in DNS stands for Distributed.

                                                                                                                      No, it stands for Domain. Domain Name System.

                                                                                                                      I agree with you, but please don’t promote my side of the argument using bad information.

                                                                                                                      1. 2

                                                                                                                        Hm, that’s weird. Uh… Hm. ?:)

                                                                                                                      2. 3

                                                                                                                        I am not aware of Firefox-compatible DoH providers. (Not saying there aren’t any, but.. see first bullet!)

                                                                                                                        There are several: https://github.com/curl/curl/wiki/DNS-over-HTTPS#publicly-available-servers

                                                                                                                        The D in DNS stands for Distributed. You may operate a DNS server. By design, your DNS server will respond to queries when it knows the answer; it will speak to peer DNS servers to learn new answers; and it will speak to superior DNS servers when peers don’t know the answer. (Or something like that! I’m no DNS expert.)

                                                                                                                        Your understanding of DNS seems flawed here (aside from it standing for Domain Name System instead of Distributed… Name System?). I’d do more reading. DoH itself isn’t evil, and I think that’s one of the unfortunate side effects of this whole controversy: the technology being blamed for the defaults Firefox is setting.

                                                                                                                        You can have a resolver in your network that speaks plain DNS to clients and DoH to the outside world if you want. Due to the distributed nature of DNS, DoH servers themselves can be recursive resolvers, asking other DNS servers for answers they don’t know.
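
                                                                                                                        For instance, nothing stops a client from talking to whatever endpoint it likes. Here’s a sketch using the JSON flavour of DoH that Cloudflare and Google expose alongside the RFC 8484 binary wire format (the endpoint URL is just one of many you could substitute):

                                                                                                                          import json
                                                                                                                          import urllib.request

                                                                                                                          req = urllib.request.Request(
                                                                                                                              "https://cloudflare-dns.com/dns-query?name=example.com&type=A",
                                                                                                                              headers={"accept": "application/dns-json"},
                                                                                                                          )
                                                                                                                          with urllib.request.urlopen(req) as resp:
                                                                                                                              answer = json.load(resp)

                                                                                                                          for record in answer.get("Answer", []):
                                                                                                                              print(record["name"], record["data"])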

                                                                                                                        The distributed nature of DNS doesn’t end with DoH. The problem (as I see it, at least) is that Firefox’s default centralizes client DNS traffic on Cloudflare. If Firefox shipped a list of DoH servers that weren’t connected to Cloudflare, and randomly distributed users across them, then I think this would be a very different conversation.

                                                                                                                        But Firefox DoH is operated by Cloudflare. Full stop. It was designed to be operated by a single large entity from day one.

                                                                                                                        I even agree with not wanting the default to be cloudflare, but this whole answer seems pretty ill-informed.

                                                                                                                        1. 3

                                                                                                                          But Firefox DoH is operated by Cloudflare. Full stop. It was designed to be operated by a single large entity from day one.

                                                                                                                          That’s not true. You can configure Firefox to use any DoH server out there. It “just” uses Cloudflare’s DoH servers by default. You can change this in the settings, and you can modify the behaviour even further via about:config; see the prefs below.

                                                                                                                          And you can run your own DoH server. I’m doing this right now. You still have the freedom to use your own services for this, just like you did with plain DNS. For some odd reason the whole DoH discussion has become tied up with the myth that it is somehow forcibly connected to Cloudflare. It isn’t. It’s just a bad default. We should discuss the bad default and DoH separately, but this has become a tangled-up and overly emotional mess for most people.
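
                                                                                                                          For reference, these are the prefs involved (meanings as of current Firefox releases; the URI below is a placeholder for wherever your own server lives):

                                                                                                                            network.trr.mode = 3  # 0: off (default), 2: DoH first with plain-DNS fallback,
                                                                                                                                                  # 3: DoH only, 5: explicitly disabled
                                                                                                                            network.trr.uri = "https://doh.example.net/dns-query"  # hypothetical endpoint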

                                                                                                                          1. 2

                                                                                                                            I am not aware of Firefox-compatible DoH providers.

                                                                                                                            I believe Google is also running a DoH service, presumably for the future benefit of Chrome.

                                                                                                                            Not gonna configure my Firefox to point there, though.