1. 1

    I just hope that progressive web apps put an end to Electron, at least, so I don’t have to worry about 5 different Chromium versions (and their unmergeable memory pages).

    In terms of “native feel” I don’t think such a thing will ever be attainable as long as every platform has a special snowflake toolkit or a plethora of alternatives.

    1. 12

      I never expected to see something like a billion laughs attack in a CSS engine. That’s actually quite creative, but I can see how it could be overlooked. I didn’t even know about CSS variables until last year, or that calc() on different types causes deferred evaluation.

      But this is ultimately why I use things like https://extensions.gnome.org/extension/120/system-monitor/, so I can see if a program is misbehaving and about to OOM my computer. A few weeks ago at work I caught Evolution leaking memory fast and was able to SIGABRT it before it OOM’d my work computer.
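
      For the CLI-inclined, a rough equivalent of that workflow might look something like this (the process name and polling interval are just examples):

        watch -n 5 'ps -C evolution -o pid,rss,comm'   # keep an eye on the RSS column
        kill -ABRT "$(pgrep -o evolution)"             # SIGABRT the oldest matching process before the OOM killer gets involved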

      1. 41

        Wow, that’s pretty terrible.

        On the other hand, I can’t help but feel sorry for Dominic; we all make mistakes, and this public shaming is pretty harsh.

        I guess we should sometimes take some time off to read the license before using a library:

        THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

        (F)OSS is not a consumer good.

        1. 11

          I agree that shaming people is toxic and unproductive. No one wants to be shamed and no one is perfect.

          But I see another dimension to the negative responses Dominic has received. Non-hierarchical, self-governing communities like open source software are organized by social norms. Social norms work through peer pressure - community members conform to the norms of the community not because they are compelled to by law but because it would cost them standing in the community not to. This isn’t inherently good. Some norms are toxic and self-policing via peer pressure can lead to shaming. What I see in some of the critical comments addressed to Dominic is an attempt to establish a clear social norm about what to do when you are ready to abandon a package. The norm is desirable because it increases the general level of trust. Even if the landscape is generally untrustworthy, you can have some confidence that people aren’t handing their packages off to strangers because it’s the norm not to do that. The desire for some norm here, whatever it is in the end, is reasonable.

          Ending the discussion with “don’t worry about it Dominic, everyone makes mistakes, and anyways you’re not liable for it” signals to everyone that they’re not responsible for the consequences of what they do. In a strictly legal sense, that might be true. Even then, I’m skeptical that the warranty clause would cover negligence in the distribution of the software rather than the software itself. But in either case, don’t we want a community where people do feel responsible for the actions they take and are open to receiving feedback when an action they’ve taken has a bad result? This dialogue can occur without shaming, without targeting anyone personally, and can be part of the same give-and-take process that produces the software itself.

          1. 6

            Blaming people for any security issue is toxic, no matter what happened. In any organization with paid people, where you should expect better, the most important rule of a post-mortem is to remain blameless. Blame doesn’t get anyone anywhere and doesn’t get remotely close to the actual root cause. Instead of asking why Dominic gave away a critical package, people should be asking why some random maintainer was able to give away a critical package.

            Ending the discussion with “don’t worry about it Dominic, everyone makes mistakes, and anyways you’re not liable for it” signals to everyone that they’re not responsible for the consequences of what they do.

            By putting the blame on Dominic, people are avoiding responsibility. The main issue is that many core libraries in the JavaScript ecosystem still depend on external, single-file, non-core, likely unmaintained libraries. The people who should take responsibility are the ones who chose to add a weak single point of failure by depending on event-stream.

            1. 2

              It depends what you mean by blame. If you mean assigning moral responsibility, especially as a pretext for shaming them, then I agree it’s toxic. I think I was clear that I agree this shouldn’t happen. But if blame means asserting a causal relationship between Dominic’s actions and this result, it’s hard to argue that there isn’t such a relationship. The attack was only possible because Dominic transferred the package. This doesn’t mean he’s a bad person or that he should be “in trouble” or that anything negative should happen to him as a consequence. A healthy social norm would be to avoid transferring packages to un-credentialed strangers when you’re ready to abandon the package because we’ve seen this opens an attack vector. Then what’s happened here is instructive and everyone benefits from the experience. And yes, ideally these dilemmas are prohibited by the system. Until that is the case, it helps to have norms around the best way to act.

              1. 1

                I understand you don’t condone the attacks and shaming going around. However, even if you agree that the blaming is toxic and that building some social norm around it is better than nothing, I believe that even hinting that it was somehow Dominic’s fault is a net negative.

                The attack was only possible because Dominic transferred the package.

                This is exactly what I’m objecting to. By looking at an individual and their actions, you scope the issue at that level. The attack was a dependency takeover. It is possible to do that in so many ways, especially for packages such as Dominic’s. This time it was a case of social engineering; next time it might just as well be credential hijacking, phishing, or a maintainer going rogue.

                A healthy social norm would be to avoid transferring packages to un-credentialed strangers when you’re ready to abandon the package because we’ve seen this opens an attack vector.

                I would say pushing this rhetoric is actually unhealthy and only leads people to rely on those social norms and use them as an excuse to disown their accountability. It would be much healthier to set expectations right and to learn proper risk assessment around dependency management.

                Then what’s happened here is instructive and everyone benefits from the experience. And yes, ideally these dilemmas are prohibited by the system. Until that is the case, it helps to have norms around the best way to act.

                The same issue has come up so many times in the past few years, especially in the NPM ecosystem, that we should be well past “learning from the experience”; I believe it’s time the relevant actors actually moved toward a solution.

          2. 17

            I’ve done a similar thing before. After leaving the Elm community, I offered to transfer most of my repos over to the elm-community organisation. They accepted the most popular ones, but not elm-ast (and maybe one or two others). A few months later I received an e-mail from @wende asking if he could take over, so I took a look at his profile and the stuff he’d done in the past, and happily gave him commit access, thinking users would continue getting updates and improvements without any hassle. Now, @wende turns out to be a great guy, and I’m pretty sure he hasn’t backdoored anyone using elm-ast, but I find it hilarious that people somehow think that maintainers should be responsible for vetting whoever they hand control of their projects over to, or that they could even do a good job of it, OR that it would even make any sort of difference. Instead of trusting one random dude on the internet (me), you’re now trusting another.

            Don’t implicitly trust random people on the internet and run their code. Vet the code you run and keep your dependency tree small.
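
            As a very rough sketch of what that vetting can start with in an npm project (the package name is a placeholder):

              npm ls                               # see how big the tree you're already trusting is
              npm view some-package dependencies   # what a candidate dependency would pull in
              npm view some-package maintainers    # and who actually publishes it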

            1. 25

              Vet the code you run

              Or trust well-known, security-oriented distributions.

              keep your dependency tree small

              Yes, and stay away from environment, frameworks, languages that force dependency fragmentation on you.

              1. 4

                Or trust well-known, security-oriented distributions.

                That too! :D

                1. 3

                  and stay away from […] frameworks

                  I wouldn’t say that so absolutely for the web. I suspect that things would likely go a lot more haywire if people started handling raw HTTP in Python or Ruby or what have you. There’s a lot of stuff going on under the hood, such as content security policies, CSRF protection, and the like. If you’re not actively, consciously aware of all of that, a web framework will probably still end up providing a net security benefit.

                  1. 5

                    Please don’t quote words without context:

                    […] that force dependency fragmentation on you

                    Frameworks and libraries with few dependencies and a good security track record are not the problem. (If anything, they are beneficial)

                    1. 2

                      I interpreted “Yes, and stay away from environment, frameworks, languages that force dependency fragmentation on you.” as (my misunderstandings in brackets) “Yes, and stay away from [(a) integrated development] environments, [(b)] frameworks, [(c)] languages that force dependency fragmentation on you.” with a and b being separate from the “that” in c.

                      I apologize for the misunderstanding caused.

                  2. 2

                    Isn’t it the case that reputable, security-focused distributions acquire such status and the continuity thereof by performing extensive vetting of maintainers?

                    The responsible alternative being abandoning the project and letting the community fork it if they want to.

                    1. 1

                      Or trust well-known, security-oriented distributions.

                      Then how do you deal with things like this: “The reason the login form is delivered as web content is to increase development speed and agility”?

                      1. 2

                        As a distribution? Open a bug upstream, offer a patch, and sometimes patch the packaged version.

                        1. 1

                          That’s a good idea in general but sometimes the bug is introduced downstream.

                  3. 9

                    Most proprietary software also comes with pretty much the same warranty disclaimer. For example, see section 7c of the macOS EULA:

                    https://images.apple.com/legal/sla/docs/macosx107.pdf

                    I mean, have we held accountable Apple or Google or Microsoft or Facebook in any substantial ways for their security flaws?

                    1. 4

                      For many other products, accountability is enforced by law, and it overrides any EULA. And that is tied to profit in the broad sense: sales, access to valuable customer data, and so on.

                      Software companies have gotten away with zero responsibility, and this only encourages bad software.

                      1. 1

                        And how have we enforced that by law for those companies, regardless of what those EULAs have said? When macOS allowed anyone to log in as root, what were the legal consequences it faced?

                        1. 3

                          other products

                          e.g. selling cars without safety belts, electrical appliances without grounding…

                    2. 2

                      It is a security disaster given how easy it is for js stuff to hijack cookies and sessions.

                      1. 1

                        It really isn’t if a well thought out CORS policy is defined.

                    1. 27

                      I think people talking about inspecting the source before installing dependencies are being unreasonable to some degree.

                      1. The malicious code was present only in the minified version of the code. I suppose the red flag that tipped off the reporter was the lack of history/popularity of the repository in question, but it doesn’t have to be like that
                      2. It can be released to npm in a way that’s not evident from casually browsing the GitHub repo
                      3. There isn’t even any guarantee that the code on npm matches what’s on GitHub at all

                      Meaning the ways to be safe are:

                      1. Hand-inspect the code in your node_modules directory (including, or especially, anything that may be minified); or
                      2. Don’t use npm at all.

                      I don’t see these people (nor myself) doing either. From which it follows:

                      Any company desiring to buy into the so-called “modern” front end development (be it for productivity, performance or hiring purposes) does so by making itself vulnerable to attacks such as this.

                      I don’t know if that’s a reasonable price to pay to use, say, React, but it sure isn’t reasonable to me to pay that to use Node (versus, say, Golang, which can reasonably be used to build the same kinds of apps using little more than the standard library).

                      1. 21

                        The malicious code was present only in the minified version of the code. I suppose the red flag that tipped off the reporter was the lack of history/popularity of the repository in question, but it doesn’t have to be like that

                        One more reason for reproducible builds… minified JS should be treated like compiled code, and automated mechanisms should check that it matches the unminified source…
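
                        A sketch of what such an automated check could look like, assuming the package keeps its source on a public git host and builds its minified file with an npm script (names, URL, and paths are placeholders):

                          npm pack some-package                       # fetch the exact tarball the registry serves
                          tar -xzf some-package-*.tgz                 # contents land in ./package/
                          git clone https://example.com/some-package.git source
                          cd source && npm ci && npm run build        # rebuild the minified artifact from the tagged source
                          diff ../package/index.min.js index.min.js   # any difference between published and rebuilt output is a red flag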

                        1. 6

                          This, a thousand times this. I can’t comprehend the reasoning that goes into committing derived code into source control. It’s a pain to remember to update it every time you commit, it’s hard to verify that the code matches the original source, and it just pollutes the history. Diffing it is mostly impossible too.

                          1. 3

                            I think the reasoning is to avoid build dependency. For some time, it was a usual practice to include Autoconf-derived configure script in release artifacts, so that users can avoid installing Autoconf.

                            1. 1

                              Yeah, that’s annoying too (and a lot of projects still do it even though it’s not really good practice), but at least configure scripts don’t tend to/need to change with every single code change like these minified files do.

                              1. 1

                                generated autoconf configure scripts are pretty easy to read, I can say there were times I preferred them over the m4 source.

                          2. 11

                            It would be really nice if the package repositories (npm/pypi/rubygems/etc) did something:

                            • try to automatically detect obfuscated code
                            • stop letting maintainers upload packages from their dev machines, make sure any compilation happens on a public CI environment from a known git tag (this would also encourage non-compiled packages, i.e. just direct snapshots of git tags)
                            • have some popularity threshold for packages beyond which manual review from a trusted group of reviewers is required for each new release
                             • (also, why not require the git tags to be GPG-signed for these popular packages; a small verification sketch follows after this list)
                            • maybe rethink the whole package handover thing, maybe only allowing “deprecation in favor of [a fork]” (i.e. requiring every update to be manual) is good
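
                             For the signed-tag idea, the registry-side check could be as small as something like this (tag name and remote are placeholders):

                               git fetch origin --tags
                               git tag -v v1.2.3   # fails unless the tag carries a valid GPG signature from a trusted key
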
                            1. 3

                               I wouldn’t even trust checking the node_modules output either, as package installation can execute arbitrary code (node-gyp, other node bindings producing code)

                              1. 4

                                I agree with you!

                                 People seem to like to hit on npm, but I don’t see how the core issue is different than, say, PyPI, Cargo, or Go (other than the issues you raised). I personally take easy and simple dependency management over fragmented C/C++ package management, because most of my projects are not security critical anyway, or my threat model doesn’t include targeted code injection in my stack.

                                 I find it annoying when people look at these issues and some of the fault is put on the maintainers. Maybe the issue is not the compromise of one of your application’s thousands of dependencies, but the fact that the risk management for your wallet application relies on thousands of unvetted dependencies…

                                Meaning the ways to be safe are:

                                 I guess a first step would be to gather a bunch of useful and common repositories and ensure that they and all their dependencies are well vetted and signed by the maintainers for each release, that no new dependency can be pulled in without proper review, and that those dependencies follow the same process. Documenting and enforcing such a process for a subset of widely used dependencies would let me trust a few organizations and avoid code-reviewing every dependency I pull into my own projects. I guess most distributions’ core repositories have a similar process, like Arch’s maintained packages vs. the AUR.

                                1. 8

                                   PyPI absolutely has the same potential issues, though in practice I think the dependency trees for popular projects are way smaller than what you get in the Node ecosystem, so you’re much less likely to be hit by a transitive vulnerability. To me this is one of the advantages of a fairly comprehensive standard library, and a relatively small number (compared to Node, at least) of popular, high-quality third-party libraries that get a lot of eyeballs.

                                  1. 11

                                    On top of that, a lot of Python code is deployed to production by system engineers. Often it’s vetted, built, tested and baked in by distributions - and the same is true for other non-web languages.

                                     JavaScript, on the other hand, is more often deployed by the upstream developer and thrown at web browsers straight away, without any third-party review.

                                    1. 3

                                       Definitely! But it somehow just happened to end up this way. It would be nice to look at the social side of why Python ended up like this while nothing prevented it from ending up like NPM. Maybe some key aspect of the tooling drives the trend one way or the other, or it might just be the community (Python being much older, and its tooling having seen a lot of changes over the years).

                                       I would look forward to someone doing a graph analysis of a few package repositories across languages, finding some way to rate them, and putting a risk score on packages. How many dependencies do they have, and how deep do they go? How many of them are maintained by an external maintainer? Sounds like I found myself a new weekend project…

                                      1. 12

                                        Python has a decent class library. Good libraries that have general use migrate back into that class library, in some fashion or another. Thus, third party libraries don’t have to have long dependency chains to do anything.

                                        What NPM forgot was that this was the fundamental idea that made package management useful. This stretches back to the early days of Java, at least, and I’m sure you can find other similar examples. By having a rich class library which already provides most of what you need, you’re simply going to layer on dependencies to adapt that framework to your specific business needs. Java, .NET, Ruby, Python- they all have that going for them. JavaScript simply does not. So half the Internet unwittingly depends on leftpad because a dependency of a dependency of a dependency needed it, and there wasn’t anything in the core library which could do it.

                                        1. 1

                                           Maybe some key aspect of the tooling drives the trend one way or the other, or it might just be the community (Python being much older, and its tooling having seen a lot of changes over the years).

                                          I think this is a big part of it — Python’s tooling is generally less capable than the Node ecosystem’s.

                                          To this day Pip doesn’t have a dependency resolver, so the result of installing a dependency tree with conflicts at the transitive dependency level isn’t an error, but an arbitrary version getting installed. You can only have a single version of a Python module installed, too, because they are global state. Contrast with how npm has historically (still does?) install multiple versions of a package, effectively vendoring each dependency’s dependency tree, transitively.

                                          Additionally, publishing Python packages has long been messy and fraught. Today there is decent documentation but before that you were forced to rely on rumors, hearsay, and Stack Overflow. Putting anything nontrivial on PyPI (e.g., a C extension module) is asking for a long tail of support requests as it fails to install in odd environments.

                                          I think the end result was a culture that values larger distributions to amortize packaging overhead. For example, the popular Django web framework long had a no-dependencies policy (if dependencies were required, they were vendored — e.g. simplejson before it entered the standard library).

                                          Regardless of the reasons for it, I think that this is healthier than the Node culture of tiny dependencies with single contributors. More goes into distributing software than just coding and testing — documentation, support, evangelism, and legal legwork are all important — but tiny libraries have such limited scope that they’ll never grow a social ecosystem which can persist in the long term (of course, even Django has trouble with that).

                                          1. 1

                                            You can only have a single version of a Python module installed, too, because they are global state.

                                             That’s actually a pretty good point, I think. I have fought against Pip a few times due to conflicting versions. It does benefit libraries with fewer dependencies.

                                      2. 1

                                         While I’m not generally a fan of it, I think the minimal version selection that’s planned for the future Go package manager would make this attack spread much more slowly.

                                    1. 2

                                      This is why npm packages need some form of namespacing, something like username/package-name. This would prevent package-name squatting, or situations like this where the dev hands over maintainership of the package to someone else.

                                      1. 1

                                        Using URLs directly would help with this too.

                                      1. 1

                                        I feel having /usr/local inside /usr is a bit weird (alternatives: /local or /opt). Dealing with many makefiles hardcoded to /usr/local would be a lot of effort, however.

                                        It’s the problem of file-tidying. On a home machine we can re-arrange our personal files as much as we want. On an HTTP server we have to resist any sort of tidying, no matter how trivial, because this breaks URLs.

                                        1. 1

                                          Given that tons of programs still just emit configuration and other content into a personal $HOME/.$some_program, I’m not even sure about the former part of your last statement. At least with URLs you can configure 301 redirects.

                                        1. 10

                                          My only real complaint about getting rid of the /usr split is that /usr still exists as something other than a symlink to /. The name /usr is basically meaningless. If you aren’t going to support /usr on a separate partition, why bother having /usr at all? Just shallow your hierarchy:

                                          • /bin
                                          • /dev
                                          • /local
                                          • /share
                                          • and so forth

                                          Keep /usr as a symlink to / for compatibility. I’ve suggested this to folks a few times over the years and haven’t gotten much response. By symlinking /bin and friends, you keep /usr as a naming convention, even though as Landley’s post points out, it is one that has lost its relevance.
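
                                           A toy sketch of that layout, built in a scratch directory rather than on a live system, just to show the path resolution:

                                             mkdir -p /tmp/newroot/bin /tmp/newroot/share /tmp/newroot/local
                                             ln -s . /tmp/newroot/usr   # /usr is just a symlink back to the root itself
                                             ls /tmp/newroot/usr/bin    # resolves to /tmp/newroot/bin, so old /usr/bin paths keep working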

                                          1. 2

                                            Having a separate /usr leads to a system of layered configuration. You have the distributor-originating artifacts in /usr, you have your configs in /etc, and local databases/pods/containers/caches/spools/etc. in /var.

                                            Also see systemd-tmpfiles(8).

                                            Coincidentally something I recall from Haiku.

                                            1. 3

                                               From a ’60s/’70s UNIX perspective, I can understand this line of reasoning (limited disk space, more primitive filesystems). But we can just put every application/configuration in its own flat filesystem namespace to avoid name clashes. It also makes it possible to have multiple versions of applications or configuration files available at the same time.

                                               (E.g. Nix and Guix follow this approach, and to a lesser extent macOS application bundles.)

                                              1. 3

                                                 The central problem with just refactoring the filesystem structure into something that makes more intuitive sense from a high-level view is that it makes life a living hell for package maintainers. They have to grind through the process of not just making sure binaries and libraries get where they should be, but also making sure that the software will actually work correctly with things moved around. Some software (especially stuff using autoconf) can deal with this pretty okay, but other stuff is more nightmarish. The closer your system is to “legacy”, the more likely $random_third-party_app is to more or less work out of the box.

                                            2. 1

                                               Having circular symlinks can create some interesting tarballs, or make the linker recurse through them a few times.

                                            1. 8

                                              I’m not seeing how this is a backdoor, unless it isn’t possible to control access to this feature for users with different portal access roles.

                                              1. 4

                                                Instead of dealing with the weirdness of -e, -u, and pipefail, you could just… you know, check the failure conditions yourself.

                                                • For pipefail: bash sets an array called PIPESTATUS with the exit codes of every command in the most recent foreground pipeline.
                                                • For set -e: many programs can return differing exit codes, and knowing whether one actually failed will likely require checking the value of $? (or just using if statements) anyway.
                                                • For set -u: even in the article posted, you’re still treating unset variables as zero-length strings; the only difference is that you’ve added a new expansion to make it not fail, for dubious reasons.
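
                                                A minimal sketch of doing that by hand (the log path is a placeholder):

                                                  grep -c ERROR /var/log/app.log | tee error_count.txt
                                                  if [ "${PIPESTATUS[0]}" -gt 1 ]; then   # grep exits 2 on a real error; 1 only means "no matches"
                                                      echo "grep itself failed" >&2
                                                      exit 1
                                                  fi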

                                                Also see http://mywiki.wooledge.org/BashFAQ/105, specifically:

                                                “These rules are extremely convoluted, and they still fail to catch even some remarkably simple cases. Even worse, the rules change from one Bash version to another, as Bash attempts to track the extremely slippery POSIX definition of this “feature”. When a SubShell is involved, it gets worse still – the behavior changes depending on whether Bash is invoked in POSIX mode. Another wiki has a page that covers this in more detail. Be sure to check the caveats.”

                                                set -e is an anti-feature.

                                                1. 1

                                                  It’s fun to see the creative uses of the various WIN32 rules and APIs, around file and directory names, for malicious intent.

                                                  I’m surprised UAC would take the path into account at all, and, even more confusingly, that dependent code is not put through this same verification, or at least some kind of labelling or code signing for DLLs.

                                                  1. 4

                                                    A Túrin Turambar turún’ ambartanen. Another shell that isn’t shell; shells that aren’t shells aren’t worth using, because shell’s value is its ubiquity. Still, interesting ideas.

                                                    This brought to you with no small apology to Tolkien.

                                                    1. 13

                                                      I’ve used the Fish shell daily for 3-4 years and find it very much worth using, even though it isn’t POSIX compatible. I think there’s great value in alternative shells, even if you’re limited in copy/pasting shell snippets.

                                                      1. 12

                                                        So it really depends on the nature of your work. If you’re an individual contributor, NEVER have to do devops type work or actually operate a production service, you can absolutely roll this way and enjoy your highly customized awesomely powerful alternative shell experience.

                                                        However, if you’re like me, and work in environments where being able to execute standardized runbooks is absolutely critical to getting the job done, running anything but bash is buying yourself a fairly steady diet of thankless, grinding, and ultimately pointless pain.

                                                        I’ve thought about running an alternative shell at home on my systems that are totally unconnected with work, but the cognitive dissonance of using anything other than bash keeps me from going that way even though I’d love to be using Xonsh by the amazing Anthony Scopatz :)

                                                        1. 5

                                                          I’d definitely say so – I’d probably use something else if I were an IC – and ICs should! ICs should be in the habit of trying lots of things, even stuff they don’t necessarily like.

                                                          I’m a big proponent of Design for Manufacturing, an idea I borrow from the widgety world of making actual things. The idea, as defined by an MFE I know, is that one should build things such that: “The design lends itself to being easily reproduced identically in a reliable, cost-effective manner.”

                                                          For a delivery-ops guy like me, working in the tightly regulated, safety-critical world of healthcare, having reproducible, reliable architecture that’s cheap to replace and repair is critical. Adding a new shell doesn’t move the needle towards reproducibility, so its value has to come from reliability or cheapness, and once you add the fact that most architectures are not totally homogeneous, the cost goes up even more.

                                                          That’s the hill new shells have to climb, they have to get over ‘sh is just easier to use, it’s already there.’ That’s a very big hill.

                                                          1. 2

                                                            “The design lends itself to being easily reproduced identically in a reliable, cost-effective manner.” “That’s the hill new shells have to climb,”

                                                            Or, like with the similar problem posed by C compilers, they just provide a method to extract to whatever the legacy shell is for widespread, standard usage.

                                                            EDIT: Just read comment by @ac which suggested same thing. He beat me to it. :)

                                                            1. 2

                                                              I’ve pondered transpilers a bit before. For me personally, I’ve learned enough shell that one doesn’t really provide much benefit, but I like that idea a lot more than a distinct, non-compatible shell.

                                                              I very much prefer a two-way transpiler. Let me make my old code into new code, so I can run the new code everywhere and convert my existing stuff to the new thing, and let me go back to old code for the machines where I can’t afford to figure out how to get new thing working. That’s a really big ask though.

                                                              The way we solve this at $work is basically by writing lots of very small amounts of shell, orchestrated by another tool (ansible and Ansible Tower, in our case). This covers about 90% of the infrastructure, with the remaining bits being so old and crufty (and so resource-poor from an organization perspective) that bugs are often tolerated rather than fixed.

                                                          2. 4

                                                            The counter to alternative shells sounds more like a reason to develop and use alternative shells that coexist with a standard shell. Maybe even with some state synchronized so your playbooks don’t cause effects the preferred shell can’t see and vice versa. I think a shell like newlisp supporting a powerful language with metaprogramming sounds way better than bash. Likewise, one that supports automated checking that it’s working correctly in isolation and/or how it uses the environment. Also maybe something on isolation for security, high availability, or extraction to C for optimization.

                                                             There’s lots of possibilities. Needing to use stuff in a standard shell shouldn’t stop them. So, they should replace the standard shell somehow in a way that still lets it be used. I’m a GUI guy who’s been away from shell scripting for a long time. So, I can’t say if people can do this easily, already are, or whatever. I’m sure experts here can weigh in on that.

                                                          3. 7

                                                           I work primarily in devops/application architecture, and having alternative shells is just a big ol’ no. Tbh, I’m trying to wean myself off bash 4 and onto pure sh, because I have to deal with some pretty old machines for some of our legacy products. Alternative shells are cool, but don’t scale well. They also present increased attack surface for potential hackers to privesc through.

                                                            I’m also an odd case, I think shell is a pretty okay language, wart-y, sure, but not as bad as people make it out to be. It’s nice having a tool that I can rely on being everywhere.

                                                            1. 13

                                                              I work primarily in devops/application architecture

                                                              Alternative shells are cool, but don’t scale well.

                                                              Non-ubiquitous shells are a little harder to scale, but the cost should be controllable. It depends on what kind of devops you are doing:

                                                              • If you are dealing with a limited number of machines (machines that you probably pick names yourself), you can simply install Elvish on each of those machines. The website offers static binaries ready to download, and Elvish is packaged in a lot of Linux distributions. It is going to be a very small part of the process of provisioning a new machine.

                                                              • If you are managing some kind of cluster, then you should already be doing most devops work via some kind of cluster management system (e.g. Kubernetes), instead of ssh’ing directly into the cluster nodes. Most of your job involves calling into some API of the cluster manager, from your local workstation. In this case, the number of Elvish instances you need to install is one: that on your workstation.

                                                              • If you are running some script in a cluster, then again, your cluster management system should already have a way of pulling in external dependencies - for instance, a Python installation to run Python apps. Elvish has static binaries, which is the easiest kind of external dependency to deal with.

                                                              Of course, these are ideal scenarios - maybe you are managing a cluster but it is painful to teach whatever cluster management system to pull in just a single static binary, or you are managing some old machines with an obscure CPU architecture that Elvish doesn’t even cross-compile to. However, those difficulties are by no means absolute, and when the benefit of using Elvish (or any other alternative shell) far outweighs the overheads, large-scale adoption is possible.

                                                              Remember that bash – or every shell other than the original bourne shell - also started out as an “alternative shell” and it still hasn’t reached 100% adoption, but that doesn’t prevent people from using it on their workstation, servers, or whatever computer they work with.

                                                              1. 4

                                                                All good points. I operate on a couple different architectures at various scales (all relatively small, Xe3 or so). Most of the shell I write is traditional, POSIX-only bourne shell, and that’s simply because it’s everywhere without any issue. I could certainly install fish or whatever, or even standardized version of bash, but it’s an added dependency that only provides moderate convenience at the cost of another ansible script to maintain, and increased attack surface.

                                                                 The other issue is that the ~1000 servers or so have very little in common with each other. About 300 of them support one application; that’s the biggest chunk: 4 environments of ~75 machines each, all more or less identical.

                                                                 The other 700 are a mishmash of versions of different distros, different OSes, different everything; that’s where /bin/sh comes in handy. These are all legacy applications; none of them get any money for new work, they’re all in total maintenance mode, and any time I spend on them is basically time lost from the business perspective. I definitely don’t want to knock alternative shells as a tool for an individual contributor, but it’s ultimately a much simpler problem for me to say “I’m just going to write sh” than “I’m going to install elvish across a gazillion arches and hope I don’t break anything”.

                                                                We drive most cross-cutting work with ansible (that Xe3 is all vms, basically – not quite all, but like 98%), bash really comes in as a tool for debugging more than managing/maintaining. If there is an issue across the infra – say like meltdown/spectre, and I want to see what hosts are vulnerable, it’s really fast for me (and I have to emphasize – for me – I’ve been writing shell for a lot of years, so that tweaks things a lot) to whip up a shell script that’ll send a ping to Prometheus with a 1 or 0 as to whether it’s vulnerable, deploy that across the infra with ansible and set a cronjob to run it. If I wanted to do that with elvish or w/e, I’d need to get that installed on that heterogenous architecture, most of which my boss looks at as ‘why isn’t Joe working on something that makes us money.’
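
                                                                 For the curious, that kind of check could be sketched roughly like this, assuming a Prometheus Pushgateway is reachable at pushgateway.example:9091 (the hostname and metric name are made up):

                                                                   #!/bin/sh
                                                                   # report 1 if the kernel says this host is vulnerable to Meltdown, else 0
                                                                   v=0
                                                                   grep -qi vulnerable /sys/devices/system/cpu/vulnerabilities/meltdown 2>/dev/null && v=1
                                                                   printf 'meltdown_vulnerable %s\n' "$v" |
                                                                     curl -s --data-binary @- "http://pushgateway.example:9091/metrics/job/meltdown_check/instance/$(hostname)"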

                                                                I definitely wouldn’t mind a better sh becoming the norm, and I don’t want to knock elvish, but from my perspective, that ship has sailed till it ports, sh is ubiquitous, bash is functionally ubiquitous, trying to get other stuff working is just a time sink. In 10 years, if elvish or fish or whatever is the most common thing, I’ll probably use that.

                                                                1. 1

                                                                   The other 700 are a mishmash of versions of different distros, different OSes, different everything; that’s where /bin/sh comes in handy.

                                                                  So, essentially, whatever alternative is built needs to use cross-platform design or techniques to run on about anything. Maybe using cross-platform libraries that facilitate that. That or extraction in my other comment should address this problem, eh?

                                                                   As far as debugging goes, alternative shells would bring both a cost and potential benefits. The cost is that unfamiliarity might make you less productive, since it doesn’t leverage your long experience with the existing shell. The potential benefits are features that make debugging a lot easier; they could even outweigh the cost, depending on how much time they save you. The learning cost might also be minimized if the new shell is based on a language you already know, or maybe actually uses it, or a subset of it that’s still better than bash.

                                                              2. 6

                                                                My only real beef with bash is its array syntax. Other than that, it’s pretty amazing actually, especially as compared with pre bash Bourne Shells.

                                                                1. 4

                                                                  Would you use a better language that compiles to sh?

                                                                  1. 1

                                                                   Eh, maybe? Depends on your definition of “better.” I don’t think bash or pure sh are all that bad, but I’ve also been using them for a very long time as a daily driver (I write more shell scripts than virtually anything else; ansible is maybe a close second), so I’m definitely not the target audience.

                                                                   I could see that if I wanted to do a bunch of math, I might need to use something else, but if I’m going to use something else, I’m probably jumping to a whole other language. Shell is in a weird place: if the complexity is high enough to need a transpiler, it’s probably high enough to warrant writing something else and installing dependencies.

                                                                    I could see a transpiler being interesting for raising that ceiling, but I don’t know how much value it’d bring.

                                                              3. 10

                                                                Could not disagree more. POSIX shell is unpleasant to work with and crufty; my shell scripting went through the roof when I realized that: nearly every script I write is designed to be launched by myself; shebangs are a thing; therefore, the specific language that an executable file is written in is very, very often immaterial. I write all my shell scripts in es and I use them everywhere. Almost nothing in my system cares because they’re executable files with the path to their interpreter baked in.

                                                                I am really pleased to see alternative non-POSIX shells popping up. In my experience and I suspect the experience of many, the bulk of the sort of scripting that can make someone’s everyday usage smoother need not look anything like bash.

                                                                1. 5

                                                                  Truth; limiting yourself to POSIX sh is a sure way to write terribly verbose and slow scripts. I’d rather put everything into a “POSIX awk” that generates shell code for eval when necessary than ever be forced to write semi-complex pure sh scripts.

                                                                 bash is a godsend for so many reasons, one of the biggest being its process substitution feature.
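
                                                                 For example, the kind of thing process substitution buys you (file and unit names are placeholders):

                                                                   diff <(sort old_hosts.txt) <(sort new_hosts.txt)   # compare two command outputs without temp files
                                                                   while read -r line; do echo "got: $line"; done < <(journalctl -u myapp --since today)   # feed a loop without putting it in a subshell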

                                                                  1. 1

                                                                    For my part, I agree – I try to generally write “Mostly sh compatible bash” – defaulting to sh-compatible stuff until performance or maintainability warrant using the other thing. Most of the time this works.

                                                                    The other mitigation is that I write lots of very small scripts and really push the worse-is-better / lots of small tools approach. Lots of the scripting pain can be mitigated by progressively combining small scripts that abstract over all the details and just do a simple, logical thing.

                                                                    One of the other things we do to mitigate the slowness problem is to design for asynchrony – almost all of the scripts I write are not time-sensitive and run as crons or ats or whatever. We kick ‘em out to the servers and wait the X hours/days/whatever for them to all phone home w/ data about what they did, work on other stuff in the meantime. It really makes it more comfortable to be sh compatible if you can just build things in a way such that you don’t care if it takes a long time.

                                                                    All that said, most of my job has been “How do we get rid of the pile of ancient servers over there and get our asses to a disposable infrastructure?”, where I can just expect bash 4+ to be available and not have to worry about sh compatibility.

                                                                  2. 1

                                                                    A fair cop. I work on a pretty heterogeneous group of machines, and /bin/sh works consistently on all of them: AIX, IRIX, BSD, Linux, all basically the same.

                                                                    Despite our (perfectly reasonable) disagreement, I am also generally happy to see new shells pop up. I think they have a nearly impossible task of ousting sh and bash, but it’s still nice to see people playing in my backyard.

                                                                  3. 6

                                                                    I don’t think you can disqualify a shell just because it’s not POSIX (or “the same”, or whatever your definition of “shell” is). The shell is a tool, and like all tools, its value depends on the nature of your work and how you decide to use it.

                                                                    I’ve been using Elvish for more than a year now. I don’t directly manage large numbers of systems by logging into them, but I do interact quite a bit with services through their APIs. Elvish’s native support for complex data structures, and the built-in ability to convert to/from JSON, makes it extremely easy to interact with them, and has allowed me to build very powerful toolkits for doing my work. Having a proper programming language in the shell is very handy for me.

                                                                    Also, Elvish’s interactive experience is very customizable and friendly. Not much that you cannot do with bash or zsh, but much cleaner/easier to set up.

                                                                    1. 4

                                                                     I’ve replied a bunch elsewhere; I don’t necessarily mean to disqualify the work. It definitely looks interesting, and for an individual contributor somewhere who doesn’t have to manage tools at scale or interact with tools that don’t speak the JSON-y API it offers, it makes sense; beyond that is where it starts to get tricky.

                                                                      I said elsewhere in thread, “That’s [the ubiquity of sh-alikes] the hill new shells have to climb, they have to get over ‘sh is just easier to use, it’s already there.’ That’s a very big hill.”

                                                                      I’d be much more interested if elvish was a superset of sh or bash. I think that part of the reason bash managed to work was that sh was embedded underneath, it was a drop-in replacement. If you’re a guy who, like me, uses a lot of shell to interact with systems, adding new features to that set is valuable, removing old ones is devastating. I’m really disqualifying (as much as I am) on that ground, not just that it’s not POSIX, but that it is less-than-POSIX with the same functionality. That keeps it out of my realm.

                                                                     Now this may be biased, but I think I’m the target audience in terms of adoption: you convince a guy like me that your shell is worth it, and I’m going to go drop it on my big pile of servers wherever I’m working. Convincing ICs who deal with their one machine gets you enough adoption to be a curiosity; convince a DevOps/Delivery guy and you get shoved out to every new machine I make, and suddenly you’ve got a lot of footprint that someone is going to have to deal with long after I’m gone and onto Johnny Appleshelling the thing at whatever poor schmuck hires me next.

                                                                     Here’s what I’d really like to see: a shell that offers some of these JSON features as an alternative pipe (maybe ||| is the operator, IDK), adds some better number-crunching support, and maybe some OO features, all while remaining a superset of POSIX. That’d make the cost of using it very low, which would make it easy to justify adding to my VM-building scripts. And it’d make the value very high: not having to dip out to another tool to do some basic math would be fucking sweet, and having OO features so I could operate on real “shell objects” and JSON to do easier IO would be really nice as well. Ultimately, though, you’re fighting uphill against a lot of adoption and a lot of known solutions to these problems (there are patterns for writing shell to be OOish, there’s awk for output processing; these are things which are unpleasant to learn, but once you do, the problem JSON solves drops to a pretty low priority).

                                                                      I’m really not trying to dismiss the work. Fixing POSIX shell is good work, it’s just not likely to be successful by replacing. Improving (like bash did) is a much better route, IMO.

                                                                    2. 2

                                                                      I’d say you’re half right. You’ll always need to use sh, or maybe bash, they’re unlikely to disappear anytime soon. However, why limit yourself to just sh when you’re working on your local machine? You could even take it a step further and ask why are you using curl locally when you could use something like HTTPie instead? Or any of the other “alternative tools” that make things easier, but are hard to justify installing everywhere. Just because a tool is ubiquitous does not mean it’s actually good, it just means that it’s good enough.

                                                                     I personally enjoy using Elvish on my local machines; it makes me faster and more efficient at getting things done. When I have to log into a remote system, though, I’m forced to use Bash. It’s fine and totally functional, but there are a lot of stupid things that I hate. For the most ridiculous and trivial example, bash doesn’t actually save its history until the user logs out, unlike Elvish (or even IPython), which saves it after each input. While it’s a really minor thing, it’s really, really, really useful when you’re testing low-level hardware things that might force an unexpected reboot or power cycle on a server.
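
                                                                     (For what it’s worth, bash can be coaxed into roughly the same behaviour; a sketch of the usual ~/.bashrc workaround:)

                                                                       shopt -s histappend           # append to the history file instead of overwriting it on exit
                                                                       PROMPT_COMMAND='history -a'   # flush this session's new history entries after every prompt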

                                                                      I can’t fault you if you want to stay POSIX, that’s a personal choice, but I don’t think it’s fair to write off something new just because there’s something old that works. With that mindset we’d still be smashing two rocks together and painting on cave walls.

                                                                    1. 2

                                                                       What makes you think Apple, or any of the apps you’d use on Apple, are any better or worse? Because Apple offers platitudes about how their business model isn’t “ad based”? It’s utterly ridiculous; any company that isn’t building user advertising profiles right now is losing. Apple may not have as mature a data collection process or team, but they are doing it.

                                                                      1. 1

                                                                        Did you miss the part about the Digital Content Next paper?

                                                                        1. 2

                                                                           I did, and the experimental setup is not very well described: literally a device running idle with either Chrome or Safari in focus, or “consumption of Google services.” Most of the paper talks about Google specifically, with little to no investigation into Apple’s data collection affiliations. They also only filtered traffic to identified Google and Apple service endpoints (I’m assuming the ones enumerated in the appendix), and all the paper showed was that Apple was in many ways doing similar data collection, but at much lower volumes.

                                                                           Apple is an advertising company; they depend on understanding consumer wants and needs to sell new iPhones and other high-margin consumer goods.

                                                                      1. 2

                                                                         I’m not sure I understand what this guy was going on about. How is it problematic to buy used garbage off eBay with public knowledge? Is the author implying governments go on eBay to buy used voting machines for elections? That would be problematic, I suppose.

                                                                        Overall, making a voting machine truly secure from tampering would be more expensive than just running a voting webapp. Most of the real security problems come from the fact that votes have to be anonymous as well.

                                                                        1. 1

                                                                          Have you even read the article?

                                                                            It said that he bought the voting machines at two different times and they were basically the same, meaning that someone could have bought an “old” one, written a script to change all the votes, and gotten away with it in the next election.

                                                                          1. 0
                                                                            1. I did.
                                                                            2. “Basically the same” does not mean “the same”.
                                                                              3. Why should a company retool their factory and change up their supply chain for a product that is adequate for the job? There is no reason these things would ever need a hardware refresh unless the current components become obsolete and expensive to source.
                                                                            4. I don’t see any proof that local municipalities are buying used voting machines, only dubious implications.

                                                                              EDIT: even if they were buying used voting machines, I imagine there are trustworthy vendors and they aren’t buying from eBay.

                                                                        1. 5

                                                                            Probably better off using shellcheck than -n: https://www.shellcheck.net/
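
                                                                            For comparison, a quick sketch (assuming -n here means the shell’s own no-exec syntax check, with a hypothetical script.sh):

                                                                                # -n only parses the script and reports syntax errors
                                                                                sh -n script.sh
                                                                                # shellcheck also flags quoting bugs, unportable constructs, unused variables, etc.
                                                                                shellcheck script.sh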

                                                                          1. 4

                                                                            I occasionally use ed when I need to delete a specific line of a file by number. Most commonly, I suppose, when I need to clear out a specific entry in ~/.ssh/known_hosts.
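
                                                                              Something like this, say, to drop line 42 (a made-up line number) from known_hosts:

                                                                                  # delete line 42 and write the file back out; -s keeps ed quiet
                                                                                  printf '42d\nw\nq\n' | ed -s ~/.ssh/known_hosts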

                                                                            The other time I reach for it, on occasion, is when I want the paper teletype-like property it provides; i.e., if I need to make a small edit in a file and I want that to appear in my scrollback for whatever reason. It’s hard to have a specific sequence of edits appear in your terminal history with a visual editor.

                                                                            I recognise that I’m somewhat on the fringe, though.

                                                                            1. 4

                                                                              ssh-keygen -R host is probably the better tool to use than ed for that.

                                                                            1. 2

                                                                                If you want the other way (i.e., talking to a Mercurial repo using the git CLI), use git cinnabar.
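
                                                                                Roughly like this, if I’m remembering the cinnabar remote-helper syntax right (the repo URL is just an example):

                                                                                    # clone a Mercurial repo through the hg:: remote helper that cinnabar provides
                                                                                    git clone hg::https://hg.mozilla.org/mozilla-unified
                                                                                    # after that, fetch/pull/push/log behave like any other git remote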

                                                                              1. 2

                                                                                But… why would you ever want that…!?

                                                                                1. 4

                                                                                  Because you already know git and have to use mercurial?

                                                                                  1. 3

                                                                                    Mozilla uses Mercurial for Firefox. Lots of people know git. They use this to contribute anyway. (This includes me. Ask me anything :-))

                                                                                    1. 1

                                                                                      If it’s powerful enough to abstract away the svn-like weirdness of Mercurial into a purely git workflow, that’s a huge win in and of itself.

                                                                                  1. 5

                                                                                    It is still useful in shell scripting though, given it can edit a file in-place.
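
                                                                                    For example, something along these lines (the file and the substitution are made up, just to show the shape of it):

                                                                                        # in-place substitution without sed -i or a temp file; -s keeps ed quiet
                                                                                        printf '%s\n' ',s/eth0/eth1/g' w q | ed -s /etc/example.conf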

                                                                                    1. 3

                                                                                      I only recently discovered the use of ed or ex in scripting and it has been a life saver. I was trying to do complex edits with sed and awk, which was hard enough even before you consider portability. The common answer seemed to be Perl, but I wasn’t really happy with that either.

                                                                                    1. 0

                                                                                        In all honesty, just man bash or whatever shell you have. Most of these articles just republish what is very easy to find in bash’s manpage.

                                                                                      1. 2

                                                                                        Author suggests using minimalist shells such as dash or busybox for everyday use. Anyone here doing this?

                                                                                        1. 6

                                                                                          I’m not sure s/he does - it suggests using a basic shell for scripts.

                                                                                          1. 1

                                                                                            good point

                                                                                          2. 3

                                                                                            I use mksh, which is a sort of “modern ksh” (it’s the MirBSD Korn Shell).

                                                                                            1. 2

                                                                                              For scripts? Of course! If you use a big, fancy shell to run your scripts you can more easily accidentally use a non-standard feature. Also, the big, fancy shells are noticeably slower.
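
                                                                                              The classic example of the kind of thing that slips through (a hypothetical snippet): it happens to work when bash runs the script, but a minimal /bin/sh like dash refuses it:

                                                                                                  #!/bin/sh
                                                                                                  # [[ ]] and the == glob match are bashisms; a strict POSIX sh chokes here
                                                                                                  if [[ "$1" == foo* ]]; then
                                                                                                      echo "matched"
                                                                                                  fi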

                                                                                              1. 2

                                                                                                /bin/sh on FreeBSD, ash-derivative.

                                                                                                1. 1

                                                                                                  God no. Unless your shell use cases are exceedingly simple, writing pure POSIX shell scripts is tedious.

                                                                                                  bash out of the box has useful things like:

                                                                                                  • parameter transformations like @Q, which make it easy to safely quote values for eval
                                                                                                  • pattern substitution
                                                                                                  • an ERE regexp engine
                                                                                                  • hash maps and arrays
                                                                                                  • mapfile
                                                                                                  • compgen

                                                                                                  and a lot more; a couple of these are sketched below.
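
                                                                                                  For instance (throwaway examples, nothing from any particular article):

                                                                                                      declare -A owner=([web01]=alice [db01]=bob)  # associative array (hash map)
                                                                                                      echo "db01 belongs to ${owner[db01]}"
                                                                                                      arg='some; untrusted input'
                                                                                                      eval "copy=${arg@Q}"                         # @Q re-quotes the value so eval is safe
                                                                                                      mapfile -t logs < <(find . -name '*.log')    # read lines straight into an array
                                                                                                      if [[ "$HOSTNAME" =~ ^web[0-9]+$ ]]; then    # built-in ERE matching
                                                                                                          echo "this is a web box"
                                                                                                      fi
                                                                                                      path=/usr/local/bin
                                                                                                      echo "${path//\//:}"                         # pattern substitution: / -> :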

                                                                                                  1. 2

                                                                                                    My ‘usual’ response to this type of thing is:

                                                                                                    Bash is not universal, and has ‘interesting’ behaviour in various versions.

                                                                                                    If you find POSIX shell is too simplistic (or the program is too complex), it’s likely that you just need to use a more complex language, not rely on Bash and call it a shell script.

                                                                                                1. 4

                                                                                                  Or maybe shells were pretty much built to be primitive user interfaces and not serious “typed” languages. Stop trying to see shells as “scripting” and a lot of your problems become secondary.

                                                                                                  Seriously, just use a REAL scripting language; then you can readdir() or whatever without having to depend on ls and all the weird ways you can invoke it.

                                                                                                  1. 4

                                                                                                    While I understand your argument, you should consider that we use the term “script” instead of, say, “command” exactly because scripts were originally a sequence of shell commands saved in a file for future convenience.

                                                                                                    So there is a continuum between the glue provided by a shell and a full interpreted programming language.

                                                                                                    Shells were designed to glue small programs providing specific features into larger ones.

                                                                                                    Interpreted languages are designed for writing larger programs in the first place, by composing the available libraries that provide the specific features.