1. 36

    Such irony in the title here–“open source” is not about you; it’s a movement to hijack the free software movement and turn it into something a company can profit from, riding on free software goodwill and stripping the political aspects that are hard to reconcile with shameless capitalism.

    I don’t think it’s what Rich meant here, but it does nicely serve to underscore the vast gulf between the oss and free software camps; if you are in software because you want to make the world a better place, move right along.

    1. 25

      it’s a movement to hijack the free software movement

      There’s a problem with this statement, it doesn’t apply to me.

      When I was open-sourcing my project I wasn’t joining any movement. I didn’t sign any contract. I use the words “open source” in a plain sense: this is source code that someone can get and use according to the posted license. I’m totally fine with any company making a profit off of this code. No company ever indoctrinated me into thinking this, and I deliberately chose the BSD license over the GPL precisely so I wouldn’t have to be associated with the Free Software movement (I don’t hate it, I just didn’t want to). Yes, for real. People like me exist.

      What I’m saying is, we already have a term meaning “open source + a particular ideology”. It’s Free Software. Please don’t try to appropriate “open source” to mean anything more than “available source code”. And no, I don’t really care what OSI thinks about this “term”. It’s their idea, not mine. I need some words to describe what I’m doing, too.

      1. 9

        When I was open-sourcing my project I wasn’t joining any movement

        That’s exactly the difference between the “free software” movement and Open Source. You made @technomancy’s point for him.

        1. 1

          It’s contradicting the framing that he’s somehow been duped out of believing in the fsf’s ideology by an open source movement.

        2. 9

          P.S. In fact, there was a time when “Free Software” also wasn’t associated with not letting companies profit from it. Here’s a classic Mark Pilgrim on this: https://web.archive.org/web/20091102023737/http://diveintomark.org/archives/2009/10/19/the-point

          Part of choosing a Free license for your own work is accepting that people may use it in ways you disapprove of.

          1. 5

            Check Selling Free Software from 1996.

            1. 6

              I came here to share this link. The GPL, and free software, was never about gratis, never about not paying for software. It has always been about liberty and the freedom to control one’s own software.

            2. 3

              2009 is classic? Am I old?

              1. 1

                “Classic” in the sense of “explains well”; it has nothing to do with being old :-)

            3. 5

              Just because you use a term doesn’t mean you get to define it. Saying “I don’t care what OSI thinks or why the term was invented” seems pretty strange to me… it’s their term and has a history, like it or not.

              1. 8

                What word should I use if I publish source code so people can use it but don’t care about furthering the cultural revolution?

                1. 5

                  “Open source”.

                  1. 1

                    Billionaire. In a historical interview, that’s what the CEO of Apple believed he’d become if a lot of things lined up, one being getting a whole networking stack for free from BSD developers. The other thing he envisioned was them begging for money at some point so their projects wouldn’t close down. He bragged that his main competition would be contributing their fixes back, since they had gotten themselves stuck with the licence of the revolution. Attendees were skeptical about such a one-sided deal going down.

                  2. 4

                    No :-) The only way a natural language is defined is through use, and the most common usage becomes the definition. OSI didn’t make this term theirs by simply publishing their definition; they just joined the game and have as much weight in it as every single user of the word.

                    1. 4

                      True, but also, like it or not, language evolves over time (always to the chagrin of many). This is not unique to technology or English. At the end of the day it doesn’t matter what either OSI or /u/isagalaev thinks; society at large makes the definitions.

                      Having said that, if you step outside of the FOSS filter bubble, it seems pretty clear to me that society leans towards /u/isagalaev’s definition.

                      1. 3

                        Also, as a sensible dictionary would, Merriam-Webster defines both current interpretations of it: https://www.merriam-webster.com/dictionary/open-source

                    2. 4

                      we already have a term meaning “open source + a particular ideology”. It’s Free Software.

                      You can’t remove politics from this question; the act of pretending you can is in itself a political choice to support the status quo.

                      1. 2

                        You can remove “politics” from open source, and that is precisely what open source has done.

                        The term open source can be operationally defined (i.e., descriptive, constructed, and demonstrable). From Wikipedia, citing the book “Understanding Open Source & Free Software Licensing.” (Though feel free to use Merriam Webster or the OED as a substitute): “source code is released under a license in which the copyright holder grants users the rights to study, change, and distribute the software to anyone and for any purpose.”

                        The license terms selected are those that most parsimoniously accomplish the stated definition (i.e., make it possible for the stated definition to become externally correspondent and existentially possible): the fewest rules (formulas, statements, decisions) needed to accomplish the work, producing a limited number of legal operations (rights, grants, privileges) that can be fully accounted for.

                        It is the deflationary nature of the process that removes “politics”: it makes the license commensurable and testable while removing suggestion, loading, framing, or overloading. BSD/MIT are small and shrinking, whereas GPL 2/3 are large and growing. That’s the difference.

                        1. 2

                          “source code is released under a license in which the copyright holder grants users the rights to study, change, and distribute the software to anyone and for any purpose.”

                          You can still get patent sued for that due to laws paid for by lobbyists. The effects of politicians on what we can and can’t do with open-source mean it’s inherently political. The people who say they want its benefits with no interest in politics or whose licenses don’t address it are still involved in a political game: they’re just not players in it.

                          1. 1

                            I’m not sure why you think I’m trying to “remove politics”. Of course I have some political view on this, however vague it might be. That’s totally beside the point. The point is that I don’t want to proclaim/discuss my political views every time I want to say that the code is available. It’s a completely valid desire.

                          2. 1

                            Why BSD license over public domain? The latter makes the source code more “available”, does it not?

                            (If you wonder how I feel about the GPL, check my repos.)

                            1. 11

                              The latter makes the source code more “available”, does it not?

                              No. In jurisdictions that don’t recognise public domain (e.g. France) and in which authors cannot give up their copyright, giving it to the public domain is meaningless and it’s as if the code has no free license at all. It’s the same as “all rights reserved”.

                              1. 2

                                That’s very interesting. Would folks in such jurisdictions be interested in working together with others to reform copyright law? Perhaps among .. other things?

                                1. 2

                                  Why? It’s a different branch of copyright law, and the idea of authorship being something you cannot give up is fundamental to those systems. You can only perpetually license.

                                  CC0 is a great license to use in those cases, btw.

                                  1. 2

                                    Why?

                                    One reason being that some people think copyright, or perhaps even more generally, intellectual property, is unethical. Another reason could be a desire for a single simple concept of “public domain,” perhaps similar to what we have in the US.

                              2. 1

                                I like the idea of retaining an exclusive right to the project’s name; BSD is explicit about it.

                            2. 10

                              Companies are profiting massively from both. The License Zero author figured out the reason: FOSS authors focused on distribution methods instead of results. That’s why Prosperity straight up addresses commercial use, like many non-free licenses do, and the other one says any change has to be submitted back.

                              The license needs to explicitly mention them making money or sharing all changes to achieve what you’re describing. That, plus some patent stuff. The “free” licenses trying to block commercial exploitation are neither believably free nor actually stopping commercial exploitation, given that companies like IBM (a massive capitalist) bet the farm on them. I mean, the results should prove they don’t work for such goals, but people keep pushing the old ways to achieve them.

                              Nope. Just reinforcing existing systems of exploitation by the likes of IBM. We need new licenses that send more money and/or code improvements back.

                              1. 3

                                It should not be the job of a license enforced by copyright to extract rents. That’s the playbook we are fleeing.

                                1. 2

                                  ““open source” is not about you; it’s a movement to hijack the free software movement and turn it into something a company can profit from”

                                  The commenter wrote as if they expected whatever license or philosophy was in use to prevent companies from using the software for profit or with exploitation as a central focus. Several companies are making billions leveraging FOSS software. One even lobbies against software freedom using patent law, since suits won’t affect it. So, if the goal is stopping that and spreading software freedom, then the so-called “free” licenses aren’t working. Quite the opposite: they’re moving billions into the hands of the worst lobbying companies imaginable.

                              2. 2

                                I just don’t see “open-source” being a hijack of “free software” for corporate purposes. Why would corporations care? They can exploit the free labor of free software just as much; the politics are not visible in the final software product. If anything, it seems like the social goals of free software have been diluted by other programmers who like the technical side of it but neither care about nor agree with the politics.

                                1. 3

                                  Why would corporations care? They can exploit the free labor of free software just as much

                                  Depends on the market. If it’s software they sell directly, the copyleft requirement means they have to give up their changes. Those changes might be generating the customers. They might also be causing lock-in. Better for them to keep their changes secret.

                                  Your point remains if it’s anything that lets them dodge the part about returning changes, esp SaaS.

                                  1. 3

                                    I just don’t see “open-source” being a hijack of “free software” for corporate purposes.

                                    It’s not really a matter of opinion. That hijacking is exactly what happened in 1998. The fact that today you forgot that this is what happened means that it worked: you stopped thinking about free software, as the OSI intended to happen in 1998.

                                    OSI was created to say “open source, open source, open source” until everyone thought it was a natural term, with the goal of attracting corporate interests. They even called it an advertising campaign for free software. Their words, not mine.

                                1. 15

                                  Your thinkpad is shared infrastructure on which you run your editor and forty-seven web sites run their JavaScript. Is that a problem for you?

                                  1. 2

                                    Mmm what did you mean by this? I didn’t get it.

                                    1. 13

                                      In “We Need Assurance”, Brian Snow summed up much of the difficulty of securing computers:

                                      “The problem is innately difficult because from the beginning (ENIAC, 1944), due to the high cost of components, computers were built to share resources (memory, processors, buses, etc.). If you look for a one-word synopsis of computer design philosophy, it was and is SHARING. In the security realm, the one word synopsis is SEPARATION: keeping the bad guys away from the good guys’ stuff!

                                      So today, making a computer secure requires imposing a “separation paradigm” on top of an architecture built to share. That is tough! Even when partially successful, the residual problem is going to be covert channels. We really need to focus on making a secure computer, not on making a computer secure – the point of view changes your beginning assumptions and requirements! “

                                      Although security features were added, the degree to which things are shared and packed close together only increased over time to meet market requirements. Then researchers invented hundreds of ways to secure code and OS kernels. Not only were most ignored, the market shifted to turning browsers into OSes running malicious code in a harder-to-analyze language whose compiler (a JIT) was harder to secure due to timing constraints. Only a handful of high-security projects, like IBOS and Myreen, even attempted it. So, browsers running malicious code are a security threat in a lot of ways.

                                      That’s a subset of two larger problems:

                                      1. Any code in your system that’s not verified to have specific safety and security properties might be controlled by attackers upon malicious input.

                                      2. Any shared resource might leak your secrets to a malicious observer via covert channels, whether storage or timing. Side channels are basically the same concept applied more broadly, as in the physical world. Even the LEDs on your PC might leak internal state of the processor, depending on the design.

                                      1. 2

                                        Hmm. I had a friend yonks ago who worked on BAE’s STOP operating system, that supposedly uses complex layers of buffers to isolate programs. I wonder how it’s stood up against the many CPU vulnerabilities.

                                        1. 4

                                          I’ve been talking about STOP for a while but rarely see it mentioned. Cool that you knew someone who worked on it. Its architecture is summarized here along with GEMSOS’s. I have a detailed one for GEMSOS tomorrow, too, if not previously submitted. On the original implementation (SCOMP), the system also had an IOMMU that integrated with the kernel. That concept was re-discovered some time later.

                                          As far as your question goes, I have no idea. These two platforms, along with SNS Server, have had no reported hacks for a long time. You know they have vulnerabilities, though. The main reasons I think the CPU vulnerabilities will affect them are (a) they’re hard to avoid and (b) certification requirements mean they rarely change these systems. They’re probably vulnerable, especially to RAM attacks. Throw network Rowhammer at them. :)

                                        2. 2

                                          Thanks, that was really interesting and eye opening on the subject. I never saw it that way! :)

                                        3. 5

                                          I think @arnt is saying that website JavaScript can exploit CPU bugs, so by browsing the internet your machine is “shared infrastructure”.

                                          1. 6

                                            Rowhammer, for example, had a JavaScript implementation, and Firefox (and others) have introduced mitigations to prevent those sorts of attacks. Firefox also introduced mitigations for Meltdown and Spectre because they could be exploited from WASM/JS… so it makes sense to mistrust any site you load on the internet, especially if you have an engine that can JIT (but all engines are suspect; look at how many pwn2own wins are via Safari or the like).

                                            1. 3

                                              If browsers have built-in mitigation for this sort of thing, isn’t this an argument in favor of disabling the OS-level mitigation? JavaScript is about the only untrusted code that I run on my machine, so if that’s already covered I don’t see a strong reason to take a hit on everything I run.

                                              1. 4

                                                I think the attack surface is large enough even with simple things like JavaScript that I’d be willing to take the hit, though I can certainly understand certain workloads where you wouldn’t want to, like gaming or scientific computing.

                                                For example, JavaScript can be introduced in many locations, like PDFs, Electron, and so on. Also, there are things like Word documents, such as this RTF remote code execution for MS Word. Additionally, the mitigations for browsers are just that, mitigations; things like retpolines work in a larger setting with more “surface area” covered, vs timing mitigations or the like in browsers. It’s kinda like W^X page protections or ASLR: the areas where you’d need that are quite small, but it’s harder to find the individual applications with exploits and easier to just apply it wholesale to the entire system.

                                                Does that make sense?

                                                tl;dr: JS is basically everywhere in everything, so it’s harder to just apply those fixes in a single location like a browser when other things may have JS exposed as well. Furthermore, there are other languages, attack surfaces, and the like I’d be concerned about, so it’s just not worth it to only rely on browsers, which can only implement partial mitigations.

                                                1. 1

                                                  Browsers do run volatile code supplied by others more than most other attack surfaces. You may have an archive of invoices in PDF format, as I have, and those may in principle contain JavaScript, but those scripts aren’t going to change all of a sudden, and they all originate from a small set of parties (in my case my scanning software and a single-digit number of vendors). Whereas example.com may well redeploy its website every Tuesday morning, giving you the latest versions of many unaudited third-party scripts, and neither you nor your bank’s web site really trust example.com or its many third-party scripts.

                                                  IMO that quantitative difference is so large as to be described as qualitative.

                                                  1. 1

                                                    The problem is that when you bypass those protections you can have things like this NitroPDF exploit, which uses the API to launch malicious JS. I’ve used these sorts of exploits on client systems during assessments, adversarial or otherwise. So relying on one section of your system to protect you against something that is a fundamental CPU design flaw can be problematic; there’s nothing really stopping you from launching Rowhammer from PostScript itself, for example. This is why the phrase “defense in depth” is so often mentioned in security circles: there can be multiple failures throughout a system, but in a layered approach you can catch it at one of the layers.

                                                    1. 1

                                                      Oh, I’m not arguing that anyone should leave out everything except browser-based protection. Defense in depth is indisputably good.

                                                2. 3

                                                  There’s also the concept of layers of defense. Let’s say the mitigation fails. Then you want the running malicious code to be sandboxed somehow by another layer of defense, so you might reduce or prevent damage. The next idea folks had was to mathematically prove the code could never fail. What if a cosmic ray flips a bit that changes that? Uh oh. Your processor is assumed to enable security, you’re building an isolation layer on it, you make it extra isolated just in case shared resources have an effect, and now, if you’re Muen, only one of Spectre/Meltdown affected you. Layers of security are still a good idea.

                                              2. 2

                                                That’s not what I got from it. I perceived it as “You’re not taking good precautions on this low hanging fruit, why are you worried about these hard problems?”

                                                I see it constantly: everyone’s always worried about X, and then they just upload everything to an unencrypted cloud.

                                                1. 1

                                                  I actually did mean that when you browse the net, your computer runs code supplied by web site operators you may not trust, and some of those web site operators really are not trustworthy, and your computer is shared infrastructure running code supplied by users who don’t trust each other.

                                                  Your bank’s site does not trust those other sites you have open in other tabs, so that’s one user who does not trust others.

                                                  You may not trust them, either. A few hours after I posted that, someone discovered that some npmjs package with millions of downloads has been trying to steal bitcoin wallets, so that’s millions of pageviews that ran malevolent code on real people’s computers. You may not have reason to worry in this case, but you cannot trust sites to not use third-party scripts, so you yourself also are a distrustful user.

                                            2. 2

                                              This might be obvious, but I gotta ask anyway: Is there a real threat to my data when I, let’s say, google for a topic and open the first blog post that seems quite right?

                                            • Would my computer be breached immediately (like I finished loading the site and now my computer’s memory is in North Korea)?
                                              • How much data would be lost, and would the attacker be able to read any useful information from it?
                                              • Would I be infected with something?

                                            Of course I’m not expecting any precise numbers; I’m just trying to get a feel for how serious it is. Usually I felt safe enough just knowing which domains and topics (like pirated software, torrents, pron of course) to avoid, but is that not enough anymore?

                                              1. 5

                                                To answer your questions:

                                              Would my computer be breached immediately (like I finished loading the site and now my computer’s memory is in North Korea)?

                                              Meltdown provides read access to privileged memory (including enclave memory) at rates of a couple of megabits per second (let’s assume 4). This means that if you have 8 GB of RAM it is now possible to dump the entire memory of your machine in about 4.5 hours.

                                                How much data would be lost, and would the attacker be able to read any useful information from it?

                                              This depends on the attacker’s intentions. If they are smart, they just read the process table, figure out where your password-manager or ssh-keys for production are stored in ram and transfer the memory contents of those processes. If this is automated, it would take mere seconds in theory; in practice it won’t be that fast, but it’s certainly less than a minute. If they dump your entire memory, it will probably be all data in all currently running applications, and they will certainly be able to use it, since it’s basically a core dump of everything that’s currently running.

                                                Would I be infected with something?

                                                Depends on how much of a target you are and whether or not the attacker has the means to drop something onto your computer with the information gained from what I described above. I think it’s safe to assume that they could though.

                                              These attacks are quite advanced and regular hackers will always go for the low-hanging fruit first. However, if you are a front-end developer at some big bank, big corporation or government institution which could face a threat from competitors and/or economic espionage, the answer is probably yes. You are probably not the true target the attackers are after, but your system is one hell of a springboard towards their real target.

                                                It’s up to you to judge how much of a potential target you are, but when it happens, you do not want to be that guy/girl with the “patient zero”-system.

                                                Usually I felt safe enough just knowing which domains and topics (like pirated software, torrents, pron of course) to avoid, but is that not enough anymore?

                                              Correct, it is not enough anymore, because Rowhammer, Spectre and Meltdown have JavaScript or wasm variants (if they didn’t, we wouldn’t need mitigations in browsers). All you need is a suitable payload (the hardest part by far) and one simple website you frequently visit which runs on an out-of-date application (like WordPress, Drupal or Joomla, for example) to get that megabit-memory-reading Meltdown attack onto a system.

                                              The attacker still has to know which websites those are, but they could send you a phishing mail with a link or some attachment that will be opened in some environment with support for JavaScript (or something else) to obtain your browsing history. In that light it’s good to know that some e-mail clients support the execution of JavaScript in received e-mail messages.

                                              If there is one lesson to take home from Rowhammer, Spectre and Meltdown, it’s that there is no such thing as “computer security” anymore and that we cannot rely on the security mechanisms given to us by the hardware.

                                                If you are developing sensitive stuff, do it on a separate machine and avoid frameworks, libraries, web-based tools, other linked in stuff and each and every extra tool like the plague. Using an extra system, abandoning the next convenient tool and extra security precautions are annoying and expensive, but it’s not that expensive if your livelihood depends on it.

                                                The central question is: Do you have adversaries or competitors willing to go this far and spend about half a million dollars (my guesstimate of the required budget) willing to pull off an attack like this?

                                                1. 1

                                                  Wow, thanks! Assuming you know what you’re talking about, your response is very useful and informative. And exactly what I was looking for!

                                                  […] figure out where your password-manager or ssh-keys for production are stored in ram […]

                                                That is a vivid picture of the worst thing I could imagine, though I would only have to worry about my private|hobby information and deployment.

                                                  Thanks again!

                                                  1. 1

                                                    You’re welcome!

                                                      I have to admit that what I wrote above is the worst-case scenario I could come up with. But it is as the guys from Sonatype (of the Maven Nexus repository) once stated: “Developers have to become aware of the fact that what their laptops produce at home, could end up as a critical library or program in a space station. They will treat and view their infrastructure, machines, development processes and environments in a fundamentally different way.”

                                                    Yes, there are Java programs and libraries from Maven Central running in the ISS.

                                                2. 1

                                                  The classic security answer to that is that last year’s theoretical attack is this year’s nation-state attack, and next year it can be carried out by anyone who has a midprice GPU. Numbers change, fast. Attacks always get better, never worse.

                                                  I remember seeing an NSA gadget for $524000 about ten years ago (something to spy on Ethernet traffic, so small as to be practically invisible), and recently a modern equivalent for sale for less than $52 on one of the Chinese gadget sites. That’s how attacks change.

                                              1. 12

                                              If you have to have non-free firmware (and you do), I’d rather it be made by Apple instead of “Xiaomi”.

                                                Any layers of free software you add on top of that non-free foundation can never erase the fundamental truth that you don’t control your device and are therefore in the business of selecting a company to trust. And I don’t think there is much “carefully selecting a company to trust” going on here.

                                                I would love if things had gone a different way — I’d buy an iStallman phone in a heartbeat — but that’s water under the bridge.

                                                1. 10

                                                  The firmware isn’t really made by Xiaomi. It’s all Qualcomm. I’m not sure if Qualcomm even shows their source code to the actual phone vendors.

                                                  1. 4

                                                  There’s no good option for a cell phone to talk on, but there is this:

                                                    https://pyra-handheld.com/boards/pages/pyra/

                                                    1. 1

                                                      Does it make sense for them to track at the firmware level, considering that the vast majority of their users is okay with having it in userspace?

                                                      1. 1

                                                        Not sure. Better question: Would you trust them not to if there was a profitable and/or convenient reason for doing so?

                                                    1. -1

                                                    It appears that, like every other Unix shell, it is next to impossible to pipe stdout and stderr independently of each other.

                                                      1. 9

                                                        ?

                                                        bash:

                                                        $ fn() { echo stdout; echo stderr >&2; }
                                                        $ fn 2> >(tr a-z A-Z >&2) | sed 's/$/ was this/'
                                                        STDERR
                                                        stdout was this
                                                        

                                                        Perhaps one could argue the syntax is somewhat cumbersome, but far from impossible…

                                                        1. 3

                                                          dash / POSIX sh:

                                                          $ fn() { printf 'stdout\n'; printf 'stderr\n' >&2; }        
                                                          $ fn
                                                          stdout
                                                          stderr
                                                          $ fn 2>/dev/null
                                                          stdout
                                                          $ fn >/dev/null
                                                          stderr
                                                          $ (fn 3>&1 1>&2 2>&3) | tr a-z A-Z  
                                                          stdout
                                                          STDERR
                                                          $ ( ((fn 3>&1 1>&2 2>&3) | tr a-z A-Z) 3>&1 1>&2 2>&3 ) | sed -e 's/std//'
                                                          STDERR
                                                          out
                                                          
                                                          1. 1

                                                            Yes, but I never understood the whole “shuffle file descriptors” thing in sh. I mean, why can’t I do:

                                                            $ make |{ tr a-z A-Z > stdoutfile } 2| more

                                                            What does “3>&1 1>&2 2>&3” even mean? That last example I can’t even make sense of.

                                                            Then again, I don’t manage a fleet of machines—I’m primarily a developer (Unix is my IDE) and really, my only wish is a simple way to pipe stderr to a program like more. And maybe a sane syntax for looping (as I can never remember if it’s end or done and I get it wrong half the time).

                                                            1. 1

                                                            Think of it as variable assignment: Descriptor3 = Descriptor1; Descriptor1 = Descriptor2; Descriptor2 = Descriptor3. So it’s just a three-way swap of stderr and stdout.

                                                              If you want to be strict about it, the second to last example is incomplete as “stdout” was printed on stderr and “STDERR” was printed on stdout. In the last example the swap is reversed, so that I can run sed on the “real” stdout.

                                                              If you wonder why the order of the two output lines did change: it was never guaranteed to be in any order.
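
                                                            Spelled out with comments, the same swap from the example above (nothing new, just annotated):

                                                            # 3>&1   fd 3 := current stdout (the pipe into tr)
                                                            # 1>&2   fd 1 := current stderr (the terminal)
                                                            # 2>&3   fd 2 := fd 3           (the pipe into tr)
                                                            # net effect: stderr goes into the pipe and gets upcased,
                                                            #             stdout goes straight to the terminal
                                                            $ (fn 3>&1 1>&2 2>&3) | tr a-z A-Z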

                                                              1. 1

                                                                Why? It seems pointless. And that still doesn’t do what I would like to do—pipe stdout and stderr to separate programs per my made-up example.

                                                        2. 5

                                                        Not only is it possible, but it’s also possible to send/receive data on multiple, arbitrary file descriptors, unlike with POSIX shell (dunno about bash). For example:

                                                          pout = (pipe)
                                                          perr = (pipe)
                                                          run-parallel {
                                                            some_command > $pout 2> $perr
                                                            pwclose $pout
                                                            pwclose $perr
                                                          } {
                                                            cat < $pout >&2
                                                            prclose $pout
                                                          } {
                                                            cat < $perr
                                                            prclose $perr
                                                          }
                                                          
                                                          1. 3

                                                            Just to complement what @nomto said, note that in Elvish this can be easily encapsulated in a function (see https://github.com/zzamboni/elvish-modules/blob/master/util.org#parallel-redirection-of-stdoutstderr-to-different-commands-1), so you can then do something like:

                                                            > pipesplit { echo stdout-test; echo stderr-test >&2 } { echo STDOUT: (cat) } { echo STDERR: (cat) }
                                                            STDOUT: stdout-test
                                                            STDERR: stderr-test
                                                            
                                                            1. 1

                                                            Bash can sorta do it. The pipes still need to be backed by “real” “files”, though, so you’d have to do mkfifo to get close to what your example is, something like the sketch below.
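
                                                            A rough sketch of that (untested; some_command and the two readers are just placeholders):

                                                            $ dir=$(mktemp -d) && mkfifo "$dir/out" "$dir/err"   # one named pipe per stream
                                                            $ sed 's/^/OUT: /' < "$dir/out" &                    # reader for stdout
                                                            $ tr a-z A-Z < "$dir/err" &                          # reader for stderr
                                                            $ some_command > "$dir/out" 2> "$dir/err"            # writer: each stream into its own fifo
                                                            $ wait; rm -r "$dir"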

                                                              1. 2

                                                              That’s rather clunky; having to create a fifo means that you may leak an implementation detail of a script. I was stumped by this when I wanted to use gpg --passphrase-fd to encrypt data from STDIN: having to go through a fifo is a security risk in that case.

                                                          1. 9

                                                            Want to find the magical ffmpeg command that you used to transcode a video file two months ago?

                                                            Just dig through your command history with Ctrl-R. Same key, more useful.

                                                            (To be fair, you can do this in bash with history | grep ffmpeg, but it’s far fewer keystrokes in Elvish :)

                                                            Sorry, what? Bash has this by default as well (At least in Ubuntu, and every other Linux distribution I’ve used). ^r gives autocomplete on history by the last matching command.

                                                            1. 10

                                                              I hoped I had made it clear by saying “same key”. The use case is that you might have typed several ffmpeg commands, and with bash’s one-item-at-a-time ^R it is really hard to spot the interesting one. Maybe I should make this point clearer.

                                                              1. 6

                                                                That’s handy, but it is easy to add this to bash and zsh with fzf:

                                                                https://github.com/junegunn/fzf#key-bindings-for-command-line
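
                                                                 For reference, the bash wiring is just a line in .bashrc (the exact path depends on how fzf was installed; this assumes its install script generated ~/.fzf.bash):

                                                                 # in ~/.bashrc
                                                                 [ -f ~/.fzf.bash ] && source ~/.fzf.bash   # fuzzy Ctrl-R history search, plus Ctrl-T / Alt-C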

                                                                With home-manager and nix, enabling this functionality is just a one-liner:

                                                                https://github.com/danieldk/nix-home/blob/f6da4d02686224b3008489a743fbd558db689aef/cfg/fzf.nix#L6

                                                                 I like this approach, because it follows the Unix philosophy of using small orthogonal utilities. If something better than fzf comes out, I can replace it without replacing my shell.

                                                                Structured data in pipelines seems very nice though!

                                                                1. 1

                                                                  What exactly does programs.fzf.enableBashIntegration do? I just enabled it, and it seems to have made no difference.

                                                                  1. 2

                                                                    https://github.com/rycee/home-manager/blob/05c93ff3ae13f1a2d90a279a890534cda7dc8ad6/modules/programs/fzf.nix#L124

                                                                    So, it should add fzf keybindings and completions. Do you also have programs.bash.enabled set to true so that home-manager gets to manage your bash configuration?

                                                                    1. 1

                                                                      programs.bash.enabled

                                                                      Ah, enabling that did the trick (no need to set initExtra). Thanks!

                                                                      I did however have to get rid of my existing bashrc/profile. Looks like I need to port that over to home-manager …

                                                                      1. 2

                                                                        Yeah, been there, done that. In the end it’s much nicer. Now when I install a new machine, I have everything set up with a single ‘home-manager switch’ :).

                                                              2. 3

                                                                I’ve always found bash’s Ctrl+R hard to use properly; in comparison, Elvish’s history (and location) matching is like a mini-fzf, and it’s very pleasant to use.

                                                                1. 1

                                                                  I think the idea here is that it shows you more than one line of the list at once, while C-r is sometimes a bit fiddly to get to exactly the right command if there are multiple matches.

                                                                  1. 1

                                                                    For zsh try «bindkey '^R' history-incremental-pattern-search-backward» in .zshrc. Now you can type e.g. «^Rpy*http» to find «python -m http.server 1234» in your history. Still shows only one match, but it’s easier to find the right one.

                                                                    1. 1

                                                                      I use https://github.com/dvorka/hstr for history search on steroids and I am very happy with it.

                                                                    1. 1

                                                                    I would add that repeating yourself is, in some cases, also essential in performance-critical software, not just a matter of avoiding the wrong abstraction or an ugly architecture. I’m writing a painting application (like Photoshop, Krita, GIMP), and I could use the DRY philosophy on the function that plots the brushes, but I would take a serious performance drop if I did, because the if/else abstraction would happen in the middle of rasterization:

                                                                      void plot(...)
                                                                      {
                                                                      	while (row < bottom) {
                                                                      		while (col < right) {
                                                                      			if (plot.hardness == 100) {
                                                                      				/* Use simple plot */
                                                                      			} else {
                                                                      				/* Use plot with smoothness */
                                                                      			}
                                                                      		}
                                                                      	}
                                                                      }
                                                                      

                                                                      Now, imagine this with multiple parameters (hardness, roughness, density, blending, …). Instead, I copy-paste the function and rewrite with the specific algorithm inside of the nested loop.

                                                                      1. 2

                                                                        You could also assign the drawing function to a function pointer (or lambda or whatever) outside of the loop and just call that within the loop. No branching in the loop, and no duplicated code.

                                                                        1. 3

                                                                          But an indirect call (e.g. calling into a function pointer or a vtable, etc) is still a “branch” - it’s just one with an unknown number of known targets instead of two, which only adds more variables to the equation.

                                                                          In order for this to be equivalent to inlining the duplicate-for-each-algorithm code, one would have to convince oneself that the indirect branch predictor on the processor is going to reliably guess the CALL and RET targets, that the calling convention doesn’t spill more registers than the inlined execution would (ideally it’s a leaf function so the compiler can elide call prologue/epilogues), and that the processor’s speculative execution system doesn’t have its memory dependency information invalidated by the presence of the call.

                                                                        Caveat - the above might be less true if you’re programming in a managed runtime - if that function call can be inlined by the JIT compiler at runtime (many high-performance runtimes are very aggressive about function inlining, so it’s not an unrealistic thing to expect), then hopefully the above issues would be lessened.

                                                                        2. 2

                                                                        If you get a chance, there’s a chapter in Beautiful Code about runtime code generation for image processing that, IIRC, uses stenciling and plotting as a running example; you might find it relevant to your interests.

                                                                          1. 2

                                                                            Thanks, I will make sure to check it out!

                                                                        1. 11

                                                                        Git via mail can be nice, but it’s very hard to get used to. It took me ages to set up git send-email correctly, and my problem in the end was that our local router blocked SMTP connections to non-whitelisted servers. This is just one way it can go wrong. I can imagine there are many more.

                                                                        And just another minor comment: everyone knows that Git is decentralized (not federated, btw); the issue is GitHub, i.e. the service that adds a certain something to version control, like profiles, stars, commit-stats, fork-counts, followers, etc. A one-sided, technical perspective ignores all of these things as useless and unnecessary – falsely. Centralized platforms have an unfair advantage in this respect, since there’s only one voice, one claim and no way to question it. One has to assume they make sure that the accounts are all real, and not spam-bots, otherwise nothing makes sense.

                                                                        Overcoming this issue is the big task. And email, which is notoriously bad at any identity validation, might not be the best thing for it. To be fair, ActivityPub currently isn’t either, but the thought that different services and platforms could interoperate (and some of these might even support an email interface) seems at the very least interesting to me.

                                                                          1. 13

                                                                            Article author here. As begriffs said, I propose email as the underlying means of federating web forges, as opposed to ActivityPub. The user experience is very similar and users who don’t know how to or don’t want to use git send-email don’t have to.

                                                                            Everyone knows that Git is decentralized (not federated, btw.)

                                                                            The point of this article is to show that git is federated. There are built in commands which federate git using email (a federated system) as the transport.

                                                                            GitHub, ie the service that adds a certain something to version control, like profiles, stars, commit-stats, fork-counts, followers,etc. A one-sided, technical perspective ignores all of these things as useless and unnecessary – falsely

                                                                          Profiles can live on sr.ht, or on any sr.ht instance, or on any instance of any other forge software which federates with email. A person would probably still have a single canonical place they live on, and a profile there which lists their forks of software from around the net. Commit stats are easily generated on such a platform as well. Fork counts and followers (stars?) I find much less interesting, they’re just ego stroking and should be discarded if technical constraints require.

                                                                            1. 4

                                                                              I don’t think that’s a strong argument in favor of git being federated. I don’t think it matters either.

                                                                            Git in and of itself does not care about the transport. It does not care whether you use HTTP, git:// or email to bring your repo up to date. You can even use a USB stick.

                                                                            I’d say git is communication-format agnostic; federation is all about standardizing communication. Using email with git is merely another way to pipe git I/O; git itself does not care.
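
                                                                            For instance, the sneakernet route works with nothing but built-in commands (a sketch; the paths are made up):

                                                                            $ git bundle create /mnt/usb/myproject.bundle --all   # pack the whole repo into one file
                                                                            $ # ...carry the stick to the other machine, then:
                                                                            $ git clone /mnt/usb/myproject.bundle myproject       # clone (or fetch) straight from the bundle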

                                                                              1. 2

                                                                                git send-email literally logs into SMTP and sends a patch with it.

                                                                                git am and git format-patch explicitly refer to mailboxes.

                                                                                Email is central to the development of Linux and git itself, the two projects git is designed for. Many git features are designed with email in mind.
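
                                                                              For anyone who hasn’t used it, the built-in flow looks roughly like this (the list address is made up):

                                                                              $ git format-patch origin/master                      # one mbox-style patch file per commit
                                                                              $ git send-email --to=dev@lists.example.org *.patch   # mail them over SMTP
                                                                              $ # ...and on the maintainer's side:
                                                                              $ git am patches.mbox                                 # apply patches straight from a mailbox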

                                                                                1. 4

                                                                                Yes, but ultimately neither requires nor cares about federation itself.

                                                                                send-email is IMO more of a utility function; git am and format-patch, which as you mention refer to mailboxes, have nothing to do with email’s federated nature. Neither does SMTP, tbh, at least on the client-server side.

                                                                                They’re convenience scripts that do the hard part of writing patches into mails for you; you can also just keep your mailbox on a USB stick and transport it that way. And the SMTP doesn’t need to go elsewhere either.

                                                                                I guess the best comparison is that this script is no more than a frontend for Mastodon. The frontend of Mastodon isn’t federated either; Mastodon itself is. Federation is the server-to-server part. That’s the part we care about. But git doesn’t care about that.

                                                                                  1. 9

                                                                                  I see what you’re getting at, and I have to concede that you are correct in a pedantic sense, but in practice none of it matters: git is federated via email.

                                                                                  2. 3

                                                                                  That various email utilities are included seems more like a consequence of email being the preferred workflow of git developers. I don’t see how that makes it the canonical workflow compared to pulling from remotes via HTTP or SSH; git has native support for both, after all.

                                                                                2. 1

                                                                              I believe that @tscs37 already showed that Git is distributed, since all nodes are equal (no distinction between clients and servers), while a git network can be structured in a federated fashion, or even in a centralized one. What the transport medium has to do with this is still unclear in my view.

                                                                                  Fork counts and followers (stars?) I find much less interesting, they’re just ego stroking and should be discarded if technical constraints require

                                                                              That’s exactly my point. GitHub offers a uniform standard, easily recognisable and readable (simply because everyone is used to it). This has a value and ultimately a relevance that can’t just be ignored, even if the reason is nonsense. Ignoring it would just be another example of technical naïveté.

                                                                              I’ve shown my sympathy for ideas like these before, and I most certainly don’t want to give the impression of being a GitHub apologist. All I want to remind people of is that the social aspects beyond the necessities (builds, issue trackers, …) are all things one has to seriously consider and tackle when one is interested in offering an alternative to GitHub with any serious ambitions.

                                                                                  1. 3

                                                                                    I don’t think sr.ht has to please everyone. People who want these meaningless social features will probably be happier on some other platform, while the veterans are getting work done.

                                                                                    1. 1

                                                                                      I’m fine with people using mailing-list oriented solutions (the elitism might be a bit off-putting, but never mind). I just don’t think that it’s that much better than the GitPub idea.

                                                                                      People who want these meaningless social features will probably be happier on some other platform, while the veterans are getting work done.

If having these so-called "meaningless social features" helps a project thrive and attract contributors and new users, I wouldn't consider them meaningless. But if that's not what you are interested in, that's ok too.

                                                                                3. 2

                                                                                  our local router blocked SMTP connections to non-whitelisted servers

The article says that sr.ht can optionally send the emails for you, no git send-email required: "They'll enter an email address (or addresses) to send the patch(es) to, and we'll send it along on their behalf."

Also, what mail transfer agent were you pointing git send-email at? You can have it send through gmail/fastmail/etc servers – would your router block that?
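For reference, pointing git send-email at an external provider is only a few lines of config. A sketch – the hostname and port here are assumptions you'd check against your provider's docs:

    # ~/.gitconfig
    [sendemail]
        smtpServer = smtp.fastmail.com
        smtpUser = you@fastmail.com
        smtpEncryption = ssl
        smtpServerPort = 465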

                                                                                  GitHub […] adds a certain something to version control, like profiles, stars, commit-stats, fork-counts, followers

                                                                                  How about mirroring code on github to collect stars? Make it a read-only mirror by disabling issues and activating the pull request rejection bot. Git, Linux, and Postgres do this, and probably other projects do too.

                                                                                  Email […] is notoriously bad at any identity validation

                                                                                  Do SPF, DKIM and DMARC make this no longer true, or are there still ways to impersonate people?

                                                                                  1. 1

Also, what mail transfer agent were you pointing git send-email at?

Fastmail. That was too esoteric for the default settings of my router. And if it weren't for support, I would never have guessed that that was the issue, since the whole interface is so alien to most people (as are the questions it raises: did I send the right commits, is my message formatted correctly, etc.).

                                                                                    How about mirroring code on github to collect stars? Make it a read-only mirror by disabling issues and activating the pull request rejection bot. Git, Linux, and Postgres do this, and probably other projects do too.

                                                                                    I’m not saying it’s perfect (again, I’m no GitHub apologist) – my point is that it isn’t irrelevant!

                                                                                    Do SPF, DKIM and DMARC make this no longer true, or are there still ways to impersonate people?

Yes, if someone doesn't use these things. And claiming "oh, but they just should" raises the entry barrier yet again, when it would already be too high as it is.

                                                                                    1. 1

Yes, if someone doesn't use these things. And claiming "oh, but they just should" raises the entry barrier yet again, when it would already be too high as it is.

                                                                                      This doesn’t damn the whole idea, it just shows us where open areas of development are.

                                                                                1. 7

I still don't get why sending patches via email is preferable to creating git remotes and pulling changes from them; the latter seems to fit better into git's decentralized model. Even if you have scripts to create emails out of patches and apply them, it still seems clunky to try to recreate git's state over text.

                                                                                  1. 8

The main advantage of email is that you can do code review over email, too - you can chop up the patch, reply to specific lines of code inline, and carry on the discussion the same way you would any other email thread.

                                                                                    For bigger changes, though, it doesn’t scale. For that, we have git request-pull:

                                                                                    https://www.git-scm.com/docs/git-request-pull

                                                                                    I intend to facilitate both approaches on sr.ht.
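A request-pull message is generated from a branch you've published somewhere the maintainer can fetch from. A minimal sketch – the URL and branch names are made up:

    # publish the branch
    git push https://example.com/me/project.git my-feature
    # summarize what's new relative to the maintainer's branch, ready to paste into an email
    git request-pull origin/master https://example.com/me/project.git my-feature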

                                                                                    1. 3

                                                                                      Email simplifies things.

                                                                                      You don’t need to know git ;)

                                                                                    2. 4

                                                                                      Even if you have scripts to create emails out of patches and apply them, it still seems clunky to try to recreate git’s state over text.

Git was made with email workflows in mind from the start: git format-patch prepares commits for sending, and git am applies patches received via email. The patch format itself, for that matter, was also designed with email in mind.
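A minimal sketch of that round trip (the mailing-list address is made up):

    # sender: turn the last three commits into mail-ready patches and send them
    git format-patch -3 --cover-letter -o outgoing/
    git send-email --to=project-devel@example.org outgoing/*.patch

    # receiver: apply the whole series straight from a mailbox file
    git am patches.mbox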

                                                                                    1. 12

                                                                                      As someone who uses arch on all my developer machines, arch is a horrible developer OS, and I only use it because I know it better than other distros.

It was good 5-10 years ago (or I was just less sensitive back then), but now pacman -Syu is almost guaranteed to break or change something for the worse, so I never update, which means I can never install any new software because everything is dynamically linked against the newest library versions. And since the arch way is to be bleeding edge all the time, asking things like "is there an easy way to roll back an update because it broke a bunch of stuff and brought no improvements" gets you laughed out the door.

                                                                                      I’m actually finding myself using windows more now, because I can easily update individual pieces of software without risking anything else breaking.

                                                                                      @Nix people: does NixOS solve this? I believe it does but I haven’t had a good look at it yet.

                                                                                      1. 14

                                                                                        Yes, Nix solves the “rollback” problem, and it does it for your entire OS not just packages installed (config files and all).

With Nix you can also have different versions of tools installed at the same time, without the usual python3.6/python2.7 binary-name juggling most places do: just drop into a new nix-shell and install the one you want, and in that shell that's what you have. There is so much more. I use FreeBSD now because I just like it more overall, but I really miss Nix.

EDIT: Note, FreeBSD solves the rollback problem as well, just differently. In FreeBSD, if you're using ZFS, just create a boot environment before the upgrade and, if the upgrade fails, roll back to the pre-upgrade boot environment.
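A rough sketch of what both of these look like in practice (package attribute names and tooling vary; treat this as illustrative, not exact):

    # NixOS: roll the whole system back to the previous generation
    sudo nixos-rebuild switch --rollback
    # per-user package set, same idea
    nix-env --rollback
    # throwaway shell with specific tool versions, without touching the system
    nix-shell -p python27 -p python36

    # FreeBSD on ZFS: boot-environment dance around an upgrade
    beadm create pre-upgrade
    # ...upgrade; if it goes badly:
    beadm activate pre-upgrade && reboot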

                                                                                        1. 9

Being a biased Arch developer, I rarely have Arch break when updating. Sometimes I have to recompile our own C++ stack due to soname bumps, but for the rest it's stable for me.

For Arch there is indeed no rollback mechanism, although we do provide an archive repository with old versions of packages. Another option would be BTRFS/ZFS snapshots. I believe the general Arch opinion is that, instead of rolling back, fixing the actual issue at hand is more important.

                                                                                          1. 8

I believe the general Arch opinion is that, instead of rolling back, fixing the actual issue at hand is more important.

I can see some people might value that perspective. For me, I like the ability to plan when I will solve a problem. For example, I upgraded to the latest CURRENT in FreeBSD the other day and it broke. But I was about to start my work day, so I just rolled back and I'll figure it out when I have time to address it. As with all things, what one prefers depends on one's personality.

                                                                                            1. 2

                                                                                              For me, I like the ability to plan when I will solve a problem.

But on stable distros you don't even have that choice. Ubuntu 16.04 (and 18.04 as well, I believe) ships an ncurses version that only supports up to 3 mouse buttons, for ABI stability or something. So now if I want scrolling up with the wheel to work, I have to rebuild everything myself and maintain some makeshift local software repository.

                                                                                              And that’s not an isolated case, from a quick glance at my $dayjob workstation, I’ve had to build locally the following: cquery, gdb, ncurses, kakoune, ninja, git, clang and other various utilities. Just because the packaged versions are ancient and missing useful features.

On the other hand, I've never had to do any of this on my arch box, because the packaged software is much closer to upstream. And if an update breaks things, I can also roll back from that update until I have time to fix things.

                                                                                              1. 2

                                                                                                I don’t use Ubuntu and I try to avoid Linux, in general. I’m certainly not saying one should use Ubuntu.

And if an update breaks things, I can also roll back from that update until I have time to fix things.

Several people here said that Arch doesn't really support rollback, which is what I was responding to. If it supports rollback, great. That means you can choose when to solve a problem.

                                                                                                1. 1

                                                                                                  I don’t use Ubuntu and I try to avoid Linux, in general. I’m certainly not saying one should use Ubuntu.

                                                                                                  Ok, but that’s a problem inherent to stable distros, and it gets worse the more stable they are.

                                                                                                  Several people here said that Arch doesn’t really support rollback

                                                                                                  It does, pacman keeps local copies of previous versions for each package installed. If things break, you can look at the log and just let pacman install the local package.

                                                                                                  1. 1

                                                                                                    It does, pacman keeps local copies of previous versions for each package installed. If things break, you can look at the log and just let pacman install the local package.

Your description makes it sound like pacman doesn't support rollbacks, but you can get that behaviour if you have to and are clever enough. Those seem like very different things to me.

Also, what you said about stable distros doesn't seem to match my experience with FreeBSD. FreeBSD is 'stable', yet packages from ports tend to be fairly up to date (or at least I rarely run into outdated ones, except for a few).

                                                                                                    1. 1

I'm almost certain any kind of "rollback" functionality in pacman is going to be less powerful than what's in Nix, but it is very simple to roll back packages. An example transcript:

                                                                                                      $ sudo pacman -Syu
                                                                                                      ... some time passes, after a reboot perhaps, and PostgreSQL doesn't start
                                                                                                      ... oops, I didn't notice that PostgreSQL got a major version bump, I don't want to deal with that right now.
                                                                                                      $ ls /var/cache/pacman/pkg | rg postgres
                                                                                                      ... ah, postgresql-x.(y-1) is sitting right there
                                                                                                      $ sudo pacman -U /var/cache/pacman/pkg/postgres-x.(y-1)-x86_64.pkg.tar.xz
                                                                                                      $ sudo systemctl start postgres
                                                                                                      ... it's alive!
                                                                                                      

                                                                                                      This is all super standard, and it’s something you learn pretty quickly, and it’s documented in the wiki: https://wiki.archlinux.org/index.php/Downgrading_packages

My guess is that this is "just downgrading packages" whereas "rollback" probably implies something more powerful, e.g., "roll back my system to exactly how it was before I ran the last pacman -Syu." AFAIK, pacman does not support that, and it would be pretty tedious to actually do it if one wanted to, but it seems scriptable in limited circumstances. I've never wanted/needed to do that though.

                                                                                                      (Take my claims with a grain of salt. I am a mere pacman user, not an expert.)

                                                                                                      EDIT: Hah. That wiki page describes exactly how to do rollbacks based on date. Doesn’t seem too bad to me at all, but I didn’t know about it: https://wiki.archlinux.org/index.php/Arch_Linux_Archive#How_to_restore_all_packages_to_a_specific_date
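For the curious, the wiki recipe boils down to pointing pacman at an Arch Linux Archive snapshot for a chosen date (the date below is just an example) and downgrading everything to it:

    # /etc/pacman.d/mirrorlist — make this the only entry
    Server=https://archive.archlinux.org/repos/2018/08/01/$repo/os/$arch

    # then sync and allow downgrades
    sudo pacman -Syyuu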

                                                                                        2. 12

now pacman -Syu is almost guaranteed to break or change something for the worse

I have the opposite experience. I've been an Arch user since 2006, and updates were a bit trickier back then; they broke stuff from time to time. Now nothing ever breaks (I run Arch on three different desktop machines and two servers, plus a bunch of VMs).

                                                                                          I like the idea of NixOS and I have used Nix for specific software, but I have never made the jump because, well, Arch works. Also with Linux, package management has never been the worst problem, hardware support is, and the Arch guys have become pretty good at it.

                                                                                          1. 3

                                                                                            I have the opposite experience

I wonder if the difference in experience is some behaviour you've picked up that others haven't. For example, I've found that friends' children end up breaking things in ways that I never would, just because I know enough about computers to never even try it.

                                                                                            1. 2

I think it's a matter of performing -Syu updates often (every few days or even daily) instead of once per month. Rare updates indeed sometimes break things but when done often, it's pretty much update and that's it.

I've been an Arch user for 6 years, and there were maybe 3 times during those 6 years when something broke badly (I was unable to boot). Once it was my fault; the second and third were related to an nvidia driver and Xorg incompatibility.

                                                                                              1. 3

                                                                                                Rare updates indeed sometimes break things but when done often, it’s pretty much update and that’s it.

It's sometimes also a matter of bad timing. Now, every time before doing a pacman -Syu, I check /r/archlinux and the forums to see if someone is complaining. If so, I tend to wait a day or two until the devs push out updates to the broken packages.

                                                                                              2. 1

                                                                                                That’s entirely possible.

                                                                                            2. 4

I have quite the contrary experience: I have pacman run automatically in the background every 60 minutes, and all the breakage I suffer is from human-induced configuration errors (such as a misconfigured boot loader or fstab).

                                                                                              1. 1

                                                                                                Things like Nix even allow rolling back from almost all user configuration errors.

                                                                                                1. 3

                                                                                                  Would be nice, yeah, though I never understood or got Nix really. It’s a bit complicated and daunting to get started and I found the documentation to be lacking.

                                                                                              2. 3

                                                                                                How often were you updating? Arch tends to work best when it’s updated often. I update daily and can’t remember the last time I had something break. If you’re using Windows, and coming back to Arch very occasionally and trying to do a huge update you may run into conflicts, but that’s just because Arch is meant to be kept rolling along.

I find Arch to be a fantastic developer system. It lets me have access to all the tools I need, and allows me to keep up with the latest technology. It also has the bonus of helping me understand what my system is doing, since I have configured everything.

As for rollbacks, I use ZFS boot environments. I create one prior to every significant change, such as a kernel upgrade, and that way if something did go wrong and it wasn't convenient to fix the problem right away, I know that I can always move back into the last environment and everything will be working.

                                                                                                1. 2

                                                                                                  How do you configure ZFS boot environments with Arch? Or do you just mean snapshots?

                                                                                                  1. 3

                                                                                                    I wrote a boot environment manager zedenv. It functions similarly to beadm. You can install it from the AUR as zedenv or zedenv-git.

It integrates with a bootloader if there's a "plugin" for it, to create boot entries and keep multiple kernels around at the same time. Right now there's a plugin for systemd-boot, and one for GRUB is in the works; it just needs some testing.
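Roughly, going by the beadm-style interface described above (command names from memory; check the zedenv docs):

    zedenv create pre-kernel-upgrade     # snapshot the current boot environment
    # ...upgrade, reboot, something breaks...
    zedenv list                          # see available boot environments
    zedenv activate pre-kernel-upgrade   # boot back into the old environment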

                                                                                                    1. 2

                                                                                                      Looks really useful. Might contribute a plugin for rEFInd at some point :-)

                                                                                                      1. 1

                                                                                                        Awesome! If you do, let me know if you need any help getting started, or if you have any feedback.

                                                                                                        It can be used as is with any bootloader, it just means you’ll have to write the boot config by hand.

                                                                                              1. 15

                                                                                                I recently discovered how horribly complicated traditional init scripts are whilst using Alpine Linux. OpenRC might be modern, but it’s still complicated.

                                                                                                Runit seems to be the nicest I’ve come across. It asks the question “why do we need to do all of this anyway? What’s the point?”

                                                                                                It rejects the idea of forking and instead requires everything to run in the foreground:

                                                                                                /etc/sv/nginx/run:

                                                                                                #!/bin/sh
                                                                                                exec nginx -g 'daemon off;'
                                                                                                

                                                                                                /etc/sv/smbd/run

                                                                                                #!/bin/sh
                                                                                                mkdir -p /run/samba
                                                                                                exec smbd -F -S
                                                                                                

                                                                                                /etc/sv/murmur/run

                                                                                                #!/bin/sh
                                                                                                exec murmurd -ini /etc/murmur.ini -fg 2>&1
                                                                                                

                                                                                                Waiting for other services to load first does not require special features in the init system itself. Instead you can write the dependency directly into the service file in the form of a “start this service” request:

                                                                                                /etc/sv/cron/run

                                                                                                 #!/bin/sh
                                                                                                 sv start socklog-unix || exit 1
                                                                                                 exec cron -f
                                                                                                

                                                                                                Where my implementation of runit (Void Linux) seems to fall flat on its face is logging. I hoped it would do something nice like redirect stdout and stderr of these supervised processes by default. Instead you manually have to create a new file and folder for each service that explicitly runs its own copy of the logger. Annoying. I hope I’ve been missing something.

                                                                                                The only other feature I can think of is “reloading” a service, which Aker does in the article via this line:

                                                                                                ExecReload=kill -HUP $MAINPID

                                                                                                I’d make the argument that in all circumstances where you need this you could probably run the command yourself. Thoughts?

                                                                                                1. 6

                                                                                                  Where my implementation of runit (Void Linux) seems to fall flat on its face is logging. I hoped it would do something nice like redirect stdout and stderr of these supervised processes by default. Instead you manually have to create a new file and folder for each service that explicitly runs its own copy of the logger. Annoying. I hope I’ve been missing something.

The logging mechanism works like this to be stable and to only lose logs if both runsv and the log service die. Another thing about separate logging services is that stdout/stderr are not necessarily tagged; adding all this stuff to runsv would just bloat it.

There is definitely room for improvement, as logger(1) has been broken for some time in the way Void uses it at the moment (you can blame systemd for that). My idea to simplify logging services and centralize the way logging is done can be found here: https://github.com/voidlinux/void-runit/pull/65. For me, the ability to exec svlogd(8) from vlogger(8), to have a more lossless logging mechanism, is more important than the main functionality of replacing logger(1).
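For anyone who hasn't seen it, the per-service "extra file and folder" being discussed typically looks something like this – a sketch of an /etc/sv/nginx/log/run script, where the service name and log directory are placeholders:

    #!/bin/sh
    # write supervised stdout to /var/log/nginx with readable timestamps
    [ -d /var/log/nginx ] || mkdir -p /var/log/nginx
    exec svlogd -tt /var/log/nginx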

                                                                                                  1. 1

Ooh, thank you, having a look :)

                                                                                                  2. 6

                                                                                                    Instead you can write the dependency directly into the service file in the form of a “start this service” request

But that solves neither starting daemons in parallel, nor starting them at all if they are run in the 'wrong' order. Depending on the network being set up, for example, brings complexity to each of those shell scripts.

I'm of the opinion that a DSL of whitelisted items (systemd) is much nicer to handle than writing shell scripts, along with standardized commands instead of having to know which services accept 'reload' vs 'restart' or some other variation - those kinds of niceties are gone when each shell script is its own individual interface.

                                                                                                    1. 6

The runit/daemontools philosophy is to just keep trying until something finally runs. So if the order is wrong, presumably the service dies when a service it depends on is not running, in which case it'll just get restarted. So eventually things progress towards a functioning state. IMO, given that a service needs to handle the services it depends on crashing at any time anyway to ensure correct behaviour, I don't feel there is significant value in encoding this in an init system. A dependent service could also be moved to another machine, a setup in which this approach wouldn't work anyway.

                                                                                                      1. 3

It's the same philosophy as network-level dependencies. A web app that depends on a mail service for some operations is not going to shut down or wait to boot if the mail service is down. Each dependency should have tunable retry logic, usually with exponential backoff.
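A minimal sketch of what that looks like inside a run script – the daemon and host names here are made up:

    #!/bin/sh
    # wait for a dependency with capped exponential backoff, then run in the foreground
    delay=1
    until nc -z mailhost 25; do
        sleep "$delay"
        delay=$((delay * 2))
        [ "$delay" -gt 64 ] && delay=64
    done
    exec mydaemon --foreground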

                                                                                                      2. 4

But that solves neither starting daemons in parallel, nor starting them at all if they are run in the 'wrong' order.

That was my initial thought, but it turns out the opposite is true. The services are retried until they work. Things are definitely parallelized – there is no "exit" in these scripts, so there is no physical way of running them in a linear (non-parallel) manner.

                                                                                                        Ignoring the theory: void’s runit provides the second fastest init boot I’ve ever had. The only thing that beats it is a custom init I wrote, but that was very hardware (ARM Chromebook) and user specific.

                                                                                                      3. 5

Dependency resolution at the daemon-manager level is very important so that it can kill/restart dependent services.

                                                                                                        runit and s6 also don’t support cgroups, which can be very useful.

                                                                                                        1. 5

Dependency resolution at the daemon-manager level is very important so that it can kill/restart dependent services

Why? The runit/daemontools philosophy is just to try to keep something running forever, so if something dies, just restart it. If one restarts a service, then either those that depend on it will die or they will handle it fine and continue with their life.

                                                                                                          1. 4

                                                                                                            either those that depend on it will die or they will handle it fine

                                                                                                            If they die, and are configured to restart, they will keep bouncing up and down while the dependency is down? I think having dependency resolution is definitely better than that. Restart the dependency, then the dependent.

                                                                                                            1. 4

                                                                                                              Yes they will. But what’s wrong with that?

                                                                                                              1. 2

                                                                                                                Wasted cycles, wasted time, not nearly as clean?

                                                                                                                1. 10

                                                                                                                  It’s a computer, it’s meant to do dumb things over and over again. And presumably that faulty component will be fixed pretty quickly anyways, right?

                                                                                                                  1. 5

                                                                                                                    It’s a computer, it’s meant to do dumb things over and over again

                                                                                                                    I would rather have my computer do less dumb things over and over personally.

                                                                                                                    And presumably that faulty component will be fixed pretty quickly anyways, right?

                                                                                                                    Maybe; it depends on what went wrong precisely, how easy it is to fix, etc. We’re not necessarily just talking about standard daemons - plenty of places run their own custom services (web apps, microservices, whatever). The dependency tree can be complicated. Ideally once something is fixed everything that depends on it can restart immediately, rather than waiting for the next automatic attempt which could (with the exponential backoff that proponents typically propose) take quite a while. And personally I’d rather have my logs show only a single failure rather than several for one incident.

                                                                                                                    But, there are merits to having a super-simple system too, I can see that. It depends on your needs and preferences. I think both ways of handling things are valid; I prefer dependency management, but I’m not a fan of Systemd.

                                                                                                                    1. 4

                                                                                                                      I would rather have my computer do less dumb things over and over personally.

Why, though? What's the technical argument? daemontools (and I assume runit) sleep 1 second between retries, which for a computer is basically equivalent to being entirely idle. It seems to me that a lot of people just get a bad feeling about running something that will immediately crash.

                                                                                                                      Maybe; it depends on what went wrong precisely, how easy it is to fix, etc. We’re not necessarily just talking about standard daemons - plenty of places run their own custom services (web apps, microservices, whatever).

                                                                                                                      What’s the distinction here? Also, with microservices the dependency graph in the init system almost certainly doesn’t represent the dependency graph of the microservice as it’s likely talking to services on other machines.

                                                                                                                      I think both ways of handling things are valid

Yeah, I cannot provide an objective argument as to why one should prefer one to the other. I do think this is a nice little example of the slow creep of complexity in systems: adding a pinch of dependency management here because it feels right, a teaspoon of plugin system there because we want things to be extensible, and a deciliter of proxies everywhere because of microservices. I think it's worth taking a moment every now and again to step back and consider where we want to spend our complexity budget. I, personally, don't want to spend it on the init system, so I like the simple approach here (especially since with microservices the init dependency graph doesn't reflect the reality of the service anymore). But as you point out, positions may vary.

                                                                                                                      1. 2

                                                                                                                        Why, though? What’s the technical argument

Unnecessary wakeups, power use (especially on a laptop), noise in the logs from restarts that were always bound to fail, unnecessary delay before restart when restarting actually does become possible. None of these arguments are particularly strong, but they're not completely invalid either.

                                                                                                                        We’re not necessarily just talking about standard daemons …

                                                                                                                        What’s the distinction here?

                                                                                                                        I was trying to point out that we shouldn’t make too many generalisations about how services might behave when they have a dependency missing, nor assume that it is always ok just to let them fail (edit:) or that they will be easy to fix. There could be exceptions.

                                                                                                                    2. 2

                                                                                                                      Perhaps wandering off topic, but this is a good way to trigger even worse cascade failures.

                                                                                                                      eg, an RSS reader that falls back to polling every second if it gets something other than 200. I retire a URL, and now a million clients start pounding my server with a flood of traffic.

                                                                                                                      There are a number of local services (time, dns) which probably make some noise upon startup. It may not annoy you to have one computer misbehave, but the recipient of that noise may disagree.

                                                                                                                      In short, dumb systems are irresponsible.

                                                                                                                      1. 2

                                                                                                                        But what is someone supposed to do? I cannot force a million people using my RSS tool not to retry every second on failure. This is just the reality of running services. Not to mention all the other issues that come up with not being in a controlled environment and running something loose on the internet such as being DDoS’d.

                                                                                                                        1. 2

                                                                                                                          I think you are responsible if you are the one who puts the dumb loop in your code. If end users do something dumb, then that’s on them, but especially, especially, for failure cases where the user may not know or observe what happens until it’s too late, do not ship dangerous defaults. Most users will not change them.

                                                                                                                          1. 1

                                                                                                                            In this case we’re talking about init systems like daemontools and runit. I’m having trouble connecting what you’re saying to that.

                                                                                                                    3. 2

                                                                                                                      If those thing bother you, why run Linux at all? :P

                                                                                                                  2. 2

                                                                                                                    N.B. bouncing up and down ~= polling. Polling always intrinsically seems inferior to event based systems, but in practice much of your computer runs on polling perfectly fine and doesn’t eat your CPU. Example: USB keyboards and mice.

                                                                                                                    1. 2

                                                                                                                      USB keyboard/mouse polling doesn’t eat CPU because it isn’t done by the CPU. IIUC the USB controller generates an interrupt when data is received. I feel like this analogy isn’t a good one (regardless). Checking a USB device for a few bytes of data is nothing like (for example) starting a Java VM to host a web service which takes some time to read its config and load its caches only to then fall over because some dependency isn’t running.

                                                                                                                    2. 1

Sleep 1 and restart is the default. It is possible to have another behavior by adding a ./finish script alongside the ./run script.

                                                                                                                  3. 2

                                                                                                                    I really like runit on void. I do like the simplicity of SystemD target files from a package manager perspective, but I don’t like how systemd tries to do everything (consolekit/logind, mounting, xinet, etc.)

                                                                                                                    I wish it just did services and dependencies. Then it’d be easier to write other systemd implementations, with better tooling (I’m not a fan of systemctl or journalctl’s interfaces).

                                                                                                                    1. 1

                                                                                                                      You might like my own dinit (https://github.com/davmac314/dinit). It somewhat aims for that - handle services and dependencies, leave everything else to the pre-existing toolchain. It’s not quite finished but it’s becoming quite usable and I’ve been booting my system with it for some time now.

                                                                                                                  4. 4

                                                                                                                    I’d make the argument that in all circumstances where you need this you could probably run the command yourself. Thoughts?

                                                                                                                    It’s nice to be able to reload a well-written service without having to look up what mechanism it offers, if any.

                                                                                                                    1. 5

Runit's sv(8) has a reload command, which sends SIGHUP by default. The default behavior (for each control command) can be changed in runit by creating a small script under $service_name/control/$control_code.

                                                                                                                      https://man.voidlinux.eu/runsv#CUSTOMIZE_CONTROL
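A sketch of what such an override might look like, using nginx as an assumed example service; per runsv(8), exiting 0 tells runsv not to send the default signal itself:

    #!/bin/sh
    # /etc/sv/nginx/control/h — runs when `sv reload nginx` (i.e. hup) is issued
    nginx -t && nginx -s reload    # only reload if the config passes a syntax check
    exit 0                         # suppress runsv's default SIGHUP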

                                                                                                                      1. 1

                                                                                                                        I was thinking of the difference between ‘restart’ and ‘reload’.

                                                                                                                        Reload is only useful when:

                                                                                                                        • You can’t afford to lose a few seconds of service uptime (OR the service is ridiculously slow to load)
                                                                                                                        • AND the daemon supports an on-line reload functionality.

                                                                                                                        I have not been in environments where this is necessary, restart has always done me well. I assume that the primary use cases are high-uptime webservers and databases.

My thoughts were along the lines of: if you're running a high-uptime service, you probably don't mind the extra effort of writing 'killall -HUP nginx' instead of 'systemctl reload nginx'. In fact, I'd prefer to do that rather than take the risk of the init system re-interpreting a reload as something else, like reloading other services too, and bringing down my uptime.

                                                                                                                      2. 3

                                                                                                                        I hoped it would do something nice like redirect stdout and stderr of these supervised processes by default. Instead you manually have to create a new file and folder for each service that explicitly runs its own copy of the logger. Annoying. I hope I’ve been missing something.

                                                                                                                        I used to use something like logexec for that, to “wrap” the program inside the runit script, and send output to syslog. I agree it would be nice if it were builtin.

                                                                                                                      1. 1

I used to use pass, but I never liked gpg nor the leaking of the metadata. So I wrote a simple symmetric file encryption tool (based on monocypher, with argon2 password hashing), and I now store my passwords in a single file where even lines are sites and odd lines the matching passwords. I keep this password file in my (public) dotfiles for convenience too.
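Looking up an entry in that layout is a one-liner. A sketch, with a hypothetical `decrypt` command standing in for the tool described above:

    # print the line right after the matching site and copy it to the clipboard
    decrypt passwords.enc | awk -v site="example.com" '$0 == site { getline; print; exit }' | xclip -selection clipboard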

                                                                                                                        1. 1

What a pile of ugly hacks. The only real problem I see is in sharing the clipboard to/from ssh sessions; the rest is just self-inflicted by using programs with poor design. I could understand why a text editor would keep its own clipboard, but I don't see a good reason why tmux and zsh need to have one as well.

Also, the clipboard attack (and defense) seems overblown. Either you trust the source and can paste code happily, or you don't, and then you'd better be extremely careful about what you're executing from them. Bracketed paste is just not going to cut it; it's trivial to hide malicious commands in a shell script.

                                                                                                                          1. 7

                                                                                                                            What a pile of ugly hacks.

                                                                                                                            Isn’t this modern computing in a nutshell? ;)

                                                                                                                            The “poor design” is a historical artifact of using terminals. tmux and zsh have their own because there is no guarantee they are going to be used in an integrated environment: they may be that integrated environment. (Consider: running a machine without a GUI at all.)

                                                                                                                          1. 13

                                                                                                                            Some of the ‘alternatives’ are a bit more iffy than others. For any service that you don’t have the source to or can’t self-host (telegram, protonmail, duckduckgo, mega, macOS, siri to name a few), you’re essentially trusting them to uphold their privacy policy and to respect your data (now, but also hopefully in the future).

                                                                                                                            And in some cases it seems to me that it’s little more than fancy marketing capitalizing on privacy-conscious users.

                                                                                                                            1. 18

Telegram group messages aren't even e2e encrypted; Telegram has access to the full message content. The only thing Telegram is good at is marketing, because they've somehow convinced people they're a secure messenger.

                                                                                                                              1. 6

                                                                                                                                To be fair, they at least had the following going for them:

                                                                                                                                • no need to use a phone client, as compared to WhatsApp which deletes your account if you access it with an unofficial client. You can just buy a pay-as-you-go SIM card and receive your PIN with a normal cell-phone
                                                                                                                                • they had an option for e2e encrypted chats, with self deleting messages (there was this whole fuss with the creator offering a million dollars (?) if anyone could find a loophole)
                                                                                                                                • their clients were open source, and anyone could implement their API

                                                                                                                                Maybe there was more, but these were the arguments I could think of on the spot. I agree that it isn’t enough, but it’s not like their claim was unsubstantiated. It just so happened that other services started adopting some of Telegram’s features, making them lose their edge over the competition.

                                                                                                                                1. 4

                                                                                                                                  Also, the client UX is pretty solid imho. Bells and whistles are not too intrusive, and stuff works as you’d expect.

                                                                                                                                  Regarding its security: the FAQ discusses which security model applies to which chat mode.

                                                                                                                                2. 6

                                                                                                                                  I’m much less worried about the source code than I am about the incentives of the organization behind the software. YMMV, of course.

                                                                                                                                  1. 2

                                                                                                                                    Even if you have the source code, it’s difficult to verify that a service, or a shipped binary, actually matches that source code.
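
                                                                                                                                    One concrete way to attempt that check, assuming the project supports reproducible builds (the project name, tag, and paths below are placeholders):

                                                                                                                                        git clone https://example.org/project && cd project
                                                                                                                                        git checkout v1.2.3                # the tag the vendor claims to ship
                                                                                                                                        make release                       # build with the documented toolchain
                                                                                                                                        sha256sum build/project            # hash of the locally built binary
                                                                                                                                        sha256sum /usr/local/bin/project   # hash of the distributed binary

                                                                                                                                    If the build isn’t bit-for-bit reproducible, the hashes differ even when the vendor is honest, which is exactly the problem; and for a hosted service there is nothing to hash at all.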

                                                                                                                                    1. 2

                                                                                                                                      Yes, but then if anything feels wrong, it becomes possible to find an alternative provider running the same software.

                                                                                                                                      Still… it’s hard to beat the privacy of a hard drive at home accessed through SFTP.
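
                                                                                                                                      For what it’s worth, that setup needs nothing beyond a stock SSH server on the home machine; hostnames and paths here are placeholders:

                                                                                                                                          sftp me@home.example.net                                      # interactive browsing and transfers
                                                                                                                                          rsync -a ~/documents/ me@home.example.net:backup/documents/   # scripted sync over SSH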

                                                                                                                                    2. 2

                                                                                                                                      I was checking email SaaS providers last weekend, as the privacy policy changes at my current provider urge me not to renew my subscription when it ends. I found mostly the same kinds of offers, and to be honest none of them seemed convincing to me.

                                                                                                                                      For example, the Tutanota offer seemed questionable: they keep me so “secure” that the account can only be accessed through their own email client; no free/open protocol is available. They use a proprietary encryption scheme for my own benefit… OK, it is open sourced, but come on… I cannot export my data in a meaningful way to change providers. And what kind of encryption scheme is it? RSA-2048+AES, not the GPG/PGP “standards”, and the service is hosted in Germany, pretty much a surveillance state… This makes their claims questionable at the very least.
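
                                                                                                                                      To make the portability point concrete: with any provider that speaks an open protocol such as IMAP, exporting or migrating a mailbox is routine with standard tooling, whereas a client-locked service can’t be plugged into a pipeline like this at all (hostnames and accounts below are placeholders):

                                                                                                                                          imapsync --host1 mail.old-provider.example --user1 me@old-provider.example --passfile1 old.pass \
                                                                                                                                                   --host2 mail.new-provider.example --user2 me@new-provider.example --passfile2 new.pass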

                                                                                                                                    1. 4

                                                                                                                                      I know this thread is already filled with alternative workflows for this particular task, but this is one of the things I love the most about elvish. With Ctrl-L you can bring up an fzf-like ‘location’ menu that contains the directories you’ve been to, sorted by score (similar to Firefox’s frecency, which I love). The best part is that it’s essentially ‘zero-conf’: just using the shell builds up the results, and in my experience it works very well.

                                                                                                                                      Some will say that this is outside the scope of a shell, but it’s hard to reach this level of integration by combining external tools.

                                                                                                                                      1. 2

                                                                                                                                        Elvish is my favorite shell at the moment, and its surprisingly efficient directory navigation is only one of the reasons. A weighted directory history is kept automatically by the shell, so over time your most-used directories float to the top and are easily accessible in location mode. In this sense it’s not too different from AutoJump, but because it’s visual, you can see your selection live as you type. These days it doesn’t take more than Alt-L (I have remapped it from the default Ctrl-L that @nomto mentions) and a couple of keystrokes to select the directory I want. It works great.
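
                                                                                                                                        For anyone curious, a rough sketch of that remap in rc.elv; the exact binding syntax has shifted between Elvish releases, so treat this as an approximation rather than a drop-in line:

                                                                                                                                            # bind Alt-l in insert mode to the location-mode widget
                                                                                                                                            set edit:insert:binding[Alt-l] = { edit:location:start }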