1. 2

    In what other languages would it be possible?

    I guess everything with properties (functions disguised as fields) so D, C#, etc.

    Afaik not with C, C++, or Java.

    1. 26
      #define a (++i)
      int i = 0;
      
      if (a == 1 && a == 2 && a == 3)
          ....
      
      1. 1

        Isn’t that undefined behavior? Or is && a sequence point?

        1. 3

          && and || are sequence points. The right expression may never happen depending on the result of the left, so it would make things interesting if they weren’t.

      2. 10

        This is very easy to do in C++.

        1. 5

          You can also do it with Haskell.

          1. 3

            Doable with Java (override the equals method), and as an extension, with Clojure too:

            (deftype Anything []
              Object
              (equals [a b] true))
            
            (let [a (Anything.)]
              (when (and (= a 1) (= a 2) (= a 3))
                (println "Hello world!")))
            

            Try it!

            Or, inspired by @zge above:

            (let [== (fn [& _] true)
                  a 1]
              (and (== a 1) (== a 2) (== a 3)))
            
            1. 3

              Sort of. In Java, == doesn’t call the equals method, it just does a comparison for identity. So

               a.equals(1) && a.equals(2) && a.equals(3); 
              

              can be true, but never

               a == 1 && a == 2 && a == 3;
              
            2. 3

              perl can do it very simply

              my $i = 0;
              sub a {
              	return ++$i;
              }
              
              if (a == 1 && a == 2 && a == 3) {
              	print("true\n");
              }
              
              1. 2

                Here is a C# version.

                using System;
                
                namespace ContrivedExample
                {
                    public sealed class Miscreant
                    {
                        public static implicit operator Miscreant(int i) => new Miscreant();
                
                        public static bool operator ==(Miscreant left, Miscreant right) => true;
                
                        public static bool operator !=(Miscreant left, Miscreant right) => false;
                    }
                
                    internal static class Program
                    {
                        private static void Main(string[] args)
                        {
                            var a = new Miscreant();
                            bool broken = a == 1 && a == 2 && a == 3;
                            Console.WriteLine(broken);
                        }
                    }
                }
                
                1. 2

                  One of the ‘tricks’ where all a’s are different Unicode characters is possible with Python and Ruby. Probably in Golang too.
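
                  For example, a sketch of that homoglyph variant in Python 3 (the three identifiers below are Latin “a”, Cyrillic “а” and Latin alpha “ɑ”, three distinct names that can render almost identically; Python NFKC-normalizes identifiers, so some other lookalikes collapse back into plain “a”):

                  a = 1   # Latin a (U+0061)
                  а = 2   # Cyrillic а (U+0430)
                  ɑ = 3   # Latin small alpha (U+0251)
                  print(a == 1 and а == 2 and ɑ == 3)  # True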

                  1. 7

                    In Python, you can simply create a class with an __eq__ method and do whatever you want.
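
                    A minimal sketch of that:

                    class Anything:
                        # Any comparison succeeds, so a == 1, a == 2 and a == 3 are all True.
                        def __eq__(self, other):
                            return True

                    a = Anything()
                    print(a == 1 and a == 2 and a == 3)  # prints True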

                    1. 4

                      Likewise in Ruby, it’s trivial to implement:

                      a = Class.new do
                        def ==(*)
                          true
                        end
                      end.new
                      
                      a == 1 # => true
                      a == 2 # => true
                      a == 3 # => true
                      
                  2. 2

                    In Scheme you could either take the lazy route and do (note the invariance of the order or amount of the operations):

                    (let ((= (lambda (a b) #t))
                           (a 1))
                      (if (or (= 1 a) (= 2 a) (= 3 a))
                          "take that Aristotle!"))
                    

                    Or be more creative, and say

                    (let ((= (lambda (x _) (member #t (map (lambda (n) (= x n)) '(1 2 3)))))
                          (a 1))
                      (if (or (= 1 a) (= 2 a) (= 3 a))
                          "take that Aristotle!"))
                    

                    if you would want = to only mean “is equal to one, two or three”, instead of “everything is equal”, of course only within this let block. The same could also be done with eq?, obviously.

                    1. 1

                      Here is a Swift version that uses side effects in the definition of the == operator.

                      import Foundation
                      
                      internal final class Miscreant {
                          private var value = 0
                          public static func ==(lhs: Miscreant, rhs: Int) -> Bool {
                              lhs.value += 1
                              return lhs.value == rhs
                          }
                      }
                      
                      let a = Miscreant()
                      print(a == 1 && a == 2 && a == 3)
                      
                    1. 5

                      Oh boy, Devuan. The gift that keeps on giving. The importance of it is much better summarised here.

                      There are many things one can criticize systemd of, but if you assert that a whole bunch of fragile shell scripts were any better, less error prone, or easier to debug, you are so very wrong. There are so many reasons to bash systemd, but saying it is more fragile and harder to debug than the shell scripts is not one of those. If that’s the central part of your rant against systemd, I can’t take it seriously, I’m sorry.

                      If you think systemd disabling a service that restarts way too often is bad, you clearly haven’t seen runaway processes that keep crashing, and the constant restarts trashing the system. I have, and I’m glad I have guards that make this easier to avoid.

                      1. 4

                        I definitely prefer writing unit files to good ol’ sysv init scripts. OpenRC init scripts are slightly better, but still too complex. IMO, systemd took another approach (a declarative one) to writing init scripts. To me, it just adds a new bad solution to the mess init scripts are.

                        My issue with systemd is the lock-in it brings, be it on the tools you have to use, the dependencies of your system, or even on the actions you can take.

                        Yes it’s good to have a service disable itself if it fails to start too often (for example, when httpd fails to bind on a nonexistent IP).
                        But there is nothing more frustrating than hopelessly trying to start it manually once you’ve fixed the issue.

                        Systemd aims to be the smartest init system out there, and instead of accommodating itself to different setups (all being valid!), it enforces its “standard” way.

                        For example, I’ve been experimenting with LXC recently. This tech has been around for quite some time now, and even shares the same roots (cgroups) as systemd.
                        After some testing, and leaving the containers running for a few days, I discovered that the CPU, memory and diskio cgroups for my containers were simply removed, meaning my containers were now running without any limitation in terms of resources! I found out later that systemd “cleans” cgroups that are not created by itself, unless you start your tool as a service file with the “Delegate=yes” attribute (see ControlGroupInterface).

                        So the only way to avoid systemd is to use it. Awesome.

                        There are many other issues with its model, and I personally think the amount of problems it brings outweighs its benefits.

                      1. 3

                        @algernon@trunk.mad-scientist.club. Even left Twitter in favour of Masto, and I’m only posting there nowadays. Mostly mechanical keyboards, their firmware, and parenting stuff. With an odd rant here and there.

                        1. 2

                          (I should probably mention that I run @kbp@trunk.mad-scientist.club too, where I post keyboard (& keyboard related) pictures from time to time.)

                        1. 1

                          Is it realistic to build a phone and software with that money? Many have failed with more resources

                          1. 4

                            They have built laptops with much less crowdfunded money, so there’s a reasonable chance that they will ship the Librem 5 too. I imagine crowd funding is not the only source of money in their budget (they did spend a year and a half on research prior to launching the campaign, after all).

                            1. 2

                              Yeah, something definitely isn’t adding up. The $4M and $8M stretch goals seem like snakeoil too.

                              1. 2

                                I should say it should be possible, for the reasons Algernon put forth. For reference, these were the targets of previous Fairphone crowdfunders, which both successfully reached production and had follow-up batches made. They also both included a degree of OS development (Android customization) – Fairphone OS’s home screen / launcher is still the nicest I have used to date, and not just because it’s Google-free.

                                • 2012 (Fairphone 1): 1.5 million € (5000 pre-orders at 325 € each). Selling 5000 phones was enough for them to start producing their first batch of 20 000 phones.
                                • 2015 (Fairphone 2): 7.9 million € (15000 pre-orders at 529 € each). Not sure what the production batch size was. Cost breakdown.

                                (Edits: formatting)

                                1. 0

                                  It depends. Will they have a brick with a capacitive touch screen and some modems in it, that runs on battery power? Almost certainly yes. Will it be something that anybody wants (for some reasonable definition of “anybody”)? Almost certainly not.

                                1. 8

                                  Distributed hash tables are great, but they solve the easy problem - distributing static content. We need to figure out how to decentralise dynamic sites and their databases over unsecure links and to potentially malicious nodes.

                                  Oh, and we want to update them too, but not have to run a single point of failure server in order to do so.

                                  1. 6

                                    Distributed hash tables are great, but they solve the easy problem - distributing static content. We need to figure out how to decentralise dynamic sites and their databases …

                                    There’s an easy trick though: Your DHT works because (in pseudo-erlang):

                                    on_dht_store({K, V}) ->
                                      if K == crypto:hash(sha256, V) -> …
                                    

                                    however if you also implement:

                                    on_dht_store({K, {S, V}}) ->
                                      if crypto:verify(ecdsa, sha256, V, S, K) -> …
                                    

                                    then you have a DHT that can store dynamic content which is addressed using a public/private keypair: Simply sign all of your updates.
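
                                    Roughly, in Python (a sketch only: sha256 for the content-addressed case, ed25519 via the cryptography package for the signed case; the function names are made up for illustration):

                                    import hashlib
                                    from cryptography.exceptions import InvalidSignature
                                    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

                                    def accept_immutable(key: bytes, value: bytes) -> bool:
                                        # Content-addressed entry: the key must equal the hash of the value.
                                        return key == hashlib.sha256(value).digest()

                                    def accept_mutable(key: bytes, signature: bytes, value: bytes) -> bool:
                                        # Key-addressed entry: the key is a public key, and the value must be
                                        # signed by the matching private key, so only the keyholder can update it.
                                        try:
                                            Ed25519PublicKey.from_public_bytes(key).verify(signature, value)
                                            return True
                                        except InvalidSignature:
                                            return False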

                                    … over unsecure links and to potentially malicious nodes.

                                    Actually we don’t. The DHT “verifies” updates even if it accepts them from anywhere. There’s no need for something insecure like SSL/TLS in such a system because an object’s name is verified by its content. Malicious nodes can waste bandwidth (but can anyway), but they can’t corrupt the DHT.

                                    1. 2

                                      I think what’s meant by dynamic here is sites like, say Facebook, where what you see is rendered by a server specifically for your request. If we’re doing away with central servers, we should probably find a way to distribute their processing (and databases etc.) as well.

                                      1. 1

                                        one idea is to do computing client-side, where the use of a social network would include running some software that integrates static content signed by content producers. is there anything that would require a more complex approach?

                                        1. 2

                                          A shopping cart, and shopping in general comes to mind.

                                          1. 1

                                            Client simply publishes an order file on the DHT that is signed and encrypted to the store.

                                            We have been ordering things with PGP and email forever.

                                            1. 1

                                              that’s a good point, shopping is a special case of secure synchronous communication, which i agree is not well handled by distributed static content. the shopping cart could be kept track of client-side, but to place an order you’d need to establish a secure communication channel with the shop, just as you would if chatting with a friend. then your client would send a message to the store with the necessary order/payment information. but that seems to me like a simpler approach than thinking of it as distributing dynamic web pages.

                                              do we need any more primitives beyond (a) distributed signed static content and (b) secure connections for synchronous communication?

                                              1. 1

                                                There’s nothing about shopping that requires synchronous communication. People have been buying via mail for centuries.

                                            2. 1

                                              In general, how do you handle private data? Where does the client store my private messages? Even worse, who handles private messages arriving at me when my client isn’t online?

                                              1. 4

                                                Private messages are not an issue. In the simplest scheme, the message is encrypted so only you can read it. Then it is distributed to the network CAS/DHT/whatever just like any other data, where your client will find it once online.

                                                If you care about the details of a more elaborate scheme, check out freemail: https://github.com/freenet/plugin-Freemail/blob/master/docs/spec/spec.tex

                                                1. 2

                                                  i guess for email you’d still need some way to check for messages that are sent to you; you’d have to know where to look to get the message.

                                                  1. 3

                                                    That’s exactly what the signed, key-addressed storage that geocar proposed above solves.

                                                    The rest is implementation details. I.e. how exactly do people agree on the keys that will be inserted into the DHT by the people who want to send someone a message. In Freenet there’s a web of trust of identities that publish information and “puzzles” that are used to derive keys. Ultimately the recipient must poll for updates to these keys.

                                                    It’s not an unsolved problem; freemail, fms, and freetalk have been around for years.

                                                    1. 1

                                                      interesting. how do you think ipfs fits in with freenet? do they seek to solve the same problem? if ipfs is trying to solve a subset of freenet’s goals, why not just use what freenet already has?

                                          2. 2

                                            then you have a DHT that can store dynamic content which is addressed using a public/private keypair: Simply sign all of your updates.

                                            Now implement a Wikipedia clone using that. The distributed storage and the client side javascript are not enough to implement proper dynamic sites. We need distributed databases with schema versioning, access control, consistency guarantees, etc. It’s not something you can fake with a DHT.

                                            1. 1

                                              Now implement a Wikipedia clone using that.

                                              It’s been done.

                                              magnet:?xt=urn:btih:1379652cf48c825d71dd4a4d9c539f0268e82778

                                              1. 1

                                                Now implement a Wikipedia clone using that.

                                                It’s been done.

                                                magnet:?xt=urn:btih:1379652cf48c825d71dd4a4d9c539f0268e82778

                                                Is this supposed to be a joke? That’s just an XML dump of the most recent version of the actual English Wikipedia articles: https://torrentz2.eu/1379652cf48c825d71dd4a4d9c539f0268e82778

                                                1. 1

                                                  Whose index (I’ve demonstrated) can be stored on a DHT.

                                                  There’s nothing being faked here.

                                                  1. 0

                                                    Whose index (I’ve demonstrated) can be stored on a DHT.

                                                    Do you not understand the difference between a static dump and a dynamic site that allows collaborative editing?

                                                    1. 1

                                                      Do you?

                                                      Imagine Wikipedia in the year 10k spanning star systems: TCP is no longer an option. How does my distant descendant make a change?

                                                      The most obvious way is that they write a change proposal, perhaps as a structured JSON object, digitally signs it (e.g. with PGP), and then publishes the proposal. This is easy because everyone has the private keys for Wikipedia10k.

                                                      Any Wikipedia “reader” thus polls the DHT for items marked as proposal, and while there’s a lot of guff out there, any eventual consistency algorithm they like can be used to “locally” reconcile what their potential Wikipedia can look like.

                                                      Perhaps the missing trick is that in 2017, people still remember what it was like to have limited storage and so there’s an impulse to reconcile early to save those precious spinning platters of rust.

                                                      1. 1

                                                        Imagine Wikipedia in the year 10k spanning star systems

                                                        Imagine wanting to replace the Internet with a system that only supports static assets in a distributed key-value storage.

                                                        Any Wikipedia “reader” thus polls the DHT for items marked as proposal, and while there’s a lot of guff out there, any eventual consistency algorithm they like can be used to “locally” reconcile what their potential Wikipedia can look like.

                                                        Yeah, good luck serving millions of diffs per page and letting the end user figure out which ones should be assembled based on cryptographic signing and timestamps.

                                                        1. 1

                                                          Imagine wanting to replace the Internet with a system that only supports static assets in a distributed key-value storage.

                                                          There’s only static assets; HTTP over TCP only sends static assets: You put a static request someplace and you get a static response someplace else. A DHT is no different in this respect. Client publishes their request, server publishes their response.

                                                          The cool thing about a DHT is that it’s much more durable than TCP.

                                                          Yeah, good luck serving millions of diffs per page and letting the end user figure out which ones should be assembled based on cryptographic signing and timestamps.

                                                          My laptop is fast enough to do it for a “small” site like Wikipedia which deals with only about ten updates per second – a quick check shows I can cpu-hash about 2GB per minute cold so that means each “page” could be around 3MB. If I’m only interested in part of Wikipedia (for example, the English-language pages) then it’s even less work.

                                                          In the future computing power will be cheap enough to make any current limitations that actually exist, moot.

                                                          1. 0

                                                            In the future computing power will be cheap enough to make any current limitations that actually exist, moot.

                                                            This reminds me of http://wiki.c2.com/?SufficientlySmartCompiler

                                                            Design for the technology at hand, not for your dreams of what the future might bring.

                                          3. 4

                                          That’s a good point, though in the meantime, IPFS could be a great solution for the various static website generators. I think I’ll try making my hakyll website accessible on IPFS.

                                          1. 1

                                            Perhaps it’s just me, but following the link leads to a page requiring login.

                                            1. 1

                                              It might be your computer, I am not prompted to log in. Here’s what the Apache Software Foundation’s legal page says:

                                              Rocks DB License
                                              The Rocks DB license includes a specification of a PATENTS file that passes along risk to downstream consumers of our software imbalanced in favor of the licensor, not the licensee, thereby violating our Apache legal policy of being a universal donor. The terms of the Rocks DB license are not a subset of those found in the ALv2, and they cannot be sublicensed as ALv2.
                                              Facebook BSD+Patents license
                                              The Facebook BSD+Patents License is the more industry standard term for the Rocks DB license and its variants. The same conditions for Rocks DB apply to the use of the Facebook BSD+Patents license in ASF products.

                                              [Edited to remove the excerpt from the parent post’s JIRA page – I only quoted half of the letter, and on second thoughts I’m not sure whether the quote succeeded in capturing the essence, or was instead incomplete in an unhelpful or misdirecting way. Don’t have the time right now to do this right, either. Sorry.]

                                            1. 9

                                              Only a well-trained ear might be able to hear the difference between a generic keyboard and the IBM Model F keyboard that was popular in the 1980s.

                                              The first sentence clearly reveals that the author has never heard a Model F.

                                              1. 5

                                                For those who don’t know the difference, this review has a side-by-side comparison, starting at around 5:28.

                                              1. 4

                                                This is a gem, described originally as “The translator pretends to be a Solaris audio device and acts as a rump kernel client converting I/O to the NetBSD audio device.”

                                                This is one twisted, yet clever hack.

                                                1. 10

                                                  I’ve read many people say that dvorak was fine for the vim movement keys.

                                                  And as for the keycaps, I’m not sure I see the problem, why not just use a blank keyboard and switch at will?

                                                  1. 5

                                                    Although I am in theory capable of typing without looking at the keys, in practice I do a lot of key stabbing as well. And a lot of one-handed typing too. I’ve practiced this some in the dark, and it’s no fun. Definitely not interested in a blank keyboard.

                                                    Anyway, same experience as the author. Learned dvorak because there were people who didn’t know dvorak, used it for a while, then found I had trouble using a qwerty keyboard. Now I just use qwerty full time, but go back and practice dvorak for a week or so at a time to maintain the skill in case I ever have a compelling reason to switch.

                                                    I like dvorak for English, but find it substantially more annoying for code. And it’s a disaster for passwords. I usually set up hotkeys so I can quickly change on the fly depending on what and how much I’m typing.

                                                    1. 2

                                                      I love Dvorak for code! Having -_ and =+ much closer is so convenient.

                                                      1. 1

                                                        More than { [ ] }?

                                                        1. 2

                                                          For sure, think about where it’s now positioned. Typing …) {… is so easy when ) and { are side by side. And for code that doesn’t use egyptian braces, )<enter>{ is easier for me too. When I hit enter with my pinky, and follow up with { with my middle finger, that’s natural. But trying to squeeze my middle finger into the QWERTY location for { while my pinky is still on enter totally sucks.

                                                          Meanwhile -_=+ are all typed in line with other words (i.e. variable names). And - and _ are frequently part of filenames and variables, so it’s great that they’re closest to the letter keys.

                                                      2. 2

                                                        I like dvorak for English, but find it substantially more annoying for code.

                                                        Exactly! If I were a novelist I would probably just continue using Dvorak.

                                                        1. 2

                                                          in practice I do a lot of key stabbing as well

                                                          I recently bought a laptop with a Swiss(?) keyboard layout. (It really is a monstrosity with up to five characters on one key). I thought I wouldn’t need to look at the keys at all and I could just use my preferred keymap, but I’ve been caught out a few times. I’m just about used to it now, though.

                                                        2. 4

                                                          When I am typing commands into a production machine I feel like it is only responsible of me to use a properly labelled keyboard.

                                                          This is really important when you’re on your last ssh password/smartcard PIN attempt, because you can go slow and look at what you’re doing.

                                                          1. 5

                                                        I got a blank keyboard, and I must admit that I still look at it from time to time, like for numbers, or b/v, u/i… I only do so when I start thinking “OMG this is a password, don’t get it wrong!”

                                                        Having a blank keyboard doesn’t stop you from looking at your hands. It only disappoints you when you do.

                                                            1. 5

                                                              As a happy Dvorak user I’d have to say there are better fixes to that problem. Copy it from your password manager? (You use one, right?) Type it into somewhere else, and cut and paste? Or use the keyboard viewer? (Ok that one is macOS specific, perhaps.)

                                                              Specifically re: “typing commands into prod machines” I don’t buy the argument. Commands generally don’t take effect until you hit ENTER and until then you’ve got all the time you need to review what you’ve typed in. Some programs do prompt for yes/no without waiting for Enter but it’s not like Dvorak or Qwerty’s y or n keys have a common location in either layout, so I don’t really see that as an issue either.

                                                              1. 2

                                                                Yes, the “production machines” argument is a strange one. I’d imagine it would only be an issue on a Windows system (if you’re logging in via ssh it’s immaterial) and then it would be fairly obvious quite quickly that the keyboard map is wrong. And if the keyboard map is wrong in the Dvorak vs QWERTY sense you’d quickly realise you’re typing gibberish. Or so I’d think?

                                                                Ignoring the whole issue of “you shouldn’t be logging in to a production machine to make changes”…

                                                              2. 1

                                                                In this case, I find the homing keys, reorient myself, and type whatever I need to type. (Or just use a password manager & paste). Haven’t mistyped a password in years, and I’m using Dvorak with blanks.

                                                                Homing keys are there for a reason.

                                                                Labels are only necessary when you don’t touch type. If you do, they serve no useful purpose.

                                                              3. 2

                                                                I’ve read many people say that dvorak was fine for the vim movement keys.

                                                      Dvorak is fine for Vim movement keys, but not nearly as nice as Qwerty.

                                                                And as for the keycaps, I’m not sure I see the problem, why not just use a blank keyboard and switch at will?

                                                                The problem is, when I’m entering a password or bash command sometimes I want to slow down and actually look at the keyboard while I’m typing. In sensitive production settings raw speed isn’t nearly as valuable as accuracy. A blank keyboard would not solve this problem :)

                                                                1. 6

                                                        Dvorak is fine for Vim movement keys, but not nearly as nice as Qwerty.

                                                                  They actually work better with Dvorak for me, because the grouping feels more logical than on qwerty to me.

                                                                  1. 1

                                                                    Likewise: vertical and horizontal movement keys separated onto different hands rather than all on the one (and interspersed) works much better for me.

                                                                  2. 2

                                                                    I hate vim movement in QWERTY. I think it’s because I’m left handed, and Dvorak puts up/down on my left pointer and middle finger. For me, it’s really hard to manipulate all four directions with my right hand quickly.

                                                                    1. 1

                                                                      Would it make sense to use AOEU for motion then (or HTNS for right handed people)? I guess doing so may open a whole can of remapping worms though?

                                                                      That won’t help with apps that don’t support remapping but which support vi-style motion though (as they’ll expect you to hit HJKL)…

                                                                1. 1

                                                        Are there any advantages that Hurd has in contrast to e.g. Linux or FreeBSD / OpenBSD (or even OpenIndiana)? For now, I guess they are only playing catch up, but maybe Hurd has some unique features.

                                                                  1. 3

                                                                    Hurd has translators, a bit like FUSE, but quite a bit more powerful.

                                                          It also has the concept of a UID/GID-less user, and a way to elevate privileges while starting from none. That is, you can run a server without a user, with pretty much no privileges, and only elevate your privileges once that is required, for as short a time as possible. This - in theory - is better than dropping privileges, in my opinion.

                                                                    Very few things make use of it, though.

                                                                  1. 5

                                                                    An experienced keyboard user with a keyboard is much faster than an experienced mouse user with a mouse. A non-trained user might be faster with a mouse than a keyboard, because fast keyboard usage requires more training.

                                                                    1. 7

                                                                      Like the article says too, this depends largely on what you wish to accomplish. There are tasks which will almost always be faster with a mouse. Aiming in FPS games comes to mind as a spectacular example.

                                                                      1. 1

                                                                        FPS gaming is just one specialized example. For general computer use (web surfing, programming, email, etc.), experienced keyboard users are much faster than experienced mouse users, especially when using tools that are made for keyboard enthusiasts:

                                                                        • vim
                                                                        • xmonad
                                                                        • tmux
                                                                        • pentadactyl
                                                                        • etc.
                                                                        1. 6

                                                                          This sounds a lot like the point the post was making. People who like keyboards think they’re faster with keyboards even when they’re slower.

                                                                          1. 4

                                                                            Well, how many professional gamers use any of those? Or, to give another example: a lot of things in Photoshop or the GIMP are going to be much faster - and much easier - with a mouse, than with a keyboard. Some other things in both are going to be faster with a keyboard.

                                                                            Point still is: the keyboard is not always faster. For a lot of things, it is. For a lot of other things, it is not. And how much of these each person uses varies from person to person.

                                                                            1. 4

                                                                              I’m not a gamer or designer, but some things in those programs are probably faster with a mouse. GIMP and Inkscape are faster with knowledge of keyboard though. It’s much faster to hit ‘o’ than to grab a mouse and find the eyedropper tool in GIMP, or ctrl-shift-f in Inkscape to open the fill settings.

                                                                              Web surfing, text editing, window management, and other common tasks are unambiguously faster with a keyboard (by a trained keyboard user). It’s worth the time investment.

                                                                              1. 4

                                                                                Shortcuts are faster with the keyboard, yes. But once you selected the eyedropper tool, will you use the mouse, or the keyboard to do something with it?

                                                                                It is worth investing into using the keyboard efficiently - I never said otherwise. But it will never be unambiguously faster for everything. It will always be “it depends”.

                                                                                1. 1

                                                                                  Sorry, I might not have explained my point well. I don’t mean literally for every single hand movement – only that someone who is experienced with advanced keyboard control will be much faster than an advanced mouse user who doesn’t use many keyboard commands, all other things being equal.

                                                                      1. 29

                                                                        Hmm. I have just spent a week or two getting my mind around systemd, so I will add a few comments….

                                                                        • Systemd is a Big step forward on sysv init and even a good step forward on upstart. Please don’t throw the baby out with the bathwater in trying to achieve what seems to be mostly political rather than technical aims, i.e.:

                                                                        ** The degree of parallelism achieved by systemd does very good things to start up times. (Yes, that is a critical parameter, especially in the embedded world)

                                                                        ** Socket activation is very nifty / useful.

                                                                        ** There is a lot of learning that has gone into things like dbus: https://lwn.net/Articles/641277/ (While there are things I really don’t like about dbus (cough, xml, cough)…. I respect the hard-earned experience encoded into it.)

                                                                        ** Systemd’s use of cgroups is actually a very very nifty feature in creating rock solid systems, systems that don’t go sluggish because a subsystem is rogue or leaky. (But I think we are all just learning to use it properly)

                                                                        ** The thought and effort around “playing nice” with distro packaging systems via “drop in” directories is valuable. Yup, it adds complication, but packaging is real and you need a solution.

                                                                        ** The thought and complication around generators to aid the transition from sysv to systemd is also vital. Nobody can upgrade tens of thousands of packages in one go.

                                                                        TL;DR; Systemd actually gives us a lot of very very useful and important stuff. Any competing system with the faintest hope of wide adoption has a pretty high bar to meet.

                                                                        The biggest sort of “WAT!?” moment for me around systemd is that it creates its own entirely new language… one that is remarkably weaker even than shell. And occasionally you find yourself explicitly invoking, yuck, shell, to get stuff done.

                                                                        Personally I would have preferred it to be something like guile with some addons / helper macros.

                                                                        1. 15

                                                                          I actually agree with most of what you’ve said here, Systemd is definitely trying to solve some real problems and I fully acknowledge that. The main problem I have with Systemd is the way it just subsumes so much and it’s pretty much all-or-nothing; combined with that, people do experience real problems with it and I personally believe its design is too complicated, especially for such an essential part of the system. I’ll talk about it a bit more in my blog (along with lots of other things) at some stage, but in general the features you list are good features and I hope to have Dinit support eg socket activation and cgroups (though as an optional rather than mandatory feature). On the other hand I am dead-set that there will never be a dbus-connection in the PID 1 process nor any XML-based protocol, and I’m already thinking about separating the PID 1 process from the service manager, etc.

                                                                          1. 9

                                                                            Please stick with human-readable logs too. :)

                                                                            1. 6

                                                                              Please don’t. It is a lot easier to turn machine-readable / binary logs to human-readable than the other way around, and machines will be processing and reading logs a lot more than humans.

                                                                              1. 4

                                                                                Human-readable doesn’t mean freeform. It can be machine-readable too. At my last company, we logged everything as date, KV pairs, and only then freeform text. It had a natural mapping to JSON and protocol buffers after that.

                                                                                https://github.com/uber-go/zap This isn’t what we used, but the general idea.
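
                                                                                Not the tooling we used either, just a minimal Python sketch of that record shape (timestamp, then key/value pairs, then the freeform message last), with made-up field names, to show how it maps onto one JSON object per line:

                                                                                import json
                                                                                import logging
                                                                                from datetime import datetime, timezone

                                                                                class JsonLineFormatter(logging.Formatter):
                                                                                    def format(self, record):
                                                                                        entry = {
                                                                                            "ts": datetime.now(timezone.utc).isoformat(),
                                                                                            "level": record.levelname,
                                                                                            **getattr(record, "kv", {}),   # structured key/value pairs, if any
                                                                                            "msg": record.getMessage(),    # freeform text comes last
                                                                                        }
                                                                                        return json.dumps(entry)

                                                                                handler = logging.StreamHandler()
                                                                                handler.setFormatter(JsonLineFormatter())
                                                                                log = logging.getLogger("example")
                                                                                log.addHandler(handler)
                                                                                log.setLevel(logging.INFO)

                                                                                log.info("user logged in", extra={"kv": {"user": "alice", "ip": "203.0.113.7"}})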

                                                                                1. 3

                                                                                  Yeah, you can do that. But then it becomes quite a bit harder to sign, encrypt, or index logs. I still maintain that going binary->human readable is more efficient, and practical, as long as computers do more processing on the logs than humans do.

                                                                                  Mind you, I’m talking about storage. The logs should be reasonably easy for a human to process when emitted, and a mapping to a human-readable format is desirable. When stored, human-readability is, in my opinion, a mistake.

                                                                                  1. 2

                                                                                    You make good points. It’s funny, because I advocated hard for binary logs (and indeed stored many logs as protocol buffers on Kafka; only on the filesystem was it text) from systems at $dayjob-1, but when it comes to my own Linux system it’s a little harder for me to swallow. I suppose I’m looking at it from the perspective of an interactive user and not a fleet of Linux machines; on my own computer I like to be able to open my logs as standard text without needing to pipe it through a utility.

                                                                                    I’ll concede the point though: binary logs do make a lot more sense as building blocks if they’re done right and have sufficient metadata to be better than the machine-readable text format. If it’s a binary log of just date + facility + level + text description, it may as well have been a formatted text log.

                                                                              2. 2

                                                                                  So long as they accumulate the same amount of useful info…. and are machine parsable, sure.

                                                                                journalctl spits out human readable or json or whatever.

                                                                                  I suspect achieving near the same information density / speed as journalctl with plain old ascii will be a hard ask.

                                                                                In my view I want both. Human and machine readable… how that is done is an implementation detail.

                                                                              3. 4

                                                                                I’m sort of curious about which “subsume everything” bits are hurting you in particular.

                                                                                  For example, subsuming the business of mounting is fairly necessary since these days the order in which things get mounted relative to the order in which various services are run is pretty inexorable.

                                                                                I have doubts about how much of the networkd / resolved should be part of systemd…. except something that collaborates with the startup infrastructure is required. ie. I suspect your choices in dinit will be slightly harsh…. modding dinit to play nice with existing network managers or modding existing network managers to play nice with dinit or subsuming the function of network management or leaving fairly vital chunks of functionality undone and undoable.

                                                                                Especially in the world of hot plug devices and mobile data….. things get really really hairy.

                                                                                I am dead-set that there will never be a dbus-connection in the PID 1

                                                                                You still need a secure way of communicating with pid 1….

                                                                                That said, systemd process itself could perhaps be decomposed into more processes than it currently is.

                                                                                  However as I hinted…. there are things that dbus gives you, like bounded trust between untrusted and untrusting and untrustworthy programs, that is hard to achieve without reimplementing large chunks of dbus….

                                                                                …and then going through the long and painful process of learning from your mistakes that dbus has already gone through.

                                                                                Yes, I truly hate xml in there…. but you still need some security sensitive serialization mechanism in there.

                                                                                  ie. Whatever framework you choose will still need to enforce the syntactic contract of the interface so that an untrusted and untrustworthy program cannot achieve a denial of service or escalation of privilege through abuse of a serialized interface.

                                                                                  There are other things out there that do that (eg. protobuffers, cap’n’proto, …), but then you’re still in a world where desktops and bluetooth and network managers and …….. need to be rewritten to use the new mechanism.

                                                                                1. 3

                                                                                    For example, subsuming the business of mounting is fairly necessary since these days the order in which things get mounted relative to the order in which various services are run is pretty inexorable.

                                                                                  systemd’s handling of mounting is beyond broken. It’s impossible to get bind mounts to work successfully on boot, nfs mounts don’t work on boot unless you make systemd handle it with autofs and sacrifice a goat, and last week I had a broken mount that couldn’t be fixed. umount said there were open files, lsof said none were open. Had to reboot because killing systemd would kill the box anyway.

                                                                                  It doesn’t even start MySQL reliably on boot either. Systemd is broken. Stop defending it.

                                                                                  1. 3

                                                                                      For example, subsuming the business of mounting is fairly necessary since these days the order in which things get mounted relative to the order in which various services are run is pretty inexorable.

                                                                                    There are a growing number of virtual filesystems that Linux systems expect or need to be mounted for full operation - /proc, /dev, /sys and cgroups all have their own - but these can all be mounted in the traditional way: by running ‘/bin/mount’ from a service. And because it’s a service, dependencies on it can be expressed. What Systemd does is understand the natural ordering imposed by mount paths as implicit dependencies between mount units, which is all well and good but which could also be expressed explicitly in service descriptions, either manually (how often do you really change your mount hierarchies…) or via an external tool. It doesn’t need to be part of the init system directly.

                                                                                    (Is it bad that systemd can do this? Not really; it is a feature. On the other hand, systemd’s complexity has I feel already gotten out of hand. Also, is this particular feature really giving that much real-world benefit? I’m not convinced).

                                                                                    I suspect your choices in dinit will be slightly harsh…. modding dinit to play nice with existing network managers or modding existing network managers to play nice with dinit

                                                                                    At this stage I want to believe there is another option: delegating Systemd API implementation to another daemon (which communicates with Dinit if and as it needs to). Of course such a daemon could be considered as part of Dinit anyway, so it’s a fine distinction - but I want to keep the lines between the components much clearer (than I feel they are in Systemd).

                                                                                      I believe in many cases the services provided by parts of Systemd don’t actually need to be tied to the init system. Case in point, elogind has extracted the logind functionality from systemd and made it systemd-independent. Similarly there’s eudev, the Gentoo fork of the udev device node management daemon which extracts it from systemd.

                                                                                    You still need a secure way of communicating with pid 1…

                                                                                    Right now, that’s via root-only unix socket, and I’d like to keep it that way. The moment unprivileged processes can talk to a privileged process, you have to worry about protocol flaws a lot more. The current protocol is compact and simple. More complicated behavior could be wrapped in another daemon with a more complex API, if necessary, but again, the boundary lines (is this init? is this service management? or is this something else?) can be kept clearer, I feel.

                                                                                      Putting it another way, a lot of the parts of Systemd that required a user-accessible API just won’t be part of Dinit itself: they’ll be part of an optional package that communicates with Dinit only if it needs to, and only by a simple internal protocol. That way, boundaries between components are more clear, and problems (whether bugs or configuration issues) are easier to localise and resolve.

                                                                                  2. 1

                                                                                    On the other hand I am dead-set that there will never be a dbus-connection in the PID 1 process nor any XML-based protocol

                                                                                    Comments like this make me wonder what you actually know about D-Bus and what you think it uses XML for.

                                                                                    1. 2

                                                                                      I suppose you are hinting that I’ve somehow claimed D-Bus is/uses an XML-based protocol? Read the statement again…

                                                                                      1. 1

                                                                                        It certainly sounded like it anyway.

                                                                                  3. 8

                                                                                    Systemd solves (or attempts to) some actually existing problems, yes. It solves them from a purely Dev(Ops) perspective while completely ignoring that we use Linux-based systems in big part for how flexible they are. Systemd is a very big step towards making systems we use less transparent and simple in design. Thus, less flexible.

                                                                                    And if you say that’s the point: systems need to get more uniform and less unique!.. then sure. I very decidedly don’t want to work in an industry that cripples itself like that.

                                                                                    1. 8

                                                                                      Hmm. I strongly disagree with that.

                                                                                      As a simple example, in sysv your only “targets” were the 7 runlevels. Pretty crude.

                                                                                      Alas the sysv simplicity came at a huge cost. Slow boots since it was hard to parallelize, and Moore’s law has stopped giving us more clock cycles… it only gives us more cores these days.

                                                                                  On my ubuntu xenial box I get… locate target | grep -E '^/(run|etc|lib)/.*.target$' | grep -v wants | wc, which gives 61 61 2249 (i.e. 61 target files).

                                                                                      (Including the 7 runlevels for backwards compatibility)

                                                                                      i.e. much more flexibility.

                                                                                      You have much more flexibility than you ever had in sysv… and if you need to drop into the full flexibility of shell (or whatever), nothing is stopping you.

                                                                                      It’s actually very transparent… the documentation is a darn sight better than sysv init’s ever was, and the source code is pretty readable. (Although at the user level I find I can get by mostly by looking at the .service files and guessing; they’re a lot easier to read than a sysv init script.)
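
                                                                                      For a concrete (entirely made-up) illustration of what I mean, a whole service definition can be as small as this; every name and path below is invented:

                                                                                          # Sketch only: a hypothetical service, written via a heredoc (run as root).
                                                                                          cat > /etc/systemd/system/example.service <<'EOF'
                                                                                          [Unit]
                                                                                          Description=Example daemon

                                                                                          [Service]
                                                                                          ExecStart=/usr/local/bin/example-daemon --foreground
                                                                                          Restart=on-failure

                                                                                          [Install]
                                                                                          WantedBy=multi-user.target
                                                                                          EOF

                                                                                          systemctl daemon-reload
                                                                                          systemctl enable --now example.service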

                                                                                      So my actual experience of wrangling systemd on a daily basis is that it is more transparent and flexible than what we had before…

                                                                                      A bunch of the complexity is due to the need to transition from sysv/upstart to systemd.

                                                                                      I can see on my box a huge amount of crud that can just be deleted once everything is converted.

                                                                                      All the serious “Huh!? WTF!?” moments in the last few weeks have been around the mishmash of old and new.

                                                                                      Seriously. It is simpler.

                                                                                      That said, could dinit be even simpler?

                                                                                      I don’t know.

                                                                                      As I say, systemd has invented its own quarter-arsed language for the .unit files. Maybe if dinit used a real language… (I call shell a half-arsed language.)

                                                                                      1. 11

                                                                                        You are comparing systemd to “sysv”. That’s a false dichotomy that was very aggressively pushed into every conversation about systemd. No. Those are not the only two choices.

                                                                                        BTW, sysvinit is a dumb-ish init that can spawn processes and watch over them. We’ve been using it as more or less just a dumb init for the last decade or so. What you’re comparing systemd to is an amorphous, distro-specific blob of scripts, wrappers and helpers that actually did the work. Initscripts != sysvinit. Insserv != sysvinit.

                                                                                        1. 4

                                                                                          Ok, fair cop.

                                                                                          I was using sysv as a hand-waving reference to the various flavours of init and /etc/init.d scripts, including upstart, that Debian / Ubuntu had been using prior to systemd.

                                                                                          My point is not to say that systemd is the greatest and the end point of creation… my point is that it’s a substantial advance on what went before (in yocto / ubuntu / debian land; other distros may have something better that I haven’t experienced).

                                                                                          And I wasn’t seeing anything in the dinit aims and goals list yet that was making me say, at the purely technical level, that the next step is on its way.

                                                                                    2. 3

                                                                                      Personally I would have preferred it to be something like guile with some addons / helper macros.

                                                                                      So, https://www.gnu.org/software/shepherd/ ?

                                                                                      Ah, no, you probably meant just the language within systemd. But adding systemd-like functionality to The Shepherd would do that. I think running things in containers is in, or will be, but maybe The Shepherd is too tangled up in GuixSD for many people’s use cases.

                                                                                    1. 2

                                                                                      Very small nitpick: this only works if /bin/sh is bash (or something that supports $((...))). Works like a charm after telling it to run with bash.

                                                                                      I missed the original sct thread, and redshift felt heavy, so this little script + sct will be of great use to me. Thanks!

                                                                                      1. 4

                                                                                        Piling on the nitpick wagon, error messages should go to stderr rather than stdout:

                                                                                        - echo "Please install sct!"
                                                                                        + echo >&2 "Please install sct!"
                                                                                          exit 1;
                                                                                        
                                                                                        1. 2

                                                                                          FWIW, dash handles this properly as well:

                                                                                          % file /bin/sh
                                                                                          /bin/sh: symbolic link to dash
                                                                                          % sh
                                                                                          $ echo $((1440 - 720))
                                                                                          720
                                                                                          

                                                                                          OpenBSD’s ksh, same goes for zsh:

                                                                                          $ echo $((1440 - 720))
                                                                                          720
                                                                                          

                                                                                          So I guess on the majority of systems you’d run X on, this won’t be an issue ;)

                                                                                          Now, my own nitpick: I’ve run shellcheck and fixed some warnings:

                                                                                          #!/bin/sh
                                                                                          
                                                                                          # Copyright (c) 2017 Aaron Bieber <aaron@bolddaemon.com>
                                                                                          #
                                                                                          # Permission to use, copy, modify, and distribute this software for any
                                                                                          # purpose with or without fee is hereby granted, provided that the above
                                                                                          # copyright notice and this permission notice appear in all copies.
                                                                                          #
                                                                                          # THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
                                                                                          # WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
                                                                                          # MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
                                                                                          # ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
                                                                                          # WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
                                                                                          # ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
                                                                                          # OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
                                                                                          
                                                                                          S=4500
                                                                                          INC=2
                                                                                          SCT=$(which sct)
                                                                                          
                                                                                          if [ ! -e "$SCT" ]; then
                                                                                              echo "Please install sct!"
                                                                                              exit 1;
                                                                                          fi
                                                                                          
                                                                                          setHM() {
                                                                                              H=$(date +"%H" | sed -e 's/^0//')
                                                                                              M=$(date +"%M" | sed -e 's/^0//')
                                                                                              HM=$((H*60 + M))
                                                                                          }
                                                                                          
                                                                                          setHM
                                                                                          
                                                                                          if [ $HM -gt 720 ]; then # t > 12:00
                                                                                              for _ in $(jot $((1440 - HM)));  do
                                                                                                  S=$((S+INC))
                                                                                              done
                                                                                          else # t <= 12:00
                                                                                              for _ in $(jot $HM); do
                                                                                                  S=$((S+INC))
                                                                                              done
                                                                                          fi
                                                                                          
                                                                                          while true; do
                                                                                              setHM
                                                                                          
                                                                                              if [ $HM -gt 720 ]; then
                                                                                                  S=$((S-INC))
                                                                                              else
                                                                                                  S=$((S+INC))
                                                                                              fi
                                                                                          
                                                                                              $SCT $S
                                                                                          
                                                                                              sleep 60
                                                                                          done
                                                                                          

                                                                                          Also note that on linux systems, the jot binary is often not available (I just discovered it today, it’s an OpenBSD utility). At least on void linux one can install it with the outils package.

                                                                                          Either way, very useful - no need to run sct by hand now :)

                                                                                          1. 2

                                                                                            Odd, my dash at home barfed on the script. I now checked again, and it errors out on function setHM { .. }: it wants setHM () { ... } instead.

                                                                                            Not quite sure why I remember $((...)) being a problem…

                                                                                            1. 1

                                                                                              function, even though technically a bashism, originated in ksh.

                                                                                              In terms of $((...)) being a problem, you are most likely referring to ((...)) which is indeed a bashism. Alternatively, you might have come across ++ or -- in $((...)), which are not required by POSIX.
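
                                                                                              A small sketch of the difference, runnable under dash (the setHM name is just borrowed from the script above; the numbers are arbitrary):

                                                                                                  #!/bin/sh
                                                                                                  # Portable: arithmetic expansion and the name() form both work in dash.
                                                                                                  x=5
                                                                                                  setHM() { echo "$((x * 2))"; }    # prints 10
                                                                                                  setHM

                                                                                                  # Not portable (dash rejects these):
                                                                                                  #   function setHM { ...; }       # 'function' keyword (ksh/bash)
                                                                                                  #   (( x++ ))                     # arithmetic command and ++/-- (bash/ksh)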

                                                                                            2. 1

                                                                                              for _ in $(jot $((1440 - HM))); do

                                                                                              huh, I had no idea _ was a thing, neat!

                                                                                              1. 1

                                                                                                $_ holds the last argument of the previously run command. Overwriting it does no harm here and in similar places, so I occasionally make use of it (both to silence shellcheck and to signal that the value won’t be used). It’s possible, though, that some folks might object :)
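
                                                                                                For example (interactive bash here; how reliably $_ is maintained varies a little between shells):

                                                                                                    mkdir /tmp/scratch && cd "$_"    # $_ expands to the previous command's last argument

                                                                                                    # As a loop variable, _ is just a conventional "don't care" name:
                                                                                                    for _ in 1 2 3; do
                                                                                                        printf 'tick\n'
                                                                                                    done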

                                                                                              2. 1

                                                                                                Also note that on linux systems, the jot binary is often not available (I just discovered it today, it’s an OpenBSD utility).

                                                                                                You can use seq instead of jot on linux.
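
                                                                                                e.g., assuming GNU coreutils (or busybox) seq is available:

                                                                                                    # jot N and seq N both count 1..N:
                                                                                                    jot 3    # BSD / outils: prints 1 2 3, one per line
                                                                                                    seq 3    # GNU coreutils: prints the same

                                                                                                    # so the script's "$(jot $((1440 - HM)))" becomes "$(seq $((1440 - HM)))"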

                                                                                              3. 1

                                                                                                In a similar vein, I’d suggest replacing [ with [[ because the latter is a bash/zsh/dash builtin.

                                                                                                1. 1

                                                                                                  It’s a myth; you can check it with type [. Both are builtins, but [[ is only available in larger shells, while [ is POSIX. [ should be preferred for #!/bin/sh portability.
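
                                                                                                  A quick check in bash (the exact wording of the output differs between shells):

                                                                                                      type [      # "[ is a shell builtin"
                                                                                                      type [[     # "[[ is a shell keyword"

                                                                                                      # strictly POSIX test, fine under #!/bin/sh:
                                                                                                      x=yes
                                                                                                      if [ "$x" = yes ]; then echo ok; fi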

                                                                                                2. 1

                                                                                                  $((…)) is very much POSIX.

                                                                                                1. 13

                                                                                                  Soo…

                                                                                                  We are currently working on a new license for Kryptonite. For now, the code is released under All Rights Reserved.

                                                                                                  That means that right now, I can do nothing with the code. I likely shouldn’t even look at it. Secondly, the bold part (emphasis mine): no, just no. Anything with a custom license is going to end up as a disaster.

                                                                                                  Looking at the FAQ:

                                                                                                  curl https://krypt.co/kr | sh

                                                                                                  And this is software I should trust? Yeah, no.

                                                                                                  1. 4

                                                                                                    And this is software I should trust? Yeah, no.

                                                                                                    curl | sh over HTTPS being insecure is FUD. It is just as secure as .deb/tarball over HTTPS, which for some reason I never hear people complaining about.

                                                                                                    1. 10

                                                                                                      As a release engineer, the thing that bugs me the most about ‘curl | bash’ and that I never see mentioned, is that this format of installer almost never has a version number on it. I could run their script to install the thing on my system and make sure everything works. The next day, I can use the same command to install on another system and I don’t know that I got the same script and that the same things will get installed.

                                                                                                      Package Managers enforce versioning and checksumming so you know you got the same, expected software.

                                                                                                      ‘curl | bash’ tells me, “We’re too lazy to package a release. You handle versioning, checksumming, and verification yourself”. And people accept that because they’re too lazy to do those things, too.
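
                                                                                                        A sketch of the difference (URL, version and hash below are placeholders):

                                                                                                            # Unversioned: whatever the server happens to serve today.
                                                                                                            #   curl https://example.com/install.sh | sh

                                                                                                            # Pinned and checksummed: tomorrow's install is the same as today's.
                                                                                                            v=1.2.3
                                                                                                            curl -fsSLO "https://example.com/releases/tool-$v.tar.gz"
                                                                                                            echo "<expected-sha256>  tool-$v.tar.gz" | sha256sum -c -
                                                                                                            tar -xzf "tool-$v.tar.gz"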

                                                                                                      1. 4

                                                                                                        This is a good point, curl|sh also doesn’t really have a single answer for “how do i upgrade”.

                                                                                                        For some it’s ‘run the same curl|sh again’, for others this may wipe your current installation and data.

                                                                                                        Honestly the most ridiculous ones (in terms of rube goldbergness) are the 200 line shell scripts that add a custom apt repo, and then install a heap of dependencies, before installing the main package.

                                                                                                          Because declaring dependencies in the package is apparently as hard as giving people instructions for creating a .list file (or providing a release .deb).
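
                                                                                                          For reference, the instructions those scripts replace boil down to roughly this (repo URL, key and package name invented):

                                                                                                              # One line of sources.list, plus a key fetched out of band:
                                                                                                              echo 'deb https://apt.example.com stable main' | sudo tee /etc/apt/sources.list.d/example.list
                                                                                                              curl -fsSL https://apt.example.com/archive-key.asc | sudo apt-key add -
                                                                                                              sudo apt-get update && sudo apt-get install example-tool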

                                                                                                        1. 5

                                                                                                          As someone who has had to do the reverse-engineering for those types of scripts, I couldn’t agree more.

                                                                                                          They’re all symptoms of the same problem, right up there with the use of npm and docker. Not sure what to call it… Misguided inaptitude?

                                                                                                          1. 6

                                                                                                            “Cargo culting cool-kid bullshit” comes to mind, but I may just be angry.

                                                                                                      2. 9

                                                                                                        There is a huge difference between a deb and curl | sh though, in that I can extract the contents of the deb without running any of the scripts (dpkg-deb -x). It has a well defined format, too. I can easily uninstall it, and so on and so forth.

                                                                                                        Figuring out what a random shell script does is not quite the same.

                                                                                                        Also read this - https won’t protect you from malicious code. Precise installation steps, or packages one can inspect more easily, are far superior to curl|sh. If someone does not take the effort to explain how to install their software, why should I trust them that they’ll do anything else right?
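
                                                                                                          Concretely (the package filename here is made up):

                                                                                                              # Inspect a .deb without executing any of its maintainer scripts:
                                                                                                              dpkg-deb --info     foo_1.0-1_amd64.deb            # control metadata
                                                                                                              dpkg-deb --contents foo_1.0-1_amd64.deb            # which files go where
                                                                                                              dpkg-deb -x foo_1.0-1_amd64.deb /tmp/foo           # extract the payload only
                                                                                                              dpkg-deb -e foo_1.0-1_amd64.deb /tmp/foo/DEBIAN    # the scripts, to read at leisure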

                                                                                                        1. 2

                                                                                                          While someone could do malicious/stupid things in the maintainer scripts of a .deb, I’ve literally never seen that happen.

                                                                                                          With a .deb, you can see what files will be installed where, you can extract the contents, and (barring the aforementioned stupidity in maintainer scripts) you can REMOVE the software, cleanly.

                                                                                                          With a .sh script, well fuck you, read the whole god damn thing, and then go fetch whatever shit it downloads and examine that too.

                                                                                                          1. 1

                                                                                                            With a .deb, you can see what files will be installed where, you can extract the contents, and (barring the aforementioned stupidity in maintainer scripts) you can REMOVE the software, cleanly.

                                                                                                            deb files have an embedded shell script that performs installation tasks, called debian/rules that, “fuck you”, you need to read the whole thing.

                                                                                                            I’ve literally never seen that happen.

                                                                                                              There are also all those downgrade attacks that used to be popular.

                                                                                                            1. 9

                                                                                                              deb files have an embedded shell script that performs installation tasks, called debian/rules that, “fuck you”, you need to read the whole thing.

                                                                                                              No, they do not. debian/rules is used for building a debian package, not when installing. They can have various scripts that run during installation, but you can extract the contents without running any of those (dpkg-deb -x). So no, you do not have to read the whole thing, if you want to have a look at what it installs and where. You can even list the files, without extracting them.

                                                                                                              The same can’t be said about a curl|sh thing.

                                                                                                              As for the two links: Both do their thing in the postinst. One can extract the files only, nullifying their malicious behaviour. You can look at the files without running code. With curl|sh, you can’t. You will have to download and read the shell script to figure out where to get the files from.

                                                                                                              1. 0

                                                                                                                No, they do not. debian/rules is used for building a debian package, not when installing.

                                                                                                                Fine. Everything I said still applies to postinst though.

                                                                                                                The same can’t be said about a curl|sh thing.

                                                                                                                  Sure you can: curl > a.sh, read it, then run it. Same as with postinst.

                                                                                                                1. 3

                                                                                                                  Point is, I can look at the files without running the postinst. I have the files without any code to run, or look at. Can’t say the same about the shell script. I still have to go through it and download the files separately.

                                                                                                                  Not nearly the same, not by a long shot.

                                                                                                                  1. -2

                                                                                                                    Point is, you can look at a shell script without running it.

                                                                                                                    Or maybe you can’t, and you should learn how to improve that?

                                                                                                                    1. 6

                                                                                                                      Your choice to insult someone who disagrees with you reflects on you poorly (not to mention it is extremely unlikely to be true).

                                                                                                                        My bash is excellent, so I decided to audit a few curl | install scripts a while back.

                                                                                                                        The first one took over an hour to review - it pulled in quite a few other scripts from several domains, none of which did anything obviously untoward (although given how many ways there are to hide a subtle backdoor in a shell script, I wouldn’t be that confident).

                                                                                                                      Anything that shrinks the amount of bash code I have to read to audit a package is a win, and in practice deb postinstall scripts are much shorter.

                                                                                                                      1. 0

                                                                                                                        Your choice to insult someone who disagrees with you reflects on you poorly (not to mention it is extremely unlikely to be true).

                                                                                                                        I think it’s interesting that you find it insulting if someone observes you cannot do something.

                                                                                                                        Anything that shrinks the amount of bash code I have to read to audit a package is a win, and in practice deb postinstall scripts are much shorter.

                                                                                                                        You can’t say “well, I looked at less code so it must be more trustworthy” – the code actually has to be shorter. You still have to look at the binaries in the package, or what’s the point?

                                                                                                                          My mind is boggled that someone thinks an archive file containing another archive file, with executable scripts at both layers, is somehow more trustworthy than an executable script, when they don’t bother reading either.

                                                                                                                        1. 1

                                                                                                                          observation

                                                                                                                          You observed nothing of the sort; you insinuated, on no basis other than a disagreement, a lack of competence with one of the primary languages used by package maintainers in someone who is one.

                                                                                                                          1. -1

                                                                                                                            Nonsense. He said he can’t look at a script, and doesn’t know how to save files with curl.

                                                                                                                      2. 5

                                                                                                                        You are, again, missing the point. With a package, I have access to all the files, and the scripts. I can look at the files without reading the scripts.

                                                                                                                        With a curl|sh, I do not have the files, and I have to read the script to get access to the files.

                                                                                                                        Being able to is one thing, having to is another, and willing to is a third.

                                                                                                                        1. -2

                                                                                                                          You are, again, missing the point. With a package, I have access to all the files, and the scripts. I can look at the files without reading the scripts.

                                                                                                                            Then type slower.

                                                                                                                          I actually don’t get your point.

                                                                                                                          I think people trust .deb files more than they trust .sh files, and my point is that there’s no reason to.

                                                                                                                          If you think there is, then you’re going to have to explain why, and you’re going to have to do better than dpkg-deb -x because that’s a crock of dogshit that tries to make up some cocked up definition for “secure” that has nothing to do with reality: It doesn’t keep more people safe from harm even hypothetically.

                                                                                                                          With a curl|sh, I do not have the files, and I have to read the script to get access to the files. Being able to is one thing, having to is another, and willing to is a third.

                                                                                                                          Don’t insult my intelligence: I haven’t done that to you.

                                                                                                                          You don’t have the files when you do curl -O && dpkg -i either, which is why you don’t do that. Why would you do that, even if the website says to?

                                                                                                                          Because we’re talking about trust. I trust esl-erlang even though they tell me (basically) to do that, because at the end of the day, I’m not going to audit the binaries in the archive. I’m going to trust their TLS certificate, just like I’ve trusted the one at debian.org.

                                                                                                                            If you don’t want to trust it, then yes, Virginia, you absolutely need to read the whole fucking thing. Not just the postinst, but also the things in the tarball that you are ignoring.

                                                                                                                          1. 4

                                                                                                                            I think people trust .deb files more than they trust .sh files, and my point is that there’s no reason to.

                                                                                                                            There actually is. Because with a deb file, it is a lot easier to ignore the script parts, and have a look at the files. Obviously, if you blindly install one, that is in no way more secure than curl|sh - but that’s not what I’m talking about. I’m talking about a way to inspect what it wants to install. With a package (deb, rpm or even a tarball), I can do that. I can download it, and have the files.

                                                                                                                            With curl|sh, I first have to read the whole script, and then inspect the files.

                                                                                                                            I hope we can agree that having the files at hand is a better starting point than first having to figure out where and how to get them from. With a shell script, I’d have to do additional reading, which I don’t need to do when facing a package or a tarball.

                                                                                                                            If you think there is, then you’re going to have to explain why, and you’re going to have to do better than dpkg-deb -x because that’s a crock of dogshit that tries to make up some cocked up definition for “secure” that has nothing to do with reality: It doesn’t keep more people safe from harm even hypothetically.

                                                                                                                            Well, for one, dpkg-deb -x is not going to run random commands on my system (because it just extracts a tarball, and does not run maintainer scripts). Nor will it put files in random places (because it puts them in a directory I point it at). I’ll have more control over what it does. That is safer than running a random shell script.

                                                                                                                            And yes, I could read a shell script. But I have better things to waste my time on than read a random script, because people thought that making a tarball is too hard.

                                                                                                                              I trust deb packages not because they are inherently more secure - they are not. I trust them more because I have tools to help me inspect their contents. Or, to put it another way: I distrust curl|sh because it has so many disadvantages, and so few gains over even a simple tarball, that I fail to see its appeal. And even worse, the suggestion to pipe a random script into your shell, coming from the developers of a tool aimed at improving security, is just… insane. THAT is my problem. If they’d suggested blindly installing a debian package, I’d call that bad practice too. Now, if they provide a repo, or a tarball, that’s a whole different story.

                                                                                                                            You don’t have the files when you do curl -O && dpkg -i either, which is why you don’t do that. Why would you do that, even if the website says to?

                                                                                                                            Of course I won’t run dpkg -i. I’ll run dpkg-deb -x foo.deb /tmp/blah, and voila, I have the files without any of the random scripts run. Can you do the same with a curl|sh thing? No. You’d have to read the whole thing, download the files, and then inspect them. With a package, or even a tarball, this is a whole lot easier. There can still be maintainer scripts, but those are typically small. Not to mention, that I won’t have to read any scripts just to be able to look at the binaries.

                                                                                                                            More secure, safer? Probably not. Easier? Less error prone? Yes. And that is my point. A directly downloadable package is easier to verify.

                                                                                                                            If they’d suggest curl -O && dpkg -i, I’d call that bad practice too, by the way. Installing random stuff is bad. Having an easy way to inspect random stuff is, therefore, better than also having to read and understand random shell scripts on top of everything else.

                                                                                                                            Not just the postinst, but also the things in the tarball that you are ignoring.

                                                                                                                            No, I’m not ignoring. I’m extracting them with dpkg-deb -x precisely because I want to look at them. And with a debian package, I can do that without having to read a random shell script to even have access to the files.

                                                                                                                            1. 0

                                                                                                                              Obviously, if you blindly install one, that is in no way more secure than curl|sh - but that’s not what I’m talking about

                                                                                                                              However it’s what I’m talking about.

                                                                                                                              So thank you for conceding to my point and agreeing with me.

                                                                                                                              I hope we can agree that having the files at hand is a better starting point than first having to figure out where and how to get them from. With a shell script, I’d have to do additional reading, which I don’t need to do when facing a package or a tarball.

                                                                                                                              No, I can’t agree with that.

                                                                                                                              The biggest problem with Debian and Linux is the fact that the tiniest hole can often be exploited into something bigger.

                                                                                                                                If you’re relying on your ability to read, then you actually need to be able to read, and you actually need to read it. The underhanded C contest is a fantastic example of why you shouldn’t rely on the former, and the fact that you think you can “skip reading the files” is part of what’s wrong with the latter.

                                                                                                                              If you are trying to establish trust, you need to find another way.

                                                                                                                              This is a more interesting conversation to have, but the tools are very bad.

                                                                                                                              Can you do the same with a curl|sh thing? No.

                                                                                                                              Wrong. curl | tee a.sh

                                                                                                                              And with a debian package, I can do that without having to read a random shell script to even have access to the files.

                                                                                                                              How about this: I’ll demonstrate I know what you’re saying by restating it, then you try to explain what you think I’m saying.

                                                                                                                              You’re saying that unpacking a binary archive puts assets in a more convenient format for review than (mentally) unpacking them from a shell script.

                                                                                                                              You’re admitting that if you skip the review step in either case, then there is no security difference between the two formats.

                                                                                                                              You believe that people are more likely to review a debian package than a shell script, and that this contributes to security.

                                                                                                                              Installing random stuff is bad.

                                                                                                                              Do you think that I’m arguing that installing random stuff is good? Or are you just trying to repeat enough obviously correct things that everyone will assume everything you’re saying is correct?

                                                                                                                              What’s your point here?

                                                                                                                2. 4

                                                                                                                  Not only is debian/rules not run during package installation (as mentioned by @algernon, it’s used during package build), debian/rules isn’t even a shell script. It’s a makefile.

                                                                                                                  Those two links are examples of how to do it, not examples of someone doing it.

                                                                                                                  A shell script via curl | sh (or curl | bash if you’re particularly terrible and don’t write portable shell scripts) also can’t realistically be signed, to prevent tampering/replacement.

                                                                                                                  Apparently creating a 1 line .list file and importing a GPG key is too much for people, so downloading the script, a signature and a GPG key is basically never going to happen.
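
                                                                                                                  i.e. the flow would have to look something like this (URL and key file are placeholders), and in practice nobody is going to do it:

                                                                                                                      curl -fsSLO https://example.com/install.sh
                                                                                                                      curl -fsSLO https://example.com/install.sh.asc
                                                                                                                      gpg --import example-signing-key.asc              # key obtained out of band
                                                                                                                      gpg --verify install.sh.asc install.sh && sh ./install.sh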

                                                                                                                  1. -1

                                                                                                                    A shell script via curl | sh (or curl | bash if you’re particularly terrible and don’t write portable shell scripts) also can’t realistically be signed, to prevent tampering/replacement.

                                                                                                                    If you use https: it is signed to prevent tampering/replacement.

                                                                                                                    Apparently creating a 1 line .list file and importing a GPG key is too much for people, so downloading the script, a signature and a GPG key is basically never going to happen.

                                                                                                                    The CA can revoke a SSL key. They can’t revoke a GPG key.

                                                                                                                    1. 2

                                                                                                                      If you use https: it is signed to prevent tampering/replacement

                                                                                                                      How does https tell you that the file you downloaded is the same file the person/company you trust put on the server? The web server just gives you whatever file is on disk, and TLS encrypts it in transport.

                                                                                                                      I’m talking about gpg signing so you know the file in question is the same file the author you trust placed there, and it hasn’t been modified by someone else.

                                                                                                                      The CA can revoke a SSL key

                                                                                                                      Which is fucking irrelevant, as we’re talking about signing the file, not transport layer encryption.

                                                                                                                      1. -1

                                                                                                                        I’m talking about gpg signing so you know the file in question is the same file the author you trust placed there, and it hasn’t been modified by someone else.

                                                                                                                        Yes, by neatly sidestepping the whole point of establishing that the author is deserving of that trust in the first place.

                                                                                                                        A GPG key downloaded from the website is no different, and in many ways may be less secure, since you seem to think a GPG key downloaded from the website is different.

                                                                                                                        The CA can revoke a SSL key

                                                                                                                        Which is fucking irrelevant, as we’re talking about signing the file, not transport layer encryption.

                                                                                                                        It’s something that SSL can do better than GPG. Given that GPG is in the proposed use less secure than SSL, I think that’s pretty “fucking” relevant.

                                                                                                                        1. 3

                                                                                                                          Who said the gpg key had to be downloaded from the same server? Gpg key servers are a thing.

                                                                                                                          Comparing ssl (by which I assume you mean tls) and gpg is stupid. You can access a gpg signed apt repo over tls, they’re not mutually exclusive and they’re not even used for the same task here.

                                                                                                                          1. 1

                                                                                                                            Who said the gpg key had to be downloaded from the same server? Gpg key servers are a thing.

                                                                                                                            There’s no difference between a key I’ve uploaded to one keyserver and a key I’ve uploaded to another keyserver.

                                                                                                                            You’re delusional if you think otherwise.

                                                                                                                            Comparing ssl (by which I assume you mean tls) and gpg is stupid.

                                                                                                                            Comparing the download of a key from an unencrypted keyserver with the DH key exchange is stupid. You have no idea what you’re talking about.

                                                                                                                            You can access a gpg signed apt repo over tls, they’re not mutually exclusive and they’re not even used for the same task here.

                                                                                                                            Nobody gives a fuck if you can get debian packages using carrier pigeons.

                                                                                                                            The point is that a GPG signed package doesn’t offer any security over HTTPS if nobody is bothering to check the GPG keys, because at least someone can check the keys used for HTTPS, and at least Google is doing that.

                                                                                                                            1. 0

                                                                                                                              You’ve missed the point, multiple times now.

                                                                                                                              TLS/HTTPS can only ever at most protect you from MITM network attacks (and that assumes someone like WoSign hasn’t issued a cert for your domain to someone else, which.. you know, they did. multiple times.)

                                                                                                                              .Deb packages can be served over TLS, if you wish, to gain extra network level encryption, but they’re also GPG signed by default, so you can verify who authored them, regardless of where they have been served. Whether you choose to verify the GPG key is up to you.

                                                                                                                              You can’t realistically do that with a curl|sh script, just like you don’t get realistic safe upgrades, you don’t get realistic clean uninstall, etc etc.

                                                                                                                              1. [Comment removed by author]

                                                                                                                                1. 2

                                                                                                                                  What is just absolutely confusing to me is why are people defending this curl | sh nightmare

                                                                                                                                  Because we want an actual secure system and not security theatre.

                                                                                                                                  Telling people to install packages is just as risky as telling people to run scripts, and in some ways worse, since artificial conflicts can cause problems.

                                                                                                                                  why do people want this to become a daily part of dev?

                                                                                                                                  Ah, well this: I don’t.

                                                                                                                                    I would’ve been happier if FHS had never happened, but it did, and we’re stuck with it. It’s a massive pile of crap that made sysadmin work a special kind of misery until things like docker and (recent) practices of repeatable builds started mitigating it.

                                                                                                                                  1. [Comment removed by author]

                                                                                                                                    1. 1

                                                                                                                                      Yes: the Debian [main] repository is more secure than a shell script, for all of those reasons, which I agree with (except perhaps the audit-ability step: Debian packages can contain executable code which is not, by accident in practice or malice in theory, completely auditable)
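
                                                                                                                                      (For what it’s worth, those executable parts, i.e. the maintainer scripts, can at least be pulled out and read before installing; a rough sketch with standard dpkg-deb commands and a placeholder package name:)

                                                                                                                                      dpkg-deb --info some-package.deb    # control fields plus a listing of the maintainer scripts
                                                                                                                                      dpkg-deb -e some-package.deb ./ctrl # extract the control area (preinst, postinst, prerm, postrm)
                                                                                                                                      less ./ctrl/postinst                # read what would run as root during installation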

                                                                                                                                      However, when a distributor is starting out, competing with Debian (for whatever reason: time, etc.), all they have to go on is developing that trust. These programs aren’t available in Debian yet, and might not be for some time. In this way, building and distributing a third-party .deb file is not better than building and distributing a shell script, because they don’t get to take advantage of any of the benefits you mentioned, and they give up a very important advantage.

                                                                                                                                      I’m really sorry to read the insults against you personally earlier on in this discussion, it is a damn shame that people would act that way in a technical discussion and I think you are admirable for replying to it so coolly.

                                                                                                                                      I appreciate that.

                                                                                                                                      Thank you.

                                                                                                                                      1. 1

                                                                                                                                        building and distributing a third-party .deb file is not better than building and distributing a shell script

                                                                                                                                        That’s fair (re security), though I’d hope the result is at least easier to uninstall cleanly.

                                                                                                                                  2. 1

                                                                                                                                    They know how to write a basic shell script and put it in an HTTP-accessible directory, but find OS/distro packaging (deb, rpm, macOS pkg, etc.) too hard/complicated?
                                                                                                                                    
                                                                                                                                    I would assume the biggest supporters are probably involved in projects/companies that rely on curl|sh processes.

                                                                                                                                    1. 3

                                                                                                                                      That makes so little sense for the scripts that add a repo.

                                                                                                                                      I think someone had the idea that adding a repo is hard, let’s make a one-line install to get people hooked, and matters proceeded from there.

                                                                                                                                      1. 1

                                                                                                                                        Oh yeah, for those cases, I think it’s a symptom of the worst definition of “devops”: developers who can’t manage to add an apt repo source, are managing servers, and need cutesy bash one-liners to be anywhere close to productive.
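
                                                                                                                                        For comparison, a sketch of what the non-cutesy version looks like, i.e. adding a third-party apt repository by hand (the URL, list file, and package name are placeholders):

                                                                                                                                        # Import the publisher's signing key, register the repository, install via apt
                                                                                                                                        curl -fsSL https://apt.example.org/archive.key | sudo apt-key add -
                                                                                                                                        echo "deb https://apt.example.org/debian stable main" | sudo tee /etc/apt/sources.list.d/example.list
                                                                                                                                        sudo apt-get update
                                                                                                                                        sudo apt-get install example-package

                                                                                                                                        It is barely longer than the one-liner, and upgrades and removal then go through the package manager instead of an opaque script.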

                                                                                                            1. 8

                                                                                                              I’m not a web developer. I don’t like JavaScript. Yet, when I needed to write a cross-platform application, I turned to Electron, and I’m not regretting my decision to do so. Why? Because it saved me a lot of work: I work under Linux and have absolutely no desire to build and test my application anywhere else, but I want to make it easy for others to do so. Electron lets me do that, and allows me to be reasonably sure that the end result will look about the same everywhere. As long as I use cross-platform libraries for the non-UI functionality of my app, I can rest well in the knowledge that my application will run with little to no work on all three major platforms.

                                                                                                              I did not need to research native toolkits, nor learn them, nor test the application. That, for me, is huge.

                                                                                                              I don’t care much about the resource use, because this app is not likely to be running for more than a few hours every once in a while. It doesn’t need to run all the time. Does it use a lot more memory than it would need to? Yeah, but it doesn’t run for long, and my users don’t care. Does it use more CPU? Yeah; none of my users care. It does not need to be fast. Does it drain battery? Possibly. But when the application runs, the things it does will drain considerably more power than Electron itself, so again, I don’t care. Does it conflict with suspend, or the CPU going into battery-saving mode? Perhaps. Shut it down, then. It does not need to run all the time.

                                                                                                              Point is, it saves me time and effort, and I can ship something to my users much, much faster. If I had to deal with native toolkits, the application would either be linux only, or wouldn’t exist at all. As most of my users are not on Linux, and they want the application, either of these alternative options would be far worse for them than the “waste” Electron adds.

                                                                                                              It may not be the best choice for everything, but it certainly has its uses. And until a cross-platform toolkit emerges that offers a similar level of convenience for developers, Electron will have its place. I’m not going to hold my breath for an alternative.

                                                                                                              1. 2

                                                                                                                Interesting - I could see that being a pretty handy addition. I recently started using AwesomeWM again (4.0), and the API seems like it would lend itself quite nicely to doing something similar (though, I would probably use the mouse scroll.. cuz ain’t no one gonna take my ergodox away!)

                                                                                                                1. 3

                                                                                                                  Man, ergodox looks nice but I wish it was shaped a bit more like the Kinesis Advantage bowls. I basically just want the Kinesis Advantage with the Ergodox split.

                                                                                                                  1. 6

                                                                                                                    You want the Dactyl then. =)

                                                                                                                    1. 3

                                                                                                                      Why yes I do …. mops up drool.

                                                                                                                      1. 1

                                                                                                                        … or a keyboardio

                                                                                                                        1. 1

                                                                                                                          The Keyboardio Model 01 is not bowl-shaped. While the keycaps do cup your fingers nicely, that’s not the same as a concave design. Mind you, I did not type enough on a concave keyboard to properly compare them to the Model 01.

                                                                                                                          Nevertheless, seconded, the Model 01 is going to be an incredible keyboard. Can’t wait to have mine ;)

                                                                                                                  1. 9

                                                                                                                    I just came up with a list of dislikes in another social medium. Might as well post it here too!

                                                                                                                      1. Redefinition of common-sense words (mask vs. disable) - sure, I understand the argument for why they are worded this way… I just think it’s stupid.
                                                                                                                      2. Binary logs:
                                                                                                                        • “something something, saves space…” sure…
                                                                                                                        • “something something, how is the tooling any different from an entry point like grep?…” etc. grep is well known (like the word “disable”), and log file locations are known (or can be determined quickly). Now I have to man a new tool to remember the syntax, and I have no idea where the files are being stored! Sure, you can argue that to a newcomer there is virtually no difference, and that they can pick up journalctl easily enough, and I agree… but they will also have to know how to use other unix tools like grep.
                                                                                                                      3. Redefining ExecStart= in include’d sub-files - this is just stupid. How many configuration syntaxes require “clearing” a variable before you redefine it? (See the sketch below.)
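
                                                                                                                      For the record, here is the pattern item 3 is complaining about; a minimal sketch of a drop-in override (the unit name and path are only an example):

                                                                                                                      # /etc/systemd/system/example.service.d/override.conf
                                                                                                                      [Service]
                                                                                                                      # The empty assignment clears the ExecStart= inherited from the main unit file;
                                                                                                                      # without it, systemd sees two ExecStart= lines and (for most service types)
                                                                                                                      # refuses to start the unit.
                                                                                                                      ExecStart=
                                                                                                                      ExecStart=/usr/local/bin/example --some-flag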

                                                                                                                      I put on my “OpenBSD Developer” hat on the off chance that my “get off my lawn” tone wasn’t coming through in the above text. Maybe I can get a “Curmudgeon with a Cane and Lawn” hat?

                                                                                                                    1. 4

                                                                                                                        I’ve seen a lot of people complain about binary logs. I can understand why one may dislike them, if one’s used to grep. But in this day and age, I’d argue that if you are using grep to search your logs, you are doing something wrong (and that’s with my former syslog-ng maintainer hat on, after working for years in the logging field). Plain-text logs are not what most big places use. For temporary storage, until they get shipped to a central place, maybe, but most people don’t work with them directly.

                                                                                                                        BalaBit (makers of syslog-ng) have been pushing their binary format, which is used in their appliance and in syslog-ng PE too. Splunk and ES + Kibana don’t use text files either. When I was at Cloudera as a support guy, we weren’t using grep either: we had the logs in Hadoop, and used internal tools to search them, mine them, and so on. This is what I see in most places that deal with a lot of logs: they treat the text files as a source to collect, and rarely, if ever, do anything with them other than ship them somewhere else, to be stored in a database they can query more efficiently.

                                                                                                                        I have previously written a post on my blog about this exact topic, with a bit more detail, and a follow-up (linked from the top of that post).

                                                                                                                      (But since binary logs are a bit off-topic, feel free to hit me up on twitter, or e-mail or whatever, to avoid going even more off-topic here. And apologies in advance for derailing the discussion a bit, but years of working with logs made me a bit sensitive about the topic. :P)

                                                                                                                      1. 4

                                                                                                                        I don’t feel like it’s off topic too much.

                                                                                                                        I’d argue that if you are using grep to search your logs, you are doing something wrong

                                                                                                                        I agree, definitely a better way to do things at that scale, but not everyone needs a log shipping destination for their one off VPS that is being used for IRC and maybe a blog.

                                                                                                                        1. 1

                                                                                                                            In that case, the old tools aren’t in any way superior, except for familiarity. But the good stuff the journal brings is useful enough to make learning to query logs with journalctl worth it, in my opinion.

                                                                                                                          1. 4

                                                                                                                              Yep, and I stated that as well; however, in addition to familiarity, there is also portability. The technique of looking into logs via grep (or whatever unixish tool) applies to many more OS types than just Linux running systemd.

                                                                                                                            1. 2

                                                                                                                              Yeah, if you have to maintain N different OSes, grep may make sense. Though there will be enough subtle differences that grep vs journalctl would be the least of your worries. Back in Uni, when I was part of the CS department’s sysadmin team, we had HPUX, Ultrix, Debian, RedHat, AIX, IRIX, Solaris, FreeBSD, and a bunch of others I forgot. Grepping wasn’t fun, nor consistent across these OSes. Slightly different flags, logs being elsewhere, etc. In this case, collecting logs to a central place would have made much more sense.

                                                                                                                              If you maintain Linux boxes… then portability or applicability to another OS does not matter.

                                                                                                                              1. 5

                                                                                                                                grep makes sense for many other reasons – needing to create ad hoc pattern matches, as part of a text manipulation pipeline with, e.g., sed or cut … the list goes on. Until the richness of the existing text manipulation tools in Unix is matched by similar binary log commands, there’s little sense in moving to binary.

                                                                                                                                And, I’ll mention, ‘most big places’ do continue to use text logs.

                                                                                                                                1. 1

                                                                                                                                    You can combine a binary log query with other tools, just like you combine it with grep. You replace cat logfile | grep | whatever with logquery ... | whatever, or even logquery ... | grep | whatever. There’s nothing stopping you from doing so. If the log query tool supports regexps, it will always be more powerful than grep, because it also understands structure (at least to some extent, provided you process your logs, which you should), and can do optimizations grep cannot. Such tools already match grep’s abilities, and surpass it in many ways. They also compose with your standard unix tools.

                                                                                                                                  And yeah, grep is useful elsewhere too. It is worth knowing. But learning to query logs in a much more powerful way is also worth learning, and accepting as a valid practice.

                                                                                                                                  1. 2

                                                                                                                                    could you provide an example of ‘a much more powerful way’ in practice? I’ve only been doing log diving for 27 years so maybe I’m missing something.

                                                                                                                                    1. 1

                                                                                                                                        A tad late to respond, as this thread fell off my radar, but here are a few examples where queries are superior to grep & friends (all of these assume that you have all your logs in a central place, otherwise it would be a horror to grep them when the data you want spans multiple machines):

                                                                                                                                      • Whenever it comes to dates. It is much easier to write date="[2013-12-14 TO 2015-04-11]" than to figure out which files you need to grep, and filter down to dates between these only.

                                                                                                                                      • Something I do often, is follow the trail of my email: from Gnus/Emacs logs, through msmtp on localhost, postfix on my home gateway, and finally my remote VPS. If I want to see the trail of e-mail, I can just do a query with something like type=email message-id=X.

                                                                                                                                        Both of these can be done with grep & friends, but they require more work on my part. I have a computer to do the hard parts. I can just shovel everything into a database, index it in various ways, and then query whatever I want. I can even adjust the indexes with reasonable ease. With text-based logs, that would be a much larger task.
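
                                                                                                                                        To make that concrete with something everyone has at hand, this is roughly what those two queries look like against the journal on a single box (the unit, syslog identifier, and message-id below are placeholders), and they still compose with the usual tools:

                                                                                                                                        # Time-window query: no hunting through rotated files, no date parsing in grep
                                                                                                                                        journalctl --since "2013-12-14" --until "2015-04-11" -u postfix.service
                                                                                                                                        
                                                                                                                                        # Field matches pipe into the standard tools just fine
                                                                                                                                        journalctl -o cat -t postfix/smtp --since today | grep "some-id@example.com" | wc -l

                                                                                                                                        The centralised setups described above do the same thing, just with a real database behind the query instead of one machine’s journal.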

                                                                                                                        2. 3

                                                                                                                          Plain-text logs are not what most big places use.

                                                                                                                          I’m not a big place. And the big places I’ve been at all used custom logging solutions with staged storage and replication, which systemd does not provide.

                                                                                                                          1. 0

                                                                                                                            I was talking about non-textual log storage, be it systemd’s or something else. I’m not arguing about how useful (or useless) systemd’s journal is - I, for example, use it just like text logs: I ship the contents somewhere else to process and store, something completely independent of systemd. The journal is nice because it’s much easier to process than most text logs, but that’s a small thing.
                                                                                                                            
                                                                                                                            What I’m saying is that binary storage is useful, and in many ways superior to text, as far as logs are concerned. A lot of custom, specialised solutions (Splunk, BalaBit’s SSB, Kibana, etc.) use a non-textual database, and that’s a damn good thing.

                                                                                                                          2. 2

                                                                                                                            syslog-ng

                                                                                                                            …but of course syslog-ng isn’t really comparable because the protocol is well-understood and externally stable, and I can drop in rsyslog or easily implement my own if my needs aren’t those of a multi-million dollar company. The same is emphatically not true of journald.

                                                                                                                            1. 0

                                                                                                                              syslog-ng’s binary format is definitely not standard, and not compatible with anything else. It’s not even open source, unlike the journal (for which writing a reader took about an afternoon from scratch, just to prove that it is possible).

                                                                                                                              1. 3

                                                                                                                                By “the protocol” I of course mean syslog here, not whatever storage format a given tool uses. Unless I have somehow completely misunderstood how clients communicate with syslog-ng?

                                                                                                                                1. 1

                                                                                                                                  How they communicate doesn’t matter. You can forward journald-collected logs over syslog too (with rsyslog, or syslog-ng, or whatever else). It’s the storage format that most people are upset about.
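
                                                                                                                                  For anyone reading along, the forwarding mentioned above is a one-line setting on journald’s side; a minimal sketch (the daemon on the receiving end can be rsyslog, syslog-ng, or anything else that speaks syslog):

                                                                                                                                  # /etc/systemd/journald.conf
                                                                                                                                  [Journal]
                                                                                                                                  ForwardToSyslog=yes    # hand every collected message to the local syslog socket
                                                                                                                                  
                                                                                                                                  # then: sudo systemctl restart systemd-journald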

                                                                                                                        1. 1

                                                                                                                          Cool hack. It reminds me of USB rotary knobs, like the PowerMate. A rotary knob seems far simpler and less prone to mechanical failure. They also seem to be available on AliExpress and the like. A nice thing would be to have such a rotary knob on a mouse.

                                                                                                                          1. 1

                                                                                                                            That rotary knob reminds me of the scroll wheel on the Kensington Orbit

                                                                                                                          1. 5

                                                                                                                            I work in an office, with an open-office plan at that. The latter part is truly horrible indeed, but working in the same building as others has numerous benefits too. For what it’s worth, the whole team could work from home; there’s nothing stopping us from doing so - but we still don’t. Why? Because we do not work in isolation, neither within the team nor within the company. It is a lot easier to talk to a few people together when they are in the same building. Trying to reach them when they are at home, each following their own schedule, is a pain in the backside.
                                                                                                                            
                                                                                                                            If it were only for me, myself and I, I’d agree: working from home is much more productive, and much more comfortable too. But when I have to work with others, who have a different daily rhythm than I do, then we will all come into the office, because despite all the flaws, that is still more productive for the team as a whole than if we all stayed home.

                                                                                                                            You can make an all-remote team work, mind you, there are many examples of that. But that won’t work for all cases, and it certainly does not in the team I work with.

                                                                                                                            1. 4

                                                                                                                              Tried it, because it sounded promising: a fast terminal emulator without the unnecessary bells and whistles. Fast it is, to be sure, but its font handling seems a tad too simple: I’m using tmux with some powerline-enabled fonts, and it can’t display those symbols, not even if I set the font to the same one I use in my daily terminal emulator.
                                                                                                                              
                                                                                                                              Even worse, if I set that same font, character spacing becomes doubled, even after altering its config.
                                                                                                                              
                                                                                                                              A promising thing; I’ll keep an eye on it, as I’m not 100% happy with GNOME Terminal, but it’s not there yet. Judging by the open issues, my worries will likely be addressed soon. Yay!