1. -6

    It’s just an emacs.d for Emacs, nothing special. I would be far more impressed by regular, open-source Emacs using Common Lisp instead of Emacs Lisp.

    1. 13

      That’s being a bit uncharitable. Emacs ‘distributions’ (really, custom emacs.d setups) serve a valuable function: taking vanilla Emacs and customising it along with a lot of packages.

      Like you, I would love a Common Lisp Emacs, but this is great too, in its own way. It definitely looks nice. I don’t use a Mac, but if I did, maybe I’d use this.

      1.  

        Out of curiosity, why would you use this over your current Emacs configuration?

        1. -6

          How are these emacs.d distributions “valuable” in any way? If you want to type text and don’t care at all, just get notepad.exe, VS Code, Sublime Text or another silly tool that kids use these days.

          The thing about Emacs is to just start with bare bones and add features and improvements to your .emacs only if you need to, instead of reusing other people’s configuration which you won’t read or even understand, as it’s mostly overcomplicated to cover extensive cases for many users at once.

          1. 18

            How are these emacs.d distributions “valuable” in any way?

            They’re an excellent learning tool. They’re a source of new ideas for other users. They provide a way to demonstrate the possibilities. They provide a service to other users looking for a similar experience without the pain of having to do it themselves.

            another silly tool that kids use these days.

            Frankly, that’s just rude and uncalled for.

            The thing about Emacs is to just start with bare bones and add features and improvements to your .emacs only if you need to, instead of reusing other people’s configuration which you won’t read or even understand, as it’s mostly overcomplicated to cover extensive cases for many users at once.

            That’s how you use Emacs. That’s not how everyone uses Emacs. There are no rules in this case. It’s open source software for a reason.

            1. 6

              This version actually does a bit of work to set the Command key on Macs to act as a Super key, and then remaps keys so that a user can press Cmd+O to open a file instead of C-x C-f.
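
              For readers curious what that looks like under the hood, here’s a minimal sketch in plain Emacs Lisp (using the standard NS-build variable names; my illustration, not the distribution’s actual code):

              ;; Make the Mac Command key act as Super (GNU Emacs NS build).
              (setq ns-command-modifier 'super)

              ;; Then remap familiar Mac shortcuts onto Emacs commands.
              (global-set-key (kbd "s-o") #'find-file)    ; Cmd+O instead of C-x C-f
              (global-set-key (kbd "s-s") #'save-buffer)  ; Cmd+S instead of C-x C-s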

              The documentation is actually quite well-written, I’m impressed.

              1.  

                Thanks! I’ve tried to describe as much as possible so that someone not familiar with Emacs might want to try it.

              2. 6

                The thing about Emacs is to just start with bare bones and add features and improvements to your .emacs only if you need to, instead of reusing other people’s configuration which you won’t read or even understand, as it’s mostly overcomplicated to cover extensive cases for many users at once.

                That may be how you use Emacs, but it is certainly not the only way. There is absolutely nothing wrong with using a configuration built by someone else. There is nothing wrong with not understanding how it does what it does. It gets the job done? Success! It allows one to ease into Emacs from a point they feel comfortable with? Success!

                Custom emacs.d distributions are very, very valuable - not just for people new to Emacs, but for veterans too: there’s so much one can learn from others. I’ve been using Emacs for over 18 years, and to this day, when I come across a custom emacs.d distribution that has anything interesting about it (be that looks, key bindings, organization, packages used, you name it), I will look through it and borrow any good ideas. Some may stick, some may not, but I learn something useful from every single one of them. And that makes my Emacs so much the better.

            2.  

              It’s not Common Lisp but Guile, but… does this qualify? https://www.emacswiki.org/emacs/GuileEmacs

            1. 3

                Incidentally, I’m looking at https://www.lambda.cd/ right now, which seems to be in the same ballpark as Drone, but probably with even fewer assumptions about what build steps do.

              1. 3

                  Using Clojure for the build script is neat, I like it, but like many others, LambdaCD falls short of some of my key requirements:

                • Builds should be isolated, preferably running in a container.
                • It should support triggering builds on pull requests, too.

                  LambdaCD doesn’t appear to do either out of the box. A quick look suggests that one can build both on top of it, but I’d rather not, when there are other options which do what I need out of the box. Still, there are some good ideas there; I’ll be keeping an eye on it. Thanks for mentioning it!

                1. 2

                  It does the triggering thing, although right now it’s a separate plugin (which they apparently want to ship with the main library at some point): https://github.com/flosell/lambdacd-git#using-web--or-post-commit-hooks-instead-of-polling

                  Can you explain more about isolation? My simplistic view on it was that one could simply shell out to docker build ., and generally rely on docker itself to manage images. Am I missing some gotcha here?

                  1. 3

                    Yeah, that notification thing looks good enough for implementing pull requests, though the pipeline will need to be set up in every case to support that scenario, since the pipeline is responsible for pulling the source too. Thus, it must be ready to pull it from a branch, or a different ref, or tag, or whatever. Having to implement this for every single repo is not something I’d like to do. Yes, I could implement a function to do that, and remember to use it in every pipeline, but… this is something I’d expect the CI system to do for me.

                      Looking further into LambdaCD, this part of the HOWTOs suggests that while pipelines are code, they are not in the same repository as the code to build. This means that when I add a new project, I need to touch two repos now. If I have a project with more than one branch, and I want to run different pipelines in the branches, this makes things much more complicated. For example, I have a master and a debian/master branch: on master I do the normal CI stuff; on debian/master, I test the packaging only, or perhaps in addition. Or I’m implementing a new feature which requires a new dependency, so I want that branch to have that dependency installed too - both of these would be cumbersome. If, as in the case of Drone or Travis, the pipeline is part of the source repo, neither of these is a problem: I simply adjust .drone.yml / .travis.yml, and let the system deal with it.

                    Additionally, if someone else implements a feature, and submits a pull request, I want them to be able to modify the pipeline, to add new dependencies, or whatever is needed for the feature. But I do not want to give them access to all of my pipelines, let alone my host system. I don’t think this can be achieved with LambdaCD’s architecture.

                    Can you explain more about isolation? My simplistic view on it was that one could simply shell out to docker build ., and generally rely on docker itself to manage images. Am I missing some gotcha here?

                      It’s not that easy. First, I don’t want to run docker build .. I don’t need to build a new image for every project I have. I want each of my stages to execute in a container, by default, with no other option. I do not want the pipeline to have access to the host system under normal conditions. To better highlight what goes on under the hood, let’s look at an example. See this Drone control file: on one hand it is reasonably simple, yet it achieves quite a few things.

                      The dco and signature-check stages execute in parallel, in two different containers, using two different images (with the sources mounted as a volume); the latter runs only on tag events. Under the hood, this pulls down the images if they’re not available locally yet, runs them in docker with the sources mounted, and fails the build if any of them fail.

                      The bootstrap and tls-certificate stages run in parallel again, using the same image but two containers. This way, if I change anything outside of the shared source directory, there’s no conflict - it allows me, for example, to install a different set of packages in each container.

                      The stable-build and unstable-build stages also run in parallel, and so on.

                    There’s also a riemann service running in the background, which is used for some of the test cases.

                    Shelling out to docker is easy, indeed. Orchestrating all of the above - far from it. It could be implemented in LambdaCD too, with wrapper functions/macros/whatever, but this is something I don’t want to deal with, something which Drone does for me out of the box, and it’s awesome.

                    1. 1

                        Thank you very much for such a detailed response! I’ll give it some thorough thought, because you seem to approach the problem from a completely different side than mine, so maybe I need to rethink everything.

                        For example, instead of having every project define its own pipeline, I’d rather have a single pipeline defined in my CD system which would work for all projects. So it will have logic along the lines of “if it’s the master branch, deploy to production; if it’s a PR, just run tests”. For this to work, all projects will have to follow conventions on how they get built and how they get tested.

                      This is something I implemented successfully in the past with Jenkins, but now I’m looking for something more palatable.

                      1. 2

                        So it will have logic along the lines “if it’s the master branch deploy to production, if it’s a PR, just run tests”.

                        Yeah, that makes sense, and might be easier with a global pipeline. Perfectly doable with pipelines defined in the projects themselves too, even if that’s a bit more verbose:

                        pipeline:
                          tests:
                            image: some/image
                            commands:
                              - make tests
                        
                          deploy_production:
                            image: some/image
                            commands:
                              - deploy
                            when:
                              event: [push, tag]
                              branch: master
                        

                          With Drone, this will run tests for everything: PRs, pushes to any branch, tags. The deploy_production stage will only run for the master branch, and only on push and tag events (this excludes PRs, pushes to any other branch, and tags that aren’t on master). Granted, with a single pipeline, this can be abstracted away, which is nicer. But per-project, in-source pipelines grant me the flexibility to change the pipeline along with the rest of the project. When adding new dependencies, or new testing steps, this is incredibly convenient, in my experience.

                        Mind you, there’s no solution that fits all use cases. If you mostly have uniform projects, a global pipeline has many benefits over per-project ones. For a company with well controlled, possibly internal repos, likewise. My case is neither of those: I have a bunch of random projects, each with their own nuances, there’s very little in common. For me, a per-project pipeline is more useful.

                        1. 1

                          If you mostly have uniform projects, a global pipeline has many benefits over per-project ones. For a company with well controlled, possibly internal repos, likewise. My case is neither of those: I have a bunch of random projects, each with their own nuances, there’s very little in common.

                          This sums it up nicely!

              1. 4

                I recently deployed Drone as my self-hosted CI solution, so I’m working on a blog post about how I arrived there, and also on a few plugins for it, to automate/simplify some of the things I need it to do.

                1. 2

                    Looking forward to reading it. Just looking at drone vs concourse vs gocd now.

                  1. 1

                      Post is now up, concourse & gocd included (though neither in depth, because they failed to meet my requirements quite early :/).

                1. 4

                  Obviously, if we intend to make Wayland a replacement for X, we need to duplicate this functionality.

                    Perhaps a less than popular opinion, but: no, you don’t. If you want to replace A with B, you don’t need to replicate every mistake A made. Then B wouldn’t be much more than A’, with old bugs and new.

                  Don’t get me wrong, X’s network transparency might have been useful at some point - it isn’t now.

                  1. 8

                      Practice says otherwise: many people use it daily.

                    1. 1

                        That a lot of people use something daily doesn’t mean it is good, or that it needs to be replicated exactly. Running GUI programs remotely and displaying them locally IS useful. It does not require network transparency, though.

                      1. 1

                          Require? Perhaps not. It makes things easier in some ways, though.

                    2. 6

                      X’s network transparency might have been useful at some point - it isn’t now.

                      I use it 5+ days a week - it is still highly useful to me.

                      You’re right that fewer and fewer people know about it and use it - e.g. GTK has had a bug for many years that makes it necessary to stop Emacs after having opened a window remotely over X, and it’s not getting fixed, probably because X over network is not fashionable any more, so it isn’t prioritized.

                      1. 2

                        What is the advantage of X remoting over VNC / Remote Desktop?

                        I remember using it in the past and being confused that File -> Open wasn’t finding my local files, because it looks exactly like a local application.

                        I also remember that there were some bandwidth performance reasons. I don’t know if that is still applicable if applications use more of OpenGL and behave more like frame-buffers.

                        1. 7

                          Functional window management? If I resize a window to half screen, I don’t want to see only half of some remote window.

                          1. 2

                            Over a fast enough network, there’s no visible or functional difference between a local and remote X client. They get managed by the same wm, share the same copy/paste buffers, inherit the same X settings, and so on. Network transparency means just that: there’s no difference between local and remote X servers.

                            1. 1

                              It is faster, and you get just the window(s) of the application you start, integrated seamlessly in your desktop. You don’t have to worry about the other machine having a reasonable window manager setup, a similar resolution etc. etc.

                                In the old days, people making browsers (e.g. Netscape) took care to make the application friendly to networked X. That has changed, and using a browser over a VDSL connection is only useful in a pinch - but something remote like (graphical) Emacs, I still prefer to run over X.
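
                                Concretely, the workflow is just X11 forwarding over SSH (hostname is a placeholder):

                                # Run Emacs on the remote host; its frames open on the local
                                # display, managed by the local window manager.
                                ssh -X remote.example.com emacs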

                          2. 1

                            I’d like to see something in-between X and RDP. Window-awareness built-in, rather than assuming a single physical monitor, and window-size (and DPI) coming from the viewer would by themselves be a big start.

                            Edit: Ideally, pairing this with a standard format for streaming compressed textures, transforms, and compositor commands could solve a lot of problems, including the recent issue where we’re running up against physical bandwidth limitations trying to push pixels to multiple hi-def displays.

                            1. 2

                                FWIW I agree with you. It also just so happens that something is coming soon enough: https://github.com/letoram/arcan/wiki/Networking

                          1. 6

                            I blog about… things that interest me. Nowadays, that’s mostly keyboard firmware, but I used to write about Hylang, logging, and various other topics too. There will be non-keyboard content in the future too (bunch of Perl-related things are to be expected in the next few months).

                            1. 5

                                A few of you here assert that voting works fine as it is, on paper, counted by humans. It does not, and it is stupidly easy to attack. All you have to do is have a few members on the counting committee. Let me tell you a few recent examples from Hungary!

                                We had an election earlier this year; the ruling party got 2/3 of the seats in the parliament. The country was divided into voting districts, and each district had a committee. They handed out the papers, watched for fraud, and counted the votes once voting closed. There were quite a few districts where papers were handed out wrong, which, in some cases, rendered half the votes invalid. There were cases where votes were intentionally miscounted - but nothing happened, because the overall committee asserted that recounting wouldn’t change the outcome.

                                In previous elections, when my father was part of one of the committees, he witnessed another member fail to stamp a paper only when handing it to someone they knew would vote for a party opposed to that member’s own. How many such “errors” were made? How many intentionally, how many by accident?

                              There are so many ways to attack paper-based voting, and all of this is being done today. It’s not just some theoretical fear. It’s not limited to my country - this is just where I have reliable information from. But even if we forget about intentional attacks, and only focus on accidents, I don’t think it is unreasonable to think that by the end of the day, when people get tired, they make mistakes. Forgetting a stamp, giving wrong papers, failing to check credentials are all things we’ve seen, all things that can influence the outcome.

                                Thing is, paper-based voting is not fine. It does not work well. It is no more secure than voting machines. It’s simply something we are used to, and perhaps we unconsciously ignore its disadvantages.

                              1. 2

                                  “All you have to do is have a few members on the counting committee.”

                                  And with computers, you just need one person with physical or remote access to the device. So the computer-based method is still worse than the human-based one.

                                1. 1

                                  I disagree. To tamper with a voting machine, you need one person with physical or remote access, and sufficient knowledge to carry out an attack. This is quite limiting.

                                    With paper-based voting with humans, if the committee members fail to correctly verify people, the system is easy to abuse. In my experience, I would only need to persuade one specific member of the committee to be a bit more lax about checking identities and whatnot. That’s not a high bar to jump, and you don’t need a person with special knowledge, just one at the right spot at the right time - and overlooking a fake ID is much easier than physically tampering with a machine. Stupidly easy to deny as well.

                                    (Yeah, if you have remote access to a voting machine, that’s going to be fucked up. But let’s assume we’re not that incompetent, shall we?)

                                  1. 1

                                    “and sufficient knowledge to carry out an attack”

                                    You don’t need that. You just need a readily-available exploit plus instructions on how to use it. Someone else can develop it with a one-time, up-front cost. Easier if there’s only a few suppliers to target. That’s how the malware market works right now for desktops. For voting machines, that might be as simple as plugging USB sticks into specific devices or as “hard” as disassembling them to connect something to internal parts.

                                      “if the committee members fail to correctly verify people, the system is easy to abuse”

                                      Which is why we have recounts with lots of people in the town checking paper votes. A failure mode of that was the accused getting to determine which pieces of paper they looked at. That’s an easy problem to fix.

                              1. 8

                                Speaking as a C programmer, this is a great tour of all the worst parts of C. No destructors, no generics, the preprocessor, conditional compilation, check, check, check. It just needs a section on autoconf to round things out.

                                It is often easier, and even more correct, to just create a macro which repeats the code for you.

                                A macro can be more correct?! This is new to me.

                                Perhaps the overhead of the abstract structure is also unacceptable..

                                Number of times this is likely to happen to you: exactly zero.

                                C function signatures are simple and easy to understand.

                                      It once took me 3 months of noodling on a simple http server to realize that bind() saves the pointer you pass into it, and so imposes certain lifetime expectations on it. Not one single piece of documentation I’ve seen in the last 5 years mentions this fact.
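
                                      For context, here’s the conventional pattern - a minimal sketch of a plain IPv4 listener. POSIX specifies that bind() copies the address out of the caller’s buffer, which is exactly why the lifetime surprise above is so puzzling:

                                      #include <arpa/inet.h>
                                      #include <netinet/in.h>
                                      #include <stdint.h>
                                      #include <string.h>
                                      #include <sys/socket.h>
                                      #include <unistd.h>

                                      int listen_on(uint16_t port)
                                      {
                                          int fd = socket(AF_INET, SOCK_STREAM, 0);
                                          if (fd < 0)
                                              return -1;

                                          /* The address lives on the stack; POSIX says bind() copies it,
                                           * so nothing should depend on this buffer after the call returns. */
                                          struct sockaddr_in addr;
                                          memset(&addr, 0, sizeof(addr));
                                          addr.sin_family = AF_INET;
                                          addr.sin_addr.s_addr = htonl(INADDR_ANY);
                                          addr.sin_port = htons(port);

                                          if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
                                              listen(fd, 16) < 0) {
                                              close(fd);
                                              return -1;
                                          }
                                          return fd;
                                      }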

                                1. 4

                                  It once took me 3 months of noodling on a simple http server to realize that bind() saves the pointer you pass into it

                                  Which system? I’m pretty sure OpenBSD doesn’t.

                                  https://github.com/openbsd/src/blob/4a4dc3ea4c4158dccd297c17b5ac5a6ff2af5515/sys/kern/uipc_syscalls.c#L200

                                  https://github.com/openbsd/src/blob/4a4dc3ea4c4158dccd297c17b5ac5a6ff2af5515/sys/kern/uipc_syscalls.c#L1156

                                  1. 2

                                    Linux (that’s the manpage I linked to above). This was before I discovered OpenBSD.

                                    Edit: I may be misremembering and maybe it was connect() that was the problem. It too seems fine on OpenBSD. Here’s my original eureka moment from 2011: https://github.com/akkartik/wart/commit/43366d75fbfe1. I know it’s not specific to that project because @smalina and I tried it again with a simple C program in 2016. Again on Linux.

                                      1. 1

                                        Notice that I didn’t implicate the kernel in my original comment, I responded to a statement about C signatures. We’d need to dig into libc for this, I think.

                                        I’ll dig up a simple test program later today.

                                        1. 2

                                          Notice that I didn’t implicate the kernel in my original comment, I responded to a statement about C signatures. We’d need to dig into libc for this, I think.

                                              bind and connect are syscalls; libc would only have a stub performing the syscall, if anything at all, since they are not part of the C standard library.

                                  2. 2

                                    Perhaps the overhead of the abstract structure is also unacceptable..

                                    Number of times this is likely to happen to you: exactly zero.

                                    I have to worry about my embedded C code being too big for the stack as it is.

                                    1. 1

                                      Certainly. But is the author concerned with embedded programming? He seems to be speaking of “systems programming” in general.

                                      Also, I interpreted that section as being about time overhead (since he’s talking about the optimizer eliminating it). Even in embedded situations, have you lately found the time overheads concerning?

                                      1. 5

                                              I work with 8-bit AVR MCUs. I often found myself having to cut corners and avoid certain abstractions, because they would have resulted either in larger or slower binaries, or would have used significantly more RAM. On an ATmega32U4, resources are very limited.

                                    2. 1

                                      Perhaps the overhead of the abstract structure is also unacceptable..

                                      Number of times this is likely to happen to you: exactly zero.

                                            Many times, actually. I see FSM_TIME. Hmm … seconds? Milliseconds? No indication of the unit. And what is FSM_TIME? Oh … it’s SYS_TIME. How cute. How is that defined? Oh, it depends upon the operating system and the program being compiled. Lovely abstraction there. And I’m still trying to figure out the whole FSM abstraction (which stands for “Finite State Machine”). It’s bad enough to see a function written as:

                                      static FSM_STATE(state_foobar)
                                      {
                                      ...
                                      }
                                      

                                      and then wondering where the hell the variable context is defined! (a clue—it’s in the FSM_STATE() macro).
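
                                            Something along these lines (a hypothetical reconstruction, not the actual codebase):

                                            /* The state body silently receives a 'context' parameter
                                               it never visibly declares. */
                                            #define FSM_STATE(name) \
                                                    fsm_status_t name(struct fsm_context *context)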

                                            And that bind() issue is really puzzling, since that hasn’t been my experience at all, and I work with Linux, Solaris, and Mac OS-X currently.

                                      1. 1

                                        I agree that excessive abstractions can hinder understanding. I’ve said this before myself: https://news.ycombinator.com/item?id=13570092. But OP is talking about performance overhead.

                                        I’m still trying to reproduce the bind() issue. Of course when I want it to fail it doesn’t.

                                    1. 28

                                            That is a very reductionist view of what people use the web for. And I am saying this as someone whose personal site pretty much matches everything prescribed except comments (which I still have).

                                      Btw, Medium, given as a positive example, is not in any way minimal and certainly not by metrics given in this article.

                                      1. 19

                                        Btw, Medium, given as a positive example, is not in any way minimal and certainly not by metrics given in this article.

                                        Chickenshit minimalism: https://medium.com/@mceglowski/chickenshit-minimalism-846fc1412524

                                        1. 13

                                                I wouldn’t say Medium even gives the illusion of simplicity (for example, on the page you linked, try counting the visual elements that aren’t the blog post). Medium seems to take a rather contrary approach to blogs, including all the random cruft you never even imagined existed, while leaving out simple essentials like RSS feeds. I honestly have no idea how the author of the article came to suggest Medium as an example of minimalism.

                                          1. 8

                                            Medium started with an illusion of simplicity and gradually got more and more complex.

                                            1. 3

                                              I agree with your overall point, but Medium does provide RSS feeds. They are linked in the <head> and always have the same URL structure. Any medium.com/@user has an RSS feed at medium.com/feed/@user. For Medium blogs hosted at custom URLs, the feed is available at /feed.

                                              I’m not affiliated with Medium. I have a lot of experience bugging webmasters of minimal websites to add feeds: https://github.com/issues?q=is:issue+author:tfausak+feed.

                                          2. 3

                                            That is a very reductionist view of what people use the web for.

                                            I wonder what Youtube, Google docs, Slack, and stuff would be in a minimal web.

                                            1. 19

                                              Useful.

                                              algernon hides

                                              1. 5

                                                YouTube, while not as good as it could be, is pretty minimalist if you disable all the advertising.

                                                I find google apps to be amazingly minimal, especially compared to Microsoft Office and LibreOffice.

                                                Minimalist Slack has been around for decades, it’s called IRC.

                                                1. 2

                                                        Even then, it is still super slow! At some point I was able to disable JS, install the Firefox “html5-video-everywhere” extension and watch videos that way. That was awesome: fast and minimal. Tried it again a few days ago, but it didn’t seem to work anymore.

                                                        Edit: now I just run “youtube-dl -f43 <url>” directly, without going to YouTube, and start watching immediately with VLC.

                                                  1. 2

                                                    The youtube interface might look minimalist, but under the hood, it is everything but. Besides, I shouldn’t have to go to great lengths to disable all the useless stuff on it. It shouldn’t be the consumer’s job to strip away all the crap.

                                                  2. 2

                                                          That seems like extreme bad faith, though.

                                                    1. 11

                                                      In a minimal web, locally-running applications in browser sandboxes would be locally-running applications in non-browser sandboxes. There’s no particular reason any of these applications is in a browser at all, other than myopia.

                                                      1. 2

                                                              Distribution is dead-easy for websites. In theory, you could have non-browser-sandboxed apps with equally easy distribution, but then what’s the point?

                                                        1. 3

                                                          Non-web-based locally-running client applications are also usually made downloadable via HTTP these days.

                                                          The point is that when an application is made with the appropriate tools for the job it’s doing, there’s less of a cognitive load on developers and less of a resource load on users. When you use a UI toolkit instead of creating a self-modifying rich text document, you have a lighter-weight, more reliable, more maintainable application.

                                                          1. 3

                                                                  The power of “here’s a URL, you now have an app running without going through installation or whatnot” cannot be overstated. I can give someone a copy of pseudo-Excel to edit a document we’re working together on, all through the magic of Google Sheets’ share links. Instantly.

                                                            Granted, this is less of an advantage if you’re using something all the time, but without the web it would be harder to allow for multiple tools to co-exist in the same space. And am I supposed to have people download the Doodle application just to figure out when our group of 15 can go bowling?

                                                            1. 4

                                                              They are, in fact, downloading an application and running it locally.

                                                              That application can still be javascript; I just don’t see the point in making it perform DOM manipulation.

                                                              1. 3

                                                                As one who knows JavaScript pretty well, I don’t see the point of writing it in JavaScript, however.

                                                                1. 1

                                                                  A lot of newer devs have a (probably unfounded) fear of picking up a new language, and a lot of those devs have only been trained in a handful (including JS). Even if moving away from JS isn’t actually a big deal, JS (as distinct from the browser ecosystem, to which it isn’t really totally tied) is not fundamentally that much worse than any other scripting language – you can do whatever you do in JS in python or lua or perl or ruby and it’ll come out looking almost the same unless you go out of your way to use particular facilities.

                                                                  The thing that makes JS code look weird is all the markup manipulation, which looks strange in any language.

                                                                  1. 3

                                                                    JS (as distinct from the browser ecosystem, to which it isn’t really totally tied) is not fundamentally that much worse than any other scripting language

                                                                    (a == b) !== (a === b)

                                                                          but only sometimes…
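
                                                                          For instance (illustrative values):

                                                                          // Loose equality coerces; strict equality compares types first.
                                                                          0 == "0"    // => true  ("0" is coerced to the number 0)
                                                                          0 === "0"   // => false (number vs. string)

                                                                          // And loose equality isn't even transitive:
                                                                          0 == ""     // => true
                                                                          "0" == ""   // => false, although both compare loosely equal to 0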

                                                                    1. 3

                                                                            Javascript has gotchas, just like any other organically grown scripting language. It’s less consistent than python and lua but probably has fewer of these than perl or php.

                                                                      (And, just take a look at c++ if you want a faceful of gotchas & inconsistencies!)

                                                                      Not to say that, from a language design perspective, we shouldn’t prize consistency. Just to say that javascript is well within the normal range of goofiness for popular languages, and probably above average if you weigh by popularity and include C, C++, FORTRAN, and COBOL (all of which see a lot of underreported development).

                                                              2. 1

                                                                          Web applications are expected to load progressively. And because they are sandboxed, they are allowed to start instantly without asking you for permissions.

                                                                The same could be true of sandboxed desktop applications that you could stream from a website straight into some sort of sandboxed local VM that isn’t the web. Click a link, and the application immediately starts running on your desktop.

                                                              3. 1

                                                                          I can’t argue with using the right tool for the job. People use Electron because there isn’t a flexible, good-looking, easy-to-use cross-platform UI kit. Damn the 500 MB of RAM usage for a chat app.

                                                                1. 4

                                                                            There are several good-looking, flexible, easy-to-use cross-platform UI kits. GTK, WX, and QT come to mind.

                                                                  If you remove the ‘good-looking’ constraint, then you also get TK, which is substantially easier to use for certain problem sets, substantially smaller, and substantially more cross-platform (in that it will run on fringe or legacy platforms that are no longer or were never supported by GTK or QT).

                                                                  All of these have well-maintained bindings to all popular scripting languages.

                                                                  1. 1

                                                                    QT apps can look reasonably good. I think webapps can look better, but I haven’t done extensive QT customization.

                                                                    The bigger issue is 1) hiring - easier to get JS devs than QT devs 2) there’s little financial incentive to reduce memory usage. Using other people’s RAM is “free” for a company, so they do it. If their customers are in US/EU/Japan, they can expect reasonably new machines so they don’t see it as an issue. They aren’t chasing the market in Nigeria, however large in population.

                                                                    1. 5

                                                                      Webapps are sort of the equivalent of doing something in QT but using nothing but the canvas widget (except a little more awkward because you also don’t have pixel positioning). Whatever can be done in a webapp can be done in a UI toolkit, but the most extreme experimental stuff involves not using actual widgets (just like doing it as a webapp would).

                                                                      Using QT doesn’t prevent you from writing in javascript. Just use NPM QT bindings. It means not using the DOM, but that’s a net win: it is faster to learn how to do something with a UI toolkit than to figure out how to do it through DOM manipulation, unless the thing that you’re doing is (at a fundamental level) literally displaying HTML.

                                                                      I don’t think memory use is really going to be the main factor in convincing corporations to leave Electron. It’s not something that’s limited to the third world: most people in the first world (even folks who are in the top half of income) don’t have computers that can run Electron apps very well – but for a lot of folks, there’s the sense that computers just run slow & there’s nothing that can be done about it.

                                                                      Instead, I think the main thing that’ll drive corporations toward more sustainable solutions is maintenance costs. It’s one thing to hire cheap web developers & have them build something, but over time keeping a hairball running is simply more difficult than keeping something that’s more modular running – particularly as the behavior of browsers with respect to the corner cases that web apps depend upon to continue acting like apps is prone to sudden (and difficult to model) change. Building on the back of HTML rendering means a red queen’s race against 3 major browsers, all of whom are changing their behaviors ahead of standards bodies; on the other hand, building on a UI library means you can specify a particular version as a dependency & also expect reasonable backwards-compatibility and gradual deprecation.

                                                                      (But, I don’t actually have a lot of confidence that corporations will be convinced to do the thing that, in the long run, will save them money. They need to be seen to have saved money in the much shorter term, & saying that you need to rearchitect something so that it costs less in maintenance over the course of the next six years isn’t very convincing to non-technical folks – or to technical folks who haven’t had the experience of trying to change the behavior of a hairball written and designed by somebody who left the company years ago.)

                                                                    2. 1

                                                                      I understand that these tools are maintained in a certain sense. But from an outsider’s perspective, they are absolutely not appealing compared to what you see in their competitors.

                                                                              I want to be extremely nice, because I think that the work done on these teams and projects is very laudable. But compare the wxPython docs with the Bootstrap documentation. I also spent a lot of time trying to figure out how to use Tk, and almost all resources… felt outdated and incompatible with whatever toolset I had available.

                                                                      I think Qt is really good at this stuff, though you do have to marry its toolset for a lot of it (perhaps this has gotten better).

                                                                              The elephant in the room is that no native UI toolset (save maybe Apple’s stack?) is anywhere near as good as the diversity of options and breadth of tooling available in DOM-based solutions. Chrome dev tools is amazing, and even simple stuff like CSS animations gives a lot of options that would be a pain in most UI toolkits. Out of the box it has so much functionality, even if you’re working purely vanilla/“no library”. Though on this point things might have changed, jQuery basically is the optimal low-level UI library and I haven’t encountered native stuff that gives me the same sort of productivity.

                                                                      1. 3

                                                                        I dunno. How much of that is just familiarity? I find the bootstrap documentation so incomprehensible that I roll my own DOM manipulations rather than using it.

                                                                                TK is easy to use, but the documentation is tcl-centric and pretty unclear. QT is a bad example because it’s quite heavy-weight and slow (and you generally have to use QT’s versions of built-in types and do all sorts of similar stuff). I’m not trying to claim that existing cross-platform UI toolkits are great: I actually have a lot of complaints with all of them; it’s just that, in terms of ease of use, performance, and consistency of behavior, they’re all far ahead of web tech.

                                                                        When it comes down to it, web tech means simulating a UI toolkit inside a complicated document rendering system inside a UI toolkit, with no pass-throughs, and even web tech toolkits intended for making UIs are really about manipulating markup and not actually oriented around placing widgets or orienting shapes in 2d space. Because determining how a piece of markup will look when rendered is complex and subject to a lot of variables not under the programmer’s control, any markup-manipulation-oriented system will make creating UIs intractably awkward and fragile – and while Google & others have thrown a great deal of code and effort at this problem (by exhaustively checking for corner cases, performing polyfills, and so on) and hidden most of that code from developers (who would have had to do all of that themselves ten years ago), it’s a battle that can’t be won.

                                                                        1. 5

                                                                                  It annoys me greatly because it feels like nobody really cares about the conceptual damage incurred by simulating a UI toolkit inside a document renderer inside a UI toolkit, instead preferring to chant “open web!” And then this broken conceptual basis propagates to other mediums (VR) simply because it’s familiar. I’d also argue the web as a medium is primarily intended for commerce and consumption, rather than creation.

                                                                          It feels like people care less about the intrinsic quality of what they’re doing and more about following whatever fad is around, especially if it involves tools pushed by megacorporations.

                                                                          1. 2

                                                                            Everything (down to the transistor level) is layers of crap hiding other layers of different crap, but web tech is up there with autotools in terms of having abstraction layers that are full of important holes that developers must be mindful of – to the point that, in my mind, rolling your own thing is almost always less work than learning and using the ‘correct’ tool.

                                                                            If consumer-grade CPUs were still doubling their clock speeds and cache sizes every 18 months at a stable price point and these toolkits properly hid the markup then it’d be a matter of whether or not you consider waste to be wrong on principle or if you’re balancing it with other domains, but neither of those things are true & so choosing web tech means you lose across the board in the short term and lose big across the board in the long term.

                                                        2. 1

                                                          Youtube would be a website where you click on a video and it plays. But it wouldn’t have ads and comments and thumbs up and share buttons and view counts and subscription buttons and notification buttons and autoplay and add-to-playlist.
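
                                                              Something like this, more or less (filename invented):

                                                              <!-- the whole “minimal YouTube” page, give or take a <title> -->
                                                              <video src="talk.webm" controls></video>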

                                                          Google docs would be a desktop program.

                                                          Slack would be IRC.

                                                          1. 1

                                                                What you’re describing is the HTML5 video tag, not a video sharing platform. Minimalism is good, I do agree, but don’t confuse it with having no features at all.

                                                            Google docs would be a desktop program.

                                                                This is a different debate - whether the web should be used for these kinds of tasks at all - not whether the result is minimalist.

                                                      1. 3

                                                        If only it had a sane license… sigh

                                                        There are perfectly good licenses that do pretty much what the author seems to desire, but which are also legally sound. Inventing your own license, even if it’s just a few words, is bad practice (unless you’re a lawyer, but inventing another license is still questionable practice even then :P).

                                                        1. 3

                                                              There was even a GitHub issue where someone tried to make them see how bad that was, and even that didn’t work.

                                                          1. 1

                                                                It’s certainly not universally agreed that you need those warranty disclaimers. Generally, warranty disclaimers don’t work anyway. You can’t sign away your right to protection under New Zealand’s Consumer Guarantees Act, and I imagine it’s similar in other countries: there’d be no point having warranties if they could just be ignored with a bit of caps lock.

                                                            More likely is that there’s just obviously no warranty because it’s not a product, it’s open source software.

                                                            1. 1

                                                              Oh wow, this was educational.

                                                              /me goes and fixes some of his WTFPL-licensed stuff

                                                              Thanks!

                                                              1. 1

                                                                    You’re welcome mate, and don’t worry. I think everyone has had one of those “oh no, what am I doing with all my projects” panic moments at some point.

                                                          1. 5

                                                            @algernon, thanks for your careful reply to my blog post. You’re right that I overlooked the Github API. I just haven’t yet used tools that hook into it. What are some of your favorite tools?

                                                            I think it’s still worth comparing the pros/cons of their proprietary API and the number and maturity of its tools with the ecosystem around email, but I would like to get better educated about the Github tools.

                                                            1. 6

                                                                  I live in Emacs, so https://github.com/vermiculus/magithub, https://github.com/sigma/magit-gh-pulls and https://magit.vc/ itself are my primary tools. They are the most powerful git tools I have had the pleasure of working with.

                                                              1. 3

                                                                I just haven’t yet used tools that hook into it. What are some of your favorite tools?

                                                                  I can highly recommend GitHub’s own hub: https://github.com/github/hub, https://hub.github.com/hub.1.html
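
                                                                  A taste of the workflow (subcommands from memory; see hub’s manual for the full list):

                                                                  # Work with GitHub from the shell, no browser needed.
                                                                  hub clone github/hub    # clone by owner/repo shorthand
                                                                  hub fork                # fork the current repository to your account
                                                                  hub pull-request        # turn the current branch into a PR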

                                                              1. 9

                                                                    It’s interesting because the author is not thoughtlessly in favour of GitHub, but I think that his rebuttals are incomplete and ultimately his point is incorrect.

                                                                Code changes are proposed by making another Github-hosted project (a “fork”), modifying a remote branch, and using the GUI to open a pull request from your branch to the original.
                                                                

                                                                That is a bit of a simplification, and completely ignores the fact that GitHub has an API. So does GitLab and most other similar offerings. You can work with GitHub, use all of its features, without ever having to open a browser. Ok, maybe once, to create an OAuth token.

                                                                Whether using the web UI or the API, one is still performing the quoted steps (which notably never mention the browser).

                                                                    A certain level of discussion is useful, but once it splits up into longer sub-threads, it becomes way too easy to lose sight of the whole picture.

                                                                That’s typically the result of a poor email client. We’ve had threaded discussions since at least the early 90s, and we’ve learned a lot about how to present them well. Tools such as gnus do a very good job with this — the rest of the world shouldn’t be held back because some people use poor tools.

                                                                Another nice effect is that other people can carry the patch to the finish line if the original author stops caring or being involved.
                                                                

                                                                On GitHub, if the original proposer goes MIA, anyone can take the pull request, update it, and push it forward. Just like on a mailing list. The difference is that this’ll start a new pull request, which is not unreasonable: a lot of time can pass between the original request, and someone else taking up the mantle. In that case, it can be a good idea to start a new thread, instead of resurrecting an ancient one.

                                                                What he sees as a benefit I see as a detriment: the new PR will lose the history of the old PR. Again, I think this is a tooling issue: with good tools resurrecting an ancient thread is just no big deal. Why use poor tools?

                                                                While web apps deliver a centrally-controlled user interface, native applications allow each person to customize their own experience.
                                                                

                                                                GitHub has an API. There are plenty of IDE integrations. You can customize your user-experience just as much as with an email-driven workflow. You are not limited to the GitHub UI.

                                                                This is somewhat disingenuous: while GitHub does indeed have an API, the number & maturity of GitHub API clients is rather less than the number & maturity of SMTP+IMAP clients. We’ve spent about half a century refining the email interface: it’s pretty good.

                                                                Granted, it is not an RFC, and you are at the mercy of GitHub to continue providing it. But then, you are often at the mercy of your email provider too.

                                                                There’s a huge difference between being able to easily download all of one’s email (e.g. with OfflineIMAP) and only being at the mercy of one’s email provider going down, and being at the mercy of GitHub preserving its API for all time. The number of tools which exist for handling offline mail archives is huge; the number of tools for dealing with offline GitHub project archives is … small. Indeed, until today I’d have expected it to be almost zero.

                                                                Github can legally delete projects or users with or without cause.
                                                                

                                                                Whoever is hosting your mailing list archives, or your mail, can do the same. It’s not unheard of.

                                                                But of course my own maildir on my own machine will remain.

                                                                    I use & like GitHub, and I don’t think email is perfect. I think we’re quite a ways from the perfect git UI — but I’m fairly certain that it’s not a centralised website.

                                                                1. 8

                                                                  We’ve spent about half a century refining the email interface: it’s pretty good.

                                                                      We’ve spent about half a century refining the email interface. Very good clients exist… but most people still use Gmail regardless.

                                                                  1. 6

                                                                    That’s typically the result of a poor email client. We’ve had threaded discussions since at least the early 90s, and we’ve learned a lot about how to present them well. Tools such as gnus do a very good job with this — the rest of the world shouldn’t be held back because some people use poor tools.

                                                                    I have never seen an email client that presented threaded discussions well. Even if such a client exists, mailing-list discussions are always a mess of incomplete quoting. And how could they not be, when the whole mailing list model is: denormalise and flatten all your structured data into a stream of 7-bit ASCII text, send a copy to every subscriber, and then hope that they’re able to successfully guess what the original structured data was.

                                                                    You could maybe make a case for using an NNTP newsgroup for project discussion, but trying to squeeze it through email is inherently always going to be a lossy process. The rest of the world shouldn’t be held back because some people use poor tools indeed - that means not insisting that all code discussion has to happen via flat streams of 7-bit ASCII just because some people’s tools can’t handle anything more structured.

                                                                    I agree with there being value in multipolar standards and decentralization. Between a structured but centralised API and an unstructured one with a broader ecosystem, well, there are arguments for both sides. But insisting that everything should be done via email is not the way forward; rather, we should argue for an open standard that can accommodate PRs in a structured form that would preserve the very real advantages people get from GitHub. (Perhaps something XMPP-based? Perhaps just a question of asking GitHub to submit their existing API to some kind of standards-track process).

                                                                    1. 1

                                                                      You could maybe make a case for using an NNTP newsgroup for project discussion

                                                                      While I love NNTP, the data format is identical to email, so if you think a newsgroup can have nice threads, then so could a mailing list. They’re just different network distribution protocols for the same data format.

                                                                      accommodate PRs in a structured form

                                                                      Even if you discount structured text, emails and newsgroup posts can contain content in any MIME type, so a structured PR format could ride along with descriptive text in an email just fine.

                                                                      1. 1

                                                                        Even if you discount structured text, emails and newsgroup posts can contain content in any MIME type, so a structured PR format could ride along with descriptive text in an email just fine.

                                                                        Sure, but I’d expect the people who complain about github would also complain about the use of MIME email.

                                                                      2. 1

                                                                        You could maybe make a case for using an NNTP newsgroup for project discussion, but trying to squeeze it through email is inherently always going to be a lossy process.

                                                                        Not really — Gnus has offered a newsgroup-reader interface to email for decades, and Gmane has offered actual NNTP newsgroups for mailing lists for 16 years.

                                                                        But insisting that everything should be done via email is not the way forward; rather, we should argue for an open standard that can accommodate PRs in a structured form that would preserve the very real advantages people get from GitHub. (Perhaps something XMPP-based? Perhaps just a question of asking GitHub to submit their existing API to some kind of standards-track process).

                                                                        I’m not insisting on email! It’s decent but not great. What I would insist on, were I insisting on anything, is real decentralisation: issues should be inside the repo itself, and PRs should be in some sort of pararepo structure, so that nothing more than a file server (whether HTTP or otherwise) is required.

                                                                      3. 4

                                                                        …the new PR will lose the history of the old PR.

                                                                        Why not just link to it?

                                                                        This is somewhat disingenuous: while GitHub does indeed have an API, the number & maturity of GitHub API clients is rather less than the number & maturity of SMTP+IMAP clients. We’ve spent about half a century refining the email interface: it’s pretty good.

                                                                        That strikes me as disingenuous as well. Email is older. Of course it has more clients, with varying degrees of maturity & ease of use. That has no bearing on whether the GitHub API or an email-based workflow is a better solution. Your point is taken; the GitHub API is not yet “Just Add Water!”-tier. But the clients and maturity will come in time, as they do with all well-used interfaces.

                                                                        Github can legally delete projects or users with or without cause.

                                                                        Whoever is hosting your mailing list archives, or your mail, can do the same. It’s not unheard of.

                                                                        But of course my own maildir on my own machine will remain.

                                                                        Meanwhile, the local copy of my git repo will remain.

                                                                        I think we’re quite a ways from the perfect git UI — but I’m fairly certain that it’s not a centralised website.

                                                                        I’m fairly certain that it’s a website of some sort—that is, if you intend on using a definition of “perfect” that scales to those with preferences & levels of experience that differ from yours.

                                                                        1. 2

                                                                          Meanwhile, the local copy of my git repo will remain.

                                                                          Which contains no issues, no discussion, no PRs — just the code.

                                                                          I’d like to see a standard for including all that inside or around a repo, somehow (PRs can’t really live in a repo, but maybe they can live in some sort of meta- or para-repo).

                                                                          I’m fairly certain that it’s a website of some sort—that is, if you intend on using a definition of “perfect” that scales to those with preferences & levels of experience that differ from yours.

                                                                          Why on earth would I use someone else’s definition? I’m arguing for my position, not someone else’s. And I choose to categorically reject any solution which relies on a) a single proprietary server and/or b) a JavaScript-laden website.

                                                                          1. 1

                                                                            Meanwhile, the local copy of my git repo will remain.

                                                                            Which contains no issues, no discussion, no PRs — just the code.

                                                                            Doesn’t that strike you as a shortcoming of Git, rather than GitHub? I think this may be what you are getting at.

                                                                            Why on earth would I use someone else’s definition?

                                                                            Because there are other software developers, too.

                                                                            I choose to categorically reject any solution which relies on a) a single proprietary server and/or b) a JavaScript-laden website.

                                                                            I never said anything about reliance. That being said, I think the availability of a good, idiomatic web interface is a must nowadays where ease-of-use is concerned. If you don’t agree with that, then you can’t possibly understand why GitHub is so popular.

                                                                        2. 3

                                                                          (author here)

                                                                          Whether using the web UI or the API, one is still performing the quoted steps

                                                                          Indeed, but the difference between using the UI and the API, is that the latter is much easier to build tooling around. For example, to start contributing to a random GitHub repo, I need to do the following steps:

                                                                          • Tell my Emacs to clone & fork it. This is as simple as invoking a shortcut, and typing or pasting the upstream repo name. The integration in the background will do the necessary forking if needed. Or I can opt not to fork, in which case it will do it automatically later.
                                                                          • Do the changes I want to do.
                                                                          • Tell Emacs to open a pull request. It will commit my changes (and prompt for a commit message), create the branch, and open a PR with the same commit message. I can use a different shortcut to have more control over what my IDE does, name the branch, or create multiple commits, etc.

                                                                          It is a heavily customised workflow, something that suits me. Yet, it still uses GitHub under the hood, and I’m not limited to what the web UI has to offer. The API can be built upon, it can be enriched, or customised to fit one’s desires and habits. What I need to do to get the same steps done differs drastically. Yes, my tooling does the same stuff under the hood - but that’s the point, it hides those details from me!
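
                                                                          As a rough illustration of the same shape of workflow outside Emacs (not my exact setup; assumes GitHub’s hub CLI, version 2.x or later):

                                                                            hub clone someone/project      # clone the upstream repo
                                                                            cd project
                                                                            # ...edit, stage, and commit changes...
                                                                            hub fork --remote-name fork    # fork it under your account, add it as a remote
                                                                            git push fork HEAD
                                                                            hub pull-request               # prompts for the PR title and body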

                                                                          (which notably never mention the browser).

                                                                          Near the end of the article I replied to:

                                                                          “Tools can work together, rather than having a GUI locked in the browser.”

                                                                          From this, I concluded that the article was written with the GitHub web UI in mind. Because the API composes very well with other tools, and you are not locked into a browser.

                                                                          That’s typically the result of a poor email client.

                                                                          I used Gnus in the past, it’s a great client. But my issue with long threads and lots of branches is not that displaying them is an issue - it isn’t. Modern clients can do an amazing job making sense of them. My problem is the cognitive load of having to keep at least some of it in mind. Tools can help with that, but I can only scale so far. There are people smarter than I who can deal with these threads, I prefer not to.

                                                                          What he sees as a benefit I see as a detriment: the new PR will lose the history of the old PR. Again, I think this is a tooling issue: with good tools resurrecting an ancient thread is just no big deal. Why use poor tools?

                                                                          The new PR can still reference the old PR, which is not unlike having an In-Reply-To header that points to a message not in one’s archive. It’s possible to build tooling on top of this that would go and fetch the original PR for context.

                                                                          Mind you, I can imagine a few ways the GitHub workflow could be improved, that would make this kind of thing easier, and less likely to lose history. I’d still rather have an API than e-mail, though.

                                                                          This is somewhat disingenuous: while GitHub does indeed have an API, the number & maturity of GitHub API clients is rather less than the number & maturity of SMTP+IMAP clients. We’ve spent about half a century refining the email interface: it’s pretty good.

                                                                          Refining? You mean that most MUAs look just like they did thirty years ago? There were many quality of life improvements, sure. Lots of work to make them play better with other tools (this is mostly true for tty clients and Emacs MUAs, as far as I saw). But one of the most widespread MUAs (Gmail) is absolutely terrible when it comes to working with code and mailing lists. Same goes for Outlook. The email interface story is quite sad :/

                                                                          There’s a huge difference between being able to easily download all of one’s email (e.g. with OfflineIMAP) and only being at the mercy of one’s email provider going down, and being at the mercy of GitHub preserving its API for all time.

                                                                          Yeah, there are more options to back up your mail. It has been around longer too, so that’s to be expected. Email is also a larger market. But there are a reasonable number of tools to help back up one’s GitHub too. And one always makes backups anyway, just in case, right?
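
                                                                          A sketch of one such backup, mirroring every public repo of a user (the username is a placeholder; needs curl, jq, and git):

                                                                            curl -s 'https://api.github.com/users/alice/repos?per_page=100' \
                                                                              | jq -r '.[].clone_url' \
                                                                              | xargs -n1 git clone --mirror
                                                                            # public repos only, first 100; pagination, private repos, and
                                                                            # issue/PR discussions need further API calls
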

                                                                          So yeah, there is a difference. But both are doable right now, with tools that already exist, and as such, I don’t see the need for such a fuss about it.

                                                                          I use & like GitHub, and I don’t think email is perfect. I think we’re quite a ways from the perfect git UI — but I’m fairly certain that it’s not a centralised website.

                                                                          I don’t think GitHub is anywhere near perfect, especially not when we consider that it is proprietary software. It being centralised does have advantages however (discoverability, not needing to register/subscribe/whatever to N+1 places, and so on).

                                                                        1. 2
                                                                          What interviewing should be, since nobody seems to know how to hire anyone in this field

                                                                          Are there any companies that follow these guidelines? Or is every place hell to get into?

                                                                          1. 3

                                                                              I’ve met some who did this, and it worked remarkably well for a while, but at some point it stopped being scalable. For this to work, you need your team(s) to be on board, and to have the resources to work with potential hires. There will be quite a lot who fail miserably after a day or two, thus wasting the team’s time. There will be far fewer who ace it and make the team go “whoa!” to even the balance. Sooner or later, motivation declines, and the few days you bring in potential hires become a drag for everyone.

                                                                              Unless, of course, you only need to hire a few people, and then not hire for a long period of time. That works for smaller shops, but once you have 1000+ employees, there will always be a pretty constant flux: new people coming and leaving every couple of months. That’s frequent enough to disrupt the teams that are involved in the hiring process.

                                                                              At least, that’s what I saw. At one place, I was hired like this and took part in similar processes, but it got old and annoying very fast. It took away time from doing our actual jobs, which increased frustration and stress. :/

                                                                          1. 1

                                                                              The website seems to have been taken down, since I get a 403; maybe the author didn’t like being linked from Lobste.rs, or they’re shy.

                                                                            1. 1

                                                                              Works for me from here.

                                                                              1. 1

                                                                                Curiously it works from my phone.

                                                                                I guess my ip is blocked or something? Weird …

                                                                                1. 2

                                                                                  Their hosting provider applies blocks rather … aggressively.

                                                                            1. 5

                                                                              Git via email sounds like hell to me. I’ve tried to find some articles that evangelize the practice of doing software development tasks through email, but to no avail. What is the allure of approaches like this? What does it add to just using git by itself?

                                                                              1. 6

                                                                                I tried to collect the pros and cons in this article: https://begriffs.com/posts/2018-06-05-mailing-list-vs-github.html

                                                                                1. 3

                                                                                  I also spoke about this at length in a previous article:

                                                                                  https://drewdevault.com/2018/07/02/Email-driven-git.html

                                                                                  1. 3

                                                                                      While my general experience with git email is bad (it’s annoying to set up, especially in older versions, and I don’t much like its interface), my experience of interacting with projects that do this was generally good. You send a patch, you get review, you send a new, self-contained patch, attached to the same thread… etc, in parallel to the rest of the project discussion. It’s a different flavour, but with a project that is used to the flow, it can really be quite pleasing.

                                                                                    1. 2

                                                                                      What does it add to just using git by itself?

                                                                                        I think the selling point is precisely that it doesn’t add anything else. Creating a PR involves more steps and context changes than git format-patch followed by git send-email.

                                                                                      I have little experience using the mailing list flow, but when I had to do so (because the project required it) I found it very easy to use and better for code reviews.

                                                                                      1. 1

                                                                                          Creating a PR involves more steps and context changes than git format-patch followed by git send-email.

                                                                                        I’m not sure I understand. What steps are removed that would otherwise be required?

                                                                                        1. 4

                                                                                          Simply, it’s “create a fork and push your changes to it”. But also consider that it’s…

                                                                                          1. Open a web browser
                                                                                          2. Register for a GitHub account
                                                                                          3. Confirm your email address
                                                                                          4. Fork the repository
                                                                                          5. Push your changes to the fork
                                                                                          6. Open a pull request

                                                                                          In this workflow, you switched between your terminal, browser, mail client, browser, terminal, and browser before the pull request was sent.

                                                                                          With git send-email, it’s literally just git send-email HEAD^ to send the last commit, then you’re prompted for an email address, which you can obtain from git blame or the mailing list mentioned in the README. You can skip the second step next time by doing git config sendemail.to someone@example.org. Bonus: no proprietary software involved in the send-email workflow.
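
                                                                                            Spelled out, the whole setup amounts to something like this (the server name and addresses are placeholders):

                                                                                              # once, globally:
                                                                                              git config --global sendemail.smtpserver smtp.example.org
                                                                                              git config --global sendemail.smtpuser me@example.org
                                                                                              git config --global sendemail.smtpencryption tls

                                                                                              # once per repository:
                                                                                              git config sendemail.to someone@example.org

                                                                                              # from then on:
                                                                                              git send-email HEAD^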

                                                                                          1. 3

                                                                                              Also, GitHub pull requests involve more git machinery than is necessary. Most people, when they open a PR, choose to make a feature branch in their fork from which to send the PR, rather than sending from master. The PR exposes the sender’s local branching choices unnecessarily. Then, for each PR, GitHub creates more refs on the remote, so you end up with lots of stuff lying around (try running git ls-remote | grep pull).
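
                                                                                              Those extra refs follow a fixed naming scheme, roughly (hashes are placeholders):

                                                                                                $ git ls-remote origin 'refs/pull/*'
                                                                                                <sha>  refs/pull/1/head    # tip of the PR branch
                                                                                                <sha>  refs/pull/1/merge   # trial merge against the base branch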

                                                                                            Compare that with the idea that if you want to send a code change, just mail the project a description (diff) of the change. We all must be slightly brainwashed when that doesn’t seem like the most obvious thing to do.

                                                                                            In fact the sender wouldn’t even have to use git at all, they could download a recent code tarball (no need to clone the whole project history), make changes and run the diff command… Might not be a great way to do things for ongoing contributions, but works for a quick fix.
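
                                                                                              Concretely, something like this (the URL is hypothetical):

                                                                                                curl -LO https://example.org/project-1.2.tar.gz
                                                                                                tar xzf project-1.2.tar.gz
                                                                                                cp -a project-1.2 project-1.2.orig
                                                                                                # ...edit files under project-1.2/...
                                                                                                diff -ur project-1.2.orig project-1.2 > quick-fix.patch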

                                                                                              Of course, opening the PR is just the start of further stymied GitHub interactions.

                                                                                            1. 3

                                                                                              In my case I tend to also perform steps:

                                                                                              • 3.1 Clone project
                                                                                              • 3.2 Use project for a while
                                                                                              • 3.3 Make some local changes
                                                                                              • 3.4 Commit those changes to local clone
                                                                                              • 3.5 Try to open pull request
                                                                                              • 3.6 Realise GitHub requires me to make a fork of the original repo
                                                                                              • 4.1 Read man git-remote to see how to point my local clone (with the changes) to my GitHub fork
                                                                                              • 4.2 Run relevant git remote commands
                                                                                              • 4.3 Read man git-push to see how to send my changes to the fork rather than the original repo
                                                                                              1. 2

                                                                                                To send email, you also have to have an email address. If we are doing a fair comparison, that should be noted as well. Granted, it is much more likely that someone has an email address than a GitHub account, but the wonderful thing about both is that you only have to set them up once. For this reason, it would be fairer if the list above started from step four.

                                                                                                Now, if I have GitHub integration in my IDE (which is not an unreasonable thing to assume), then I do not need to leave the IDE at all, and I can fork, push, and open a PR (case in point, Emacs and Magithub can do this). I can also do all of this on GitHub, never leaving my browser. I don’t have to figure out where to send an email, because it automatically sends the PR to the repo I forked from. I don’t even need to open a shell and deal with the commandline. I can do everything with shortcuts and a little bit of mousing around, in both the IDE and the browser case.

                                                                                                Even as someone who is familiar with the commandline, and is sufficiently savvy with e-mail (at one point I was subscribed to debian-bugs-dist AND LKML, among other things, and had no problem filtering out the few bits I needed), I’d rather work without having to send patches, using Magit + magithub instead. It’s better integrated, hides uninteresting details from me, so I can get done with my work faster. It works out of the box. git send-email does not; it requires a whole lot of setup per repo.

                                                                                                Furthermore, with e-mail, you have to handle replies and keep a firm grip on your inbox. That’s an art in its own right. No such issue with GitHub.

                                                                                                With this in mind, the remaining benefit of git send-email is that it does not involve a proprietary platform. For a whole lot of people, that’s not an interesting property.

                                                                                                1. 2

                                                                                                  To send email, you also have to have an email address. If we are doing a fair comparison, that should be noted as well.

                                                                                                  I did note this:

                                                                                                  then you’re prompted for an email address, which you can obtain from git blame or the mailing list mentioned in the README

                                                                                                  Magit + magithub […] works out of the box

                                                                                                  Only if you have a GitHub account and authorize it. Which is a similar amount of setup, if not more, compared to setting up git send-email with your SMTP info.

                                                                                                  git send-email does not, it requires a whole lot of set up per repo

                                                                                                  You only have to put your SMTP creds in once. Then all you have to do per-repo is decide where to send the email to. How is this more work than making a GitHub fork? All of this works without installing extra software to boot.

                                                                                                  1. 3

                                                                                                    then you’re prompted for an email address, which you can obtain from git blame or the mailing list mentioned in the README

                                                                                                    With GitHub, I do not need to obtain any email address, or dig it out of a README. It sets things up automatically for me so I can just open a PR, and have everything filled out.

                                                                                                    Only if you have a GitHub account and authorize it. Which is a similar amount of setup, if not more, compared to setting up git send-email with your SMTP info.

                                                                                                    Let’s compare:

                                                                                                    e-mail:

                                                                                                    1. Clone repository
                                                                                                    2. Do my business
                                                                                                    3. Figure out where to send e-mail to.
                                                                                                    4. git config so I won’t have to figure it out ever again.
                                                                                                    5. git send-email

                                                                                                    magithub:

                                                                                                    1. clone repo
                                                                                                    2. do my business
                                                                                                    3. fork the repo
                                                                                                    4. push changes
                                                                                                    5. open PR

                                                                                                    The first two steps are pretty much the same; both are easily assisted by my IDE. The difference starts from step 3, because my IDE can’t figure out for me where to send the email. That’s a manual step. I can create a helper that makes it easier for me to do step 4 once I have the address, but that’s about it. For the magithub case, step 3 is SPC g h f; step 4 is SPC g s p u RET; step 5 is SPC g h p, then edit the cover letter, and , c (or C-c) to finish it up and send it. (You can use whatever shortcuts you set up; these are mine.) Nothing to figure out manually, all automated. All I have to do is invoke a shortcut, edit the cover letter (the PR’s body), and I’m done.

                                                                                                    I can even automate the clone + fork part, and combine push changes + open PR, so it becomes:

                                                                                                    1. fork & clone repo (or clone if already forked)
                                                                                                    2. do my business
                                                                                                    3. push changes & open PR

                                                                                                    Can’t do such automation with e-mailed patches.

                                                                                                    I’m not counting GitHub account authorization, because that’s about the same complexity as configuring auth for my SMTP, and both have to be done only once. I’m also not counting registering a GitHub account, because that only needs to be done once, and you can use it forever, for any GitHub-hosted repo; it takes about a minute, a minuscule amount of time compared to doing actual development.

                                                                                                    Again, the main difference is that for the e-mail workflow, I have to figure out the e-mail address, a process that’s longer than forking the repo and pushing my changes, and a process that can’t be automated to the point of requiring a single shortcut.

                                                                                                    Then all you have to do per-repo is decide where to send the email to. How is this more work than making a GitHub fork?

                                                                                                    Creating a GitHub fork is literally one shortcut, or one click in the browser. If you can’t see how that is considerably easier than digging out email addresses from free-form text, then I have nothing more to say.

                                                                                                    And we haven’t talked about receiving comments on the email yet, or accepting patches. Oh boy.

                                                                                                    1. 2

                                                                                                      With GitHub, I do not need to obtain any email address, or dig it out of a README. It sets things up automatically for me so I can just open a PR, and have everything filled out.

                                                                                                      You already had to read the README to figure out how to compile it, and check if there was a style guide, and review guidelines for contribution…

                                                                                                      Let’s compare

                                                                                                      Note that your magithub process is the same number of steps, but none of them comes with a “so I won’t have to figure it out ever again”, which in the email process actually eliminates two of your steps.

                                                                                                      Your magithub workflow looks much more complicated, and you could use keybindings to plug into send-email as well.

                                                                                                      Can’t do such automation with e-mailed patches

                                                                                                      You can do this and even more!

                                                                                                      1. 2

                                                                                                        You already had to read the README to figure out how to compile it, and check if there was a style guide, and review guidelines for contribution…

                                                                                                        I might have read the README, or skimmed it. But not to figure out how to compile - most languages have a reasonably standardised way of doing things. If a particular project does not follow that, I will most likely just stop caring unless I really, really need to compile it for one reason or another. For style, I hope they have tooling to enforce it, or at least check it, so I don’t have to read long documents and keep it in my head. I have more important things to store there than things that should be automated.

                                                                                                        I would likely read the contributing guidelines, but I won’t memorize them, and I certainly won’t try to remember an e-mail address. I might remember where to find it, but it will still be a manual process. Not a terribly long process, but noticeably longer than not having to do it at all.

                                                                                                        Note that your magithub process is the same number of steps but none of them have “so I won’t have to figure it out ever again”, which on the email process actually eliminates two of your steps.

                                                                                                        Because there’s nothing for me to figure out at all, ever (apart from what repo to clone & fork, but that’s a common step between the two workflows).

                                                                                                        Your magithub workflow looks much more complicated

                                                                                                        How is it more complicated? Clone, work, fork, push, open PR (or clone+fork, work, push+PR), of which all but “work” is heavily assisted. None of it requires me to look anything up, anywhere.

                                                                                                        and you could use keybindings to plug into send-email as well.

                                                                                                        And I do, when I’m dealing with projects that use an e-mail workflow. It’s not about shortcuts, but what can be automated, what the IDE can do instead of requiring me to do it.

                                                                                                        You can do this and even more!

                                                                                                        You can, if you can extract the address to send patches to automatically. You can build something that does that, but then the automation is tied to that platform, just like the PRs are tied to GitHub/GitLab/whatever.
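
                                                                                                        For what it’s worth, a crude sketch of such extraction, pulling the first address-shaped string out of the README (a heuristic, not a protocol):

                                                                                                          grep -Eho '[[:alnum:]._%+-]+@[[:alnum:].-]+\.[[:alpha:]]{2,}' README* | head -n1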

                                                                                                        And again, this is just about sending a patch/opening a PR. There’s so much more PRs provide than that. Some of that, you can do with e-mail. Most of it, you can build on top of e-mail. But once you build something on top of e-mail, you no longer have an e-mail workflow, you have a different platform with which you can interact via e-mail.

                                                                                                        Think issues, labels for them, reviews (with approvals, rejection, etc - all of which must be discoverable by programs reliably), new commits, rebases and whatnot… yeah, you can build all of this on top of e-mail, and provide a web UI or an API or tools or whatever to present the current state (or any prior state).

                                                                                                        But then you built a platform which requires special tooling to use to its full potential, and you’re not much better than GitHub. You might build free software, but then there’s GitLab, Gitea, Gogs and a whole lot of others which do many of these things already, and are almost as easy to use as GitHub.

                                                                                                        I’ve worked with patches sent via e-mail quite a bit in the past. One can make it work, but it requires a lot of careful thought and setup to make it convenient. I’ll give a few examples!

                                                                                                        With GitHub and the like, it is reasonably easy to have an overview of open pull requests, without subscribing to a mailing list or browsing archives. An open-PR list is much easier to glance at for a rough idea than a mailing list. PRs can have labels to help in figuring out what part of the repo they touch, or what state they are in. They can have CI states attached. At a glance, you get a whole lot of information. With a mailing list, you don’t have that. You can build something on top of e-mail that gives you a similar overview, but then you are not using e-mail only, and will need special tooling to process the information further (e.g., to limit open PRs to those that need a review).

                                                                                                        With GitHub and the like, you can subscribe to issues and pull requests, and you’ll get notifications about those and those alone. With a mailing list, you rarely have that option, and must do the filtering on your own, and hope that there’s a reasonable convention that allows you to do so reliably.
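
                                                                                                        To be fair, when the convention does exist, the filter itself is short; a procmail sketch, with a placeholder list address:

                                                                                                          :0:
                                                                                                          * ^List-Id:.*<project-dev\.lists\.example\.org>
                                                                                                          project-dev/

                                                                                                        The catch is that it only works for as long as every list plays along, and every subscriber writes their own.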

                                                                                                        There’s a whole lot of other things that these tools provide over plain patches over email. Like I said before, most - if not all - of that can be built on top of e-mail, but to achieve the same level of convenience, you will end up with an API that isn’t e-mail. And then you have Yet Another Platform.

                                                                                                        1. 2

                                                                                                          How is it more complicated? Clone, work, fork, push, open PR (or clone+fork, work, push+PR)

                                                                                                          Because the work for the send-email approach is: clone, work, git send-email. This is fewer steps and is therefore less complicated. Not to mention that as projects decentralise and move away from GitHub, the registration process doesn’t go away; it recurs for every new forge, or instance of a forge, you work with.

                                                                                                          But once you build something on top of e-mail, you no longer have an e-mail workflow, you have a different platform with which you can interact via e-mail. Think issues, labels for them, reviews (with approvals, rejection, etc - all of which must be discoverable by programs reliably), new commits, rebases and whatnot…

                                                                                                          Yes, that’s what I’m advocating for.

                                                                                                          But then you built a platform which requires special tooling to use to its full potential, and you’re not much better than GitHub

                                                                                                          No, I’m proposing all of this can be done with a very similar UX on the web and be driven by email underneath.

                                                                                                          PRs can have labels to help in figuring out what part of the repo they touch, or what state they are in. They can have CI states attached.

                                                                                                          So let’s add that to mailing list software. I explicitly acknowledge the shortcomings of mail today and posit that we should invest in these areas rather than rebuilding from scratch without an email-based foundation. But none of the problems you bring up are problems that can’t be solved with email. They’re just problems which haven’t been solved with email. Problems I am solving with email. Read my article!

                                                                                                          but then you are not using e-mail only, and will need special tooling to process the information further (eg, to limit open PRs to those that need a review, for example).

                                                                                                          So what? Why is this even a little bit of a problem? What the hell?

                                                                                                          With GitHub and the like, you can subscribe to issues and pull requests, and you’ll get notifications about those and those alone.

                                                                                                          You can’t subscribe to issues or pull requests; you have to subscribe to both, plus new releases. Mailing lists are more flexible in this respect. There are often separate thing-announce, thing-discuss (or thing-users), and thing-dev mailing lists which you can subscribe to separately depending on what you want to hear about.

                                                                                                          Like I said before, most - if not all - of that can be built on top of e-mail, but to achieve the same level of convenience, you will end up with an API that isn’t e-mail.

                                                                                                          No, you won’t. That’s simply not how this works.

                                                                                                          Look, we’re just not on the same wavelength here. I’m not going to continue diving into this ditch of meaningless argument. You keep using whatever you’re comfortable with.

                                                                                                        2. 2

                                                                                                          Your magithub workflow looks much more complicated, and you could use keybindings to plug into send-email as well.

                                                                                                          I just remembered a good illustration that might explain my stance a bit better. My wife, a garden engineer, was able to contribute to a few projects during Hacktoberfest (three years in a row now), with only a browser and GitHub for Windows at hand. She couldn’t have done it via e-mail, because the only way she can use her email is via her smartphone or Gmail’s web interface. She knows nothing else, and is not interested in learning anything else either, because these perfectly suit her needs. Yet, she was able to discover projects (by looking at what I contributed to, or have starred), search for TODOs or look at existing issues, fork a repo, write some documentation, and submit a PR. She could have done it all from a web browser, but I set up GitHub for Windows for her - in hindsight, I should have let her just use the browser. We’ll do that this year.

                                                                                                          She doesn’t know how to use the command-line, has no desire, and no need to learn it. Her email handling is… something that makes me want to scream (no filters, no labels, no folders - one big, unorganized inbox), but it suits her, and as such, she has no desire to change it in any way.

                                                                                                          She doesn’t know Emacs, or any IDE for that matter, and has no real need for them, either.

                                                                                                          Yet, her contributions were well received, they were useful, and some are still in place today, unchanged. Why? Because GitHub made it easy for newcomers to contribute. They made it so that contributing does not require anything but GitHub itself. This is a pretty strong selling point for many people: using GitHub (and similar solutions) does not affect any other tool or service they use. It’s distinct, and separate.

                                                                                                          1. 2

                                                                                                            Not all projects have work for unskilled contributors. Why should we cater to them (who on the whole do <1% of the work) at the expense of the skilled contributors? Particularly the most senior contributors, who in practice do 90% of the work. We don’t build houses with toy hammers so that your grandma can contribute.

                                                                                                            I’m not saying we shouldn’t make tools which accommodate everyone. I’m saying we should make tools that accommodate skilled engineers and build simpler tools on top of that. Thus, the skilled engineers are not slowed down and the greener contributors can still get work done. Then, there’s a path for newer users to become more exposed to more powerful tools and more smoothly become senior contributors themselves.

                                                                                                            You need to get this point down if you want me to keep entertaining a discussion with you: you can build the same easy-to-use UX and drive it with email.

                                                                                                            1. 3

                                                                                                              I’m not saying we shouldn’t make tools which accommodate everyone. I’m saying we should make tools that accommodate skilled engineers and build simpler tools on top of that.

                                                                                                              I was under the impression that git + GitHub are exactly these. Git and git send-email for those who prefer that style, GitHub for those who prefer the other. The skilled engineers can use the powerful tools they have, while those with a different skillset can use GitHub. All you need is willingness to work with both.

                                                                                                              you can build the same easy-to-use UX and drive it with email.

                                                                                                              I’m not questioning you can build something very similar, but as long as e-mail is the only driving power behind it, there will be plenty of people who will turn to some other tool. Because filtering email is something you and I can easily do, but many can’t, or aren’t willing to. Not when there are alternatives that don’t require them to do extra work.

                                                                                                              Mind you, I consider myself a skilled engineer, and I mainly use the GitHub/GitLab APIs, because I don’t have to filter e-mail, nor parse the info in it; the API serves me data I can use in an easier manner. From an integrator’s point of view, this is golden. If, say, an Emacs integration starts with “Set up your email so that mail with these properties is routed here”, that’s not a good user experience. And no, I don’t want to use my MUA to work with git, because magit is a much better, much more powerful tool for that, and I value my productivity.

                                                                                                              1. 1

                                                                                                                I’m not questioning you can build something very similar, but as long as e-mail is the only driving power behind it, there will be plenty of people who will turn to some other tool.

                                                                                                                I’m pretty sure the whole point would be that the “shiny UI” tool would not expose email to the user at all – so the “plenty of people” wouldn’t leave because they wouldn’t know the difference.

                                                                                                                1. 0

                                                                                                                  So… pretty much GitHub/GitLab/Gitea 2.0, but with the added ability to open PRs by email (to cater to that workflow), and a much less reliable foundation?

                                                                                                                  Sure. What could possibly go wrong.

                                                                                                  2. 1

                                                                                                    I don’t think you can count signing up for GitHub if you’re not counting signing up for email.

                                                                                                    If you’re using hub, it’s just hub pull-request. No context switching.

                                                                                                    1. 2

                                                                                                      If you’re counting signing up for email you have to count that for GitHub, too, since they require an email address to sign up with.

                                                                                                  3. 1

                                                                                                    Using GitHub requires pushing to a different repository and then opening the PR in the GitHub interface, which is a context change. git send-email would be equivalent to sending the PR.

                                                                                                    git send-email is only one step, akin to opening the PR; no need to push to a remote repository. And from the comfort of your development environment (Emacs, in my case).

                                                                                              1. 4

                                                                                                This article is full of great advice. Thank you for sharing!

                                                                                                I get to be happy about an article about logging, for a change \o/

                                                                                                1. 3

                                                                                                  I first read this article by the pool in Jamaica and then proceeded to reread it several times. Eventually it made its way to a semi-permanent tab for a period of months because I love it so much.

                                                                                                1. 12

                                                                                                  My daily driver is a Keyboardio Model01, and I am using an ErgoDox EZ with Gateron Browns at work (an old one, not the Shine). Both of them run Kaleidoscope, the firmware originally designed for the Model01. I uhh… customised the firmware a tiny bit. Just small things. It’s not like I’m using over a dozen plugins on my Model01, nah, why would I? :p

                                                                                                  Anyhow, my Model01 and ErgoDox sketches are all open source, and the former has a bit of documentation about how it looks and works.

                                                                                                  I also own a Shortcut prototype, and am looking forward to laying my hands on a Raise at some point. I’ll also build a trackball for myself somewhere down the road, but… that will be a while.

                                                                                                  1. 4

                                                                                                    Really love your setup. I am curious: when making modifications to the keyboard, do you write everything in C, or do you use some kind of wrapper language to make things easier to edit?

                                                                                                    1. 3

                                                                                                      I write everything in C/C++, mostly because that’s the most efficient way for me. I know the languages well enough to be comfortable with them, and to not need any abstraction over them. Besides, a lot of my customisations are just configuring plugins; there is not much to make nicer there.

                                                                                                      The problem with trying to come up with a wrapper language is that it needs to generate pretty efficient code. There are a lot of shortcuts made all through the firmware to make it all fit into 28k, with all the bells and whistles. With this restriction, it’s not easy to build a DSL on top of it.

                                                                                                  1. 17

                                                                                                      I’ve heard the “binary logs are evil!!!” mantra chanted against systemd so many times that it stopped being funny. It’s a terrible argument. With so many big players putting their logs into databases, and with the popularity of the ELK stack, it is pretty clear that storing logs in a non-plaintext format works. Way back in 2015, I wrote two blog posts about the topic.

                                                                                                      The gist of it is that binary logs can be awesome, if put to good use. That journald is not the best tool is another matter, but journald being buggy doesn’t mean binary logs are bad. It just means that journald is possibly not the most carefully engineered thing out there. There are many things to criticize about systemd and the journal, and they both have their fair share of issues, but binary storage of logs is not one of them.

                                                                                                    1. 10

                                                                                                      Okay, so can we just assume all complaints about “binary logs” are just about these binary logs and get on with things?

                                                                                                        The journald/systemd people don’t act like they have any clue what’s going on in the real world: people can’t use the tools they’re used to, and these tools evidently suck; plain text sucked less, so what’s the plan to get anything better?

                                                                                                      1. 8

                                                                                                        I don’t think that’s entirely reasonable. It’s converting a complaint about principle (“don’t do binary logs”) into a complaint about practice, and that makes a big difference. If journald is a bad implementation of an ok idea, that requires very different steps to fix than if it’s a fundamentally bad idea.

                                                                                                        What you’re describing makes sense for people on the systemd project to say (“woah, people hate our binary logs, maybe we should work on them”[0]), but not for the rest of us trying to understand things.

                                                                                                        [0] I fear they’re not saying that, as they seem somewhat impervious to feedback

                                                                                                        1. 2

                                                                                                          I feel like @geocar is against binary logs as a source format, but not as an intermediate or analytics format. Even if your application uses structured logging, it can still be stored in a text file, for example as JSON, at the source. It can be converted to a binary log later in the chain, for example on a centralized logging server, using ELK, SQL, MongoDB, Splunk or whatever. The benefit is that you keep a lot of flexibility at the source (in terms of supporting multiple formats depending on the source application) and are still able to go back to the plain text log if you encounter a problem.
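
                                                                                                              To make this concrete, here is a minimal sketch (illustrative only; the field names and file name are made up, and this is no particular product’s format) of an application emitting structured events as JSON, one object per line, into a plain text file at the source:

                                                                                                                import json
                                                                                                                import logging

                                                                                                                class JSONFormatter(logging.Formatter):
                                                                                                                    # Render each record as one JSON object per line ("JSON Lines").
                                                                                                                    def format(self, record):
                                                                                                                        return json.dumps({
                                                                                                                            "ts": self.formatTime(record),
                                                                                                                            "level": record.levelname,
                                                                                                                            "logger": record.name,
                                                                                                                            "message": record.getMessage(),
                                                                                                                        })

                                                                                                                handler = logging.FileHandler("app.log")  # stays plain text at the source
                                                                                                                handler.setFormatter(JSONFormatter())
                                                                                                                logger = logging.getLogger("app")
                                                                                                                logger.addHandler(handler)
                                                                                                                logger.setLevel(logging.INFO)
                                                                                                                logger.info("user logged in")

                                                                                                              A collector further down the chain can parse these lines and load them into ELK, SQL, or whatever binary store it prefers, while the file itself stays greppable.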

                                                                                                          1. 4

                                                                                                            I’m not even against binary logs “as a source format.”

                                                                                                                Firstly: I recognise that “complaints about binary logs” is directed at journald, and isn’t the same thing as complaints about logs in some non-text format.

                                                                                                                I think getting systemd in deep forced sysadmins to retool on top of journald, and that hurt a lot for very little gain (if there was any gain at all - and for most workflows, I suspect there wasn’t). This has almost certainly put people off of binary logs, and has almost certainly got people complaining about them.

                                                                                                            To that end: I don’t think those feelings around binary logs are misplaced.

                                                                                                            Some humility is [going to be] required when trying to win people over with binary logs, but appropriating the term “binary logs” to include tools the sysadmin chooses is like pulling the rug out from under somebody, and that’s not helping.

                                                                                                            1. 2

                                                                                                                  Thank you very much for clarifying. I agree that forcing sysadmins “to retool on top of journald” hurts.

                                                                                                          2. 2

                                                                                                            No, it’s recognising that when enough people are complaining about “the wrong thing”, telling them it’s the wrong thing doesn’t help them. It just causes them to dig in.

                                                                                                            What’s the right thing?

                                                                                                            I think that’s the point of the bug…

                                                                                                          3. 1

                                                                                                            Okay, so can we just assume all complaints about “binary logs” are just about these binary logs and get on with things?

                                                                                                            As soon as the complaints start to be about journald and not “binary logs”, and the distinction is made explicit, yeah, we can. It’s been four years, so I’m not going to hold my breath.

                                                                                                            and these tools evidently suck

                                                                                                            For a lot of use cases, they do not suck. For many, they are a vast improvement over text logs.

                                                                                                            what’s the plan to get anything better?

                                                                                                            Stop logging unstructured text to syslog or stdout, and either log to files or to a database directly. Pretty much what you’ve been (or should have been) doing the past few decades, because both syslog and stdout are terrible interfaces for logs.

                                                                                                            1. 9

                                                                                                              As soon as the complaints start to be about journald and not “binary logs”, and the distinction is made explicit, yeah, we can. It’s been four years, so I’m not going to hold my breath.

                                                                                                                People complain about things that hurt, and between Windows and journald, it should not be a surprise that “binary logs” are getting the flak. The journald people have a lot of outreach work to do if they want to fix that.

                                                                                                              For a lot of use cases, [the tools] do not suck. For many, they are a vast improvement over text logs.

                                                                                                                And yet, when programmers make mistakes implementing them, sysadmins are left cleaning up after them.

                                                                                                              Text logs have the massive material advantage that the sysadmin can do something with them. Binary logs need tools to do things, and the journald implementation has a lot of work to do.

                                                                                                              Most of the “big players” use a transparent structuring layer rather than making binary logs their golden source of knowledge. This allows people to get a lot of the advantages of binary logs with few disadvantages (and given how cheap disk is, the price is basically zero).

                                                                                                              Stop logging unstructured text to syslog or stdout, and either log to files or to a database directly. Pretty much what you’ve been (or should have been) doing the past few decades, because both syslog and stdout are terrible interfaces for logs.

                                                                                                              These are directions to developers, not to sysadmins. Sysadmins are the ones complaining.

                                                                                                                Are we really to interpret this as “refuse to install any software that doesn’t follow this rule”?

                                                                                                                I’m willing to whack some perl together to get the text log data queryable for my business, but if you give me a binary turd, I need tools and documentation and advice.

                                                                                                              1. 4

                                                                                                                Most of the “big players” use a transparent structuring layer rather than making binary logs their golden source of knowledge.

                                                                                                                What do you mean by a “transparent structuring layer”?

                                                                                                                1. 2

                                                                                                                  Something to structure the plain text logs into some tagged format (like JSON or protocol buffers).

                                                                                                                    Splunk, for example, lets users define a bunch of regular expressions to extract these tags.
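
                                                                                                                    As a sketch of the idea (the log format, regex, and field names below are made up for illustration; this is not Splunk’s actual mechanism):

                                                                                                                      import json
                                                                                                                      import re

                                                                                                                      # A made-up pattern for classic sshd-style syslog lines.
                                                                                                                      LINE_RE = re.compile(
                                                                                                                          r"(?P<ts>\w+ +\d+ [\d:]+) (?P<host>\S+) sshd\[(?P<pid>\d+)\]: (?P<message>.*)"
                                                                                                                      )

                                                                                                                      with open("auth.log") as src:
                                                                                                                          for line in src:
                                                                                                                              m = LINE_RE.match(line)
                                                                                                                              if m:
                                                                                                                                  # Ship the tagged record to a database or ELK; the
                                                                                                                                  # plain text file itself is never modified or replaced.
                                                                                                                                  print(json.dumps(m.groupdict()))

                                                                                                                    The structuring is “transparent” in that the tags are derived from the text on demand, while the text remains the golden source.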

                                                                                                                  1. 2

                                                                                                                    Got it now. Thanks for clarifying!

                                                                                                                2. 0

                                                                                                                  Text logs have the massive material advantage that the sysadmin can do something with them. Binary logs need tools to do things, and the journald implementation has a lot of work to do.

                                                                                                                  For some values of “can do”, yes. Most traditional text logs are terrible to work with (see my linked blog posts; I’m not going to repeat them here). Besides, as long as your journal files aren’t corrupt (which happens less and less often these days, I’m told), you can just use journalctl to dump the entire thing, and grep in the logs, just like you grep in text files. Or filter them first, or dump them as JSON and use jq, and so on. Plenty of options there.
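
                                                                                                                  For instance, the JSON output is easy to consume from a script (a small sketch, assuming a systemd machine with journalctl on the PATH; the priority filter is just an example):

                                                                                                                    import json
                                                                                                                    import subprocess

                                                                                                                    # journalctl -o json emits one JSON object per line.
                                                                                                                    proc = subprocess.Popen(
                                                                                                                        ["journalctl", "-o", "json", "--no-pager"],
                                                                                                                        stdout=subprocess.PIPE,
                                                                                                                        text=True,
                                                                                                                    )
                                                                                                                    for line in proc.stdout:
                                                                                                                        entry = json.loads(line)
                                                                                                                        if entry.get("PRIORITY") == "3":  # syslog priority 3 = err
                                                                                                                            print(entry.get("_SYSTEMD_UNIT"), entry.get("MESSAGE"))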

                                                                                                                  Most of the “big players” use a transparent structuring layer rather than making binary logs their golden source of knowledge.

                                                                                                                  Clearly our experience differs. Most syslog-ng PE customers (and customers of related products) made binary logs (either PE’s LogStore, or an SQL database) their golden source of knowledge. A lot of startups - and bigger businesses - outsourced their logging to services like loggly, which are a black box like binary logs.

                                                                                                                  These are directions to developers, not to sysadmins. Sysadmins are the ones complaining.

                                                                                                                  These are directions to sysadmins too. The majority of daemons support logging to files, or use a logging framework that can be set up to log straight to a central collector or to a database. For a huge list of applications, bypassing syslog has been possible since day one. Apache, Nginx, and pretty much any Java application can all do this, just to name a few. There are some notable exceptions, such as postfix, which will always use syslog, but there are ways around that too.

                                                                                                                  You can bypass the journal with most applications, some support that easily, some require a bit more work, but it has been doable by sysadmins all these years. I know, because I’ve done it without modifying any code.

                                                                                                                  I’m willing to whack some perl together to get the text log data queryable for my business, but if you give me a binary turd, I need tools and documentation and advice.

                                                                                                                  With the journal, you have journalctl, which is quite well documented.

                                                                                                                  1. 2

                                                                                                                    Clearly our experience differs. Most syslog-ng PE customers…

                                                                                                                    Do you believe that syslog-ng has even a significant market share among users responsible for logging? Even excluding SMB/VSMB?

                                                                                                                    outsourced their logging to services like loggly, which are a black box like binary logs.

                                                                                                                    I would be surprised to find that most people that use loggly don’t keep any local syslog files.

                                                                                                                    What exactly are you arguing here?

                                                                                                                    Plenty of options there.

                                                                                                                    And?

                                                                                                                    You can bypass the journal with most applications, some support that easily, some require a bit more work, but it has been doable by sysadmins all these years. I know, because I’ve done it without modifying any code.

                                                                                                                    Right, and the goal is to get people using journald, right?

                                                                                                                    If journald doesn’t want to be used, what’s its reason for existing?

                                                                                                                    1. 0

                                                                                                                      Do you believe that syslog-ng has even a significant market share among users responsible for logging? Even excluding SMB/VSMB?

                                                                                                                      Yes.

                                                                                                                      I would be surprised to find that most people that use loggly don’t keep any local syslog files.

                                                                                                                      Most I’ve seen only keep local logs because they’re too lazy to clean them up, and just leave them to the default logrotate. In the past… six or so years, all loggly (& similar) users I worked with never looked at their text logs, if they had any to begin with.

                                                                                                                      Right, and the goal is to get people using journald, right?

                                                                                                                      For systemd developers, perhaps. I’m not one of them. I don’t mind the journal, because it’s been working fine for my needs. The goal is to show that you can bypass it, if you don’t trust it. That you can get to a state where your logs are processed and stored efficiently, in a way that is easy to work with - easier than plain text files. Without using the journal. But with it, it may be slightly easier to get there, because you can skip the whole “getting around it” dance for those applications that insist on using syslog or stdout for logging.

                                                                                                                      1. 2

                                                                                                                        Do you believe that syslog-ng has even a significant market share among users responsible for logging? Even excluding SMB/VSMB?

                                                                                                                        Yes.

                                                                                                                        I think you’re completely wrong.

                                                                                                                        There are a lot of Debian/RHEL/Ubuntu/*BSD (let alone Windows) machines out there, and they’re definitely not using syslog-ng by default…

                                                                                                                        Debian publishes install information: syslog-ng versus rsyslogd. It’s no contest.

                                                                                                                        A big bank I’m working with has zero: all rsyslogd or Windows.

                                                                                                                        Also, the world is moving to journald…

                                                                                                                        So, why exactly do you believe this?

                                                                                                                        In the past… six or so years, all loggly (& similar) users I worked with never looked at their text logs, if they had any to begin with.

                                                                                                                        Most I’ve seen only keep local logs because they’re too lazy to clean them up, and just leave them to the default logrotate.

                                                                                                                        Okay, but why do you think this contradicts what I say?

                                                                                                                        You’re talking about people who have built a custom (text based!) logging system, streaming via the syslog protocol. The golden source was text files.

                                                                                                                        The goal is to show that you can bypass it, if you don’t trust it.

                                                                                                                        Ah well, this is a very different topic than what I’m replying to.

                                                                                                                        I can obviously bypass it by not using it.

                                                                                                                        I was simply trying to explain why people who complain about binary logging aren’t ignorant/crackpots, and are complaining about something important to them.

                                                                                                                        1. 1

                                                                                                                          I think you’re completely wrong.

                                                                                                                          I think I know better how many syslog-ng PE customers there are out there (FTR, I work at BalaBit, who make syslog-ng). It has a significant market share. Significant enough to be profitable (and growing), in an already crowded market.

                                                                                                                          A big bank I’m working with has zero: all rsyslogd or Windows.

                                                                                                                          …and we have big banks who run syslog-ng PE exclusively, and plenty of other customers, big and small.

                                                                                                                          Also, the world is moving to journald…

                                                                                                                          …and syslog-ng plays nicely with it, as does rsyslog. They nicely extend each other.

                                                                                                                          You’re talking about people who have built a custom (text based!) logging system, streaming via the syslog protocol. The golden source was text files.

                                                                                                                          I think we’re misunderstanding each other… What I consider the golden source may be very different from what you consider. For me, the golden source is what people use when they work with the logs. It may or may not be the original source of it.

                                                                                                                          I don’t care much about the original source (unless it is also what people query), because that’s just a technical detail. I don’t care much how logs get from one point to another (though I prefer protocols that can represent structured data better than the syslog protocol). I care about how logs are stored, and how they are queried. Everything else is there to serve this end goal.

                                                                                                                          Thus, if an application writes its logs to a text file, which I then process and ship to a data warehouse, I consider that to be binary logs, because that’s what they will ultimately end up as. Since the warehouse is the interface, the original source can be safely discarded once it has shipped. As such, I can’t consider it the golden source.

                                                                                                                          If we restricted “binary logs” to stuff that originated as binary from the application, then we should not consider the Journal to use binary logs either, because most of its sources (stdout and syslog) are text-based. If the Journal uses binary logs, then anything that stores logs as binary data should be treated the same. Therefore, everything that ends up in a database ultimately makes use of binary logs, even if its original form, or the transport it arrived through, was text.

                                                                                                                          (Transport and storage are two very different things, by the way.)

                                                                                                                          I was simply trying to explain why people who complain about binary logging aren’t ignorant/crackpots, and are complaining about something important to them.

                                                                                                                          I never said they are. All I said is that storing logs in binary is not inherently evil, linked to blog posts where I explain pretty much the same thing, and give examples for how binary storage of logs can improve one’s life. (Ok, I also asserted that syslog and stdout are terrible interfaces for logs, and I maintain that. This has nothing to do with text vs binary though - it is about free-form text being awful to work with; see the linked blog posts for a few examples why.)

                                                                                                                          1. 1

                                                                                                                            I think I know better how many syslog-ng PE customers there are out there

                                                                                                                            Or we just have different definitions of significant.

                                                                                                                            Significant enough to be profitable (and growing), in an already crowded market.

                                                                                                                            Look, I have an advertising business that makes enough money to be profitable, and is growing, but I’m not going to say I have a “significant” market share of the digital advertising business.

                                                                                                                            But whatever.

                                                                                                                            All I said is that storing logs in binary is not inherently evil

                                                                                                                            And I didn’t disagree with that.

                                                                                                                            If you try and re-read my comments knowing that, maybe it’ll be more clear what I’m actually pointing to.

                                                                                                                            At this point, we’re just talking past each other, and there’s no point in that.

                                                                                                            2. 2

                                                                                                Thanks for linking to the blog posts; they were most informative.

                                                                                                            1. 8

                                                                                              I like small things. Heck, I work on keyboard firmware that has to fit into 32k, including the boot loader. But a tiny UNIX-like OS for desktop use is, I think, a bit pointless. As an experiment or a learning tool - yeah, that can work. But then you’re pretty much limited to VMs or a very, very tiny subset of existing hardware. Neither of those is all that useful past the learning stage. Besides, modern hardware typically requires more kernel code, so if you skip old hardware and target only modern machines, you won’t be saving all that much; going the other way around would yield a smaller system.

                                                                                                              Mind you, there’ve been UNIX ports to 8-bit AVR, which is even smaller than a floppy disk. But it’s a tad limited, and kind of ancient.

                                                                                              With that in mind, I’d suggest targeting something other than x86/x86_64. Something smaller, like AVR, or a 16-bit MCU. They’re more specialized, require less code to support, and so on. That sounds like a far more interesting project than targeting the desktop.

                                                                                                              1. 2

                                                                                                                The UDP port knocking part is funky, in a good way. It is the reason I’m going to play with Oxy a little.

                                                                                                Mind you, port knocking daemons already exist that do something similar: run some code whenever someone knocks on a port. That code can open holes in the firewall, to allow access to the hidden service, from the knocking source. It shouldn’t be too hard to restrict things further, so that not only the knocked ports matter, but the contents of the knocks too. This feature does not need to live in the daemon itself - still, it is novel for Oxy to include such a feature.
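
                                                                                                A back-of-the-envelope sketch of that content-matching idea (illustrative only - the port, secret, and firewall command are made up, and this is not how Oxy implements it):

                                                                                                  import hmac
                                                                                                  import socket
                                                                                                  import subprocess

                                                                                                  KNOCK_PORT = 40022             # made-up knock port
                                                                                                  SECRET = b"not-a-real-secret"  # a real scheme would use a one-time token

                                                                                                  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                                                                                                  sock.bind(("0.0.0.0", KNOCK_PORT))

                                                                                                  while True:
                                                                                                      payload, (addr, _) = sock.recvfrom(64)
                                                                                                      # Check the knock's contents, not just its arrival on the port.
                                                                                                      if hmac.compare_digest(payload, SECRET):
                                                                                                          # Open a firewall hole for the knocking source only.
                                                                                                          subprocess.run(
                                                                                                              ["iptables", "-I", "INPUT", "-p", "tcp", "--dport", "22",
                                                                                                               "-s", addr, "-j", "ACCEPT"],
                                                                                                              check=True,
                                                                                                          )

                                                                                                A real daemon would use a replay-resistant token (say, a MAC over a timestamp) rather than a static secret, but the shape would be the same.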

                                                                                                                1. 12

                                                                                                                  It’s just installing software. It’s not complicated unless you make it complicated.

                                                                                                  Yeah, no. It would be great if this were the case - if preparing an image were so simple, and any complication were our fault - but alas, we are very far from that. Things would be easier if we didn’t want to share parts of our image - but we do. Or if we never wanted to upgrade. Or wanted to upgrade a component in one image, but not the other.

                                                                                                  While I’ll be the last to praise Docker, it does bring some useful tools to the table. There are other solutions that provide similar features, of course, but asserting that preparing an image is only complicated if we make it so is shortsighted.

                                                                                                                  I think you could reimplement it easily yourself with a small shell script and some calls to mount; but I haven’t bothered.

                                                                                                                  I’d encourage the author to try. It would be an educational endeavour.

                                                                                                                  And if you want to do some kind of change tracking as you build the system, you should keep it at the proper layer, … [this-and-that should live here-and-there]

                                                                                                                  If only things were this easy! Again, I’d encourage the author to try doing this with anything non-trivial, and maintain it for a few months. Naive ideas like this fall apart very quickly. A shame, really, but they do.

                                                                                                                  For most purposes, the main interesting thing that Docker containers provide is isolated networking. That is, Docker containers prevent the application inside the container from binding ports on the external network interfaces. What else prevents applications from using ports? The firewall that you already have installed on your server. Again, pointless abstraction to address already-solved problems.

                                                                                                  As the author correctly states, such isolation is kinda pointless from a security point of view. But it is not security this isolation is useful for: it is the isolation itself. It allows the container to not care much about the world outside. It makes it easier to have many small networks, each living in its own little world, where I usually don’t need to adjust the global firewall on the host when I want to launch a new instance. I don’t need a global state that knows everything; I can have little components. That is a lot easier to maintain in the long run. Think of small, composable functions vs a huge single-function state machine.