1. 14

    This blog post: a case study in being a jerk to someone who is being a jerk, only since Linus is a “jerk” you get off scot-free. Unsurprisingly, this is written by someone who has never contributed to the Linux kernel and who was uninvolved in the discussion he’s picking apart.

    The revised email at the end does lose information. Contrary to what hipsters write blog posts complaining about, 99% of Linus’s emails are cordial. The information that’s lost is the conveyance that this is more important to Linus than most subjects.

    1. 19

      This comment: a case study in being a jerk to someone who is being a jerk to a jerk.

      In all seriousness, I don’t believe that Gary Bernhardt is being a jerk at all. There’s a line between being critical of a piece of work and calling someone brain damaged, and hopefully, we all can see the difference.

      Aside: I love when people use the word “hipster” to invalidate other viewpoints. Apparently, there are two modes of being: Being Right and Being A Hipster.

      1. 2

        To the unserious comment, I don’t think I was being a jerk. I called him a jerk, which I guess you could argue is a jerk move under any circumstances, but if I’m being a jerk then so is Gary.

        To the serious comment, I just want to note that “brain damaged” is a meme among old school hackers which isn’t as strong of a word as you think.

        To the aside, I don’t use hipster as an insult or to imply wrongness, but I do use it to invalidate his point. Gary is a Ruby developer. Linus is a kernel developer. The worlds are far removed from each other.

        1. 47

          I’ve put tens of thousands of lines of C into production, including multiple Linux kernel drivers. In one case, those kernel drivers were critical-path code on a device used in strain testing the wings of an airplane that you might’ve flown in by now.

          I’m not a stranger to the kernel; I just left that world. Behavior like Linus’ in that email was part of the reason, though far from the only reason.

          With all of that said: having written a bunch of systems software shouldn’t be a prerequisite for suggesting that we avoid attacking people personally when they make programming mistakes, or what we suspect are programming mistakes.

          1. 10

            Exactly. I’ve also met many people that do high-performance, embedded, and/or safety-critical code in C that are more polite in these situations. Linus’ attitude is a separate issue from what’s necessary to evaluate and constructively criticize code.

          2. 16

            “brain damaged” is a meme among old school hackers which isn’t as strong of a word as you think.

            Yikes. That “meme” is a whole other thing I don’t even care to unpack right now.

            I don’t use hipster as an insult or to imply wrongness, but I do use it to invalidate his point. Gary is a Ruby developer. Linus is a kernel developer. The worlds are far removed from each other.

            Gotcha. Kernel developer == real old-school hacker. Ruby developer == script kiddie hipster. Are we really still having this argument in 2018?

            1.  

              Yikes. That “meme” is a whole other thing I don’t even care to unpack right now.

              “Brain damaged” is a term from back in the Multics days; Linus didn’t make that one up for the occasion. If you’re unfamiliar with the “jargon file”, aka the hacker dictionary, you can see the history of this particular term here: http://www.catb.org/jargon/html/B/brain-damaged.html

              1. 1

                Yikes. That “meme” is a whole other thing I don’t even care to unpack right now.

                Listen, cultures are different and culture shock is a thing. I’m in a thread full of foreigners shocked that customs are different elsewhere. You better just take my word for it on “brain damaged” because you clearly aren’t a member of this culture and don’t know what you’re talking about.

                Gotcha. Kernel developer == real old-school hacker. Ruby developer == script kiddie hipster. Are we really still having this argument in 2018?

                How about you quit putting words in my mouth? Do you really need me to explain the world of difference between Ruby development and kernel hacking? In 2018? It’s not a matter of skill. Gary is great at what he does, but it has almost nothing to do with what Linus does. The people who surround Gary and the people who surround Linus are mutually exclusive groups with different cultural norms.

                1. 18

                  You can’t use “it’s our culture” as a panacea; calling someone an idiot, moron etc. is a deliberate attempt to hurt them. I guess if what you’re saying is, “it’s our culture to intentionally hurt the feelings of people who have bad ideas,” well, then we might be at an impasse.

                  1. 20

                    The kind of toxic exclusivity and “old school hacker culture” elitism that you’re spouting in this thread is not what I expect to see on Lobsters. It makes me genuinely sad to see somebody saying these things, and it also makes me apprehensive of ever being involved in the same project or community as you. Software development today is not what it was 20, or even 5, years ago. Today it is far more about people than it is about software or technology. You may not like this, but it is the reality.

                    1. 7

                      Lobste.rs has always had a few vocal people like this in threads. But note that they’re in the minority and generally are not upvoted as much as the people who aren’t elitist, racist, or just generally being a jerk.

                      1. 5

                        “old school hacker culture” elitism

                        Nearing 40, I can agree to be called old. But not elitist.
                        And I cannot accept being associated with racism.

                        Not all software developers are hackers. Not all hackers are software developers.

                        Is stating this “elitism”? Is it “racism”? Is it being a “jerk”?
                        Or is it just using terms properly?

            2. 5

              The information that’s lost is the conveyance that this is more important to Linus than most subjects.

              So add “I want to stress that this issue is really important to me” at the end of the revised email.

              I think that making an issue out of this particular information being lost is missing the point - that it would be possible to say the same thing as Linus did without being abusive.

              Contrary to what hipsters write blog posts complaining about

              You’re falling into the same trap that the post discusses. This derision isn’t necessary to make your point, and doesn’t make it any stronger - it just adds an unnecessary insult.

              1. 9

                Contrary to what hipsters write blog posts complaining about, 99% of Linus’s emails are cordial.

                That may well be true, but do we need that last 1% in a professional setting?

                1. 9

                  (I am not defending Linus’ behaviour here, please don’t put those words in my mouth.)

                  I strongly take issue with American ideas of “professionalism”, and even more so with the idea that we get to decide whether this project is “a professional setting” or not. What exactly makes this a “professional setting”? What is a “professional setting”? Why do we hold some interactions to higher standards than others?

                  I suspect “money changing hands” is the thing that makes this “a professional setting”, and that grinds my gears even further. Why are we supposed to hold ourselves to different standards just because some people are getting paid for doing it?

                  1. 3

                    Right, “professionalism” implies that you only need to be nice to somebody when you want them to do something for you or want their money. This should actually be about “respect”, whether or not you want a Linux contributor to do something for you or want their money.

                  2. 13

                    The Linux kernel is not a professional setting. Besides, I argue that the 1% is useful, even in a professional setting - sometimes strong words are called for. I’ll be That Guy and say that people should grow a thicker skin, especially people who weren’t even the subject of the email and have never been involved in kernel development.

                    1. 13

                      If I look at who the contributors to the Linux kernel are, it would certainly appear to be a professional endeavor.

                      A large chunk of contributions to the kernel are made by people who are getting paid by the companies they work for to contribute. Sounds like a professional setting to me.

                      1. 4

                        Linux development is only “a professional endeavour” (which is a phrase I have strong issues with, see above) because some people decided to build their businesses in Linus’ craft room. We can like or dislike Linus’ behaviour, but we don’t get to ascribe “professionalism” or lack thereof (if there even is such a thing) to Linus’ work or behaviour, or that of any of the contributors.

                        Even if “professionalism” is an actual thing (it’s not; it’s just a tool used by people in power to keep others down) it’s between the people doing the paying, and the people getting the pay, and has nothing to do with any of us.

                        This idea that people should behave differently when there’s money involved is completely offensive to me.

                        1. 7

                          But it’s not. It’s a collaboration between everyone, including professionals and hobbyists. The largest group of kernel contributors are volunteers. On top of that, Linus doesn’t have to answer to anyone.

                          1. 8

                            So, having a hobbyist involved means that you can be a dickhead? Is that the conclusion that should be drawn from your statements?

                            1. 3

                              No. I’m saying that Linus is not a dickhead, Linux is not a professional endeavour, and neither should be held to contrived professional standards.

                              1. 2

                                “I’m saying that Linus is not a dickhead”

                                His comments are proving otherwise, given that the main article shows the same information could’ve been conveyed without all the profanity, personal insults, and so on. He must be adding that fluff because he enjoys it or has self-control issues. He’s intentionally or accidentally a dick. I say that as a satirist who’s a dick to people that give me headaches in real life. Although it doesn’t take one to know one, being someone who’s always countering dicks and assholes with some dickish habits of his own makes what Linus is doing more evident. If there’s no mental illness, there’s little excuse past him not giving a shit.

                                1. 5

                                  “doesn’t behave according to my cultural norms” == “mental illness”

                                  Seriously?

                                  I would really appreciate it if you could stop expecting that your cultural norms have to apply to everyone on the planet.

                                  1. 1

                                    I’m identifying the cultural norm of being an asshole, saying it applies to him at times, and saying the project would benefit if he knocked it off. I’m not forcing my norms on anyone.

                                    Your comment is more amusing given that someone with Linus’s norms might just reply with profanity and personal insults. Then, you might be complaining about that. ;)

                                    1.  

                                      Then, you might be complaining about that. ;)

                                      No, I’d just accept that people from different cultures behave differently.

                                      Let’s face it, most people hate getting told they are wrong, regardless of the tone. That’s just how we are as humans.

                                      Taking offense at the tone just seems very US-specific, as they are accustomed to receiving some special superpowers in a discussion by uttering “I’m offended”.

                                      Some of the best feedback I received in my life wouldn’t be considered acceptable by US standards and I simply don’t care – I just appreciate the fact that someone took his time to spell out the technical problems.

                                      Here is a recent example: https://github.com/rust-lang/cargo/pull/5183#issuecomment-381449546

                                      1.  

                                        Then you and I aren’t that different in how we look at stuff. I’ve just layered on top of that a push for project owners to do what’s most effective on the social side.

                                  2. 2

                                    I believe it’s intentional. He does not want to be bothered by nurturing the newbs, so he deters them from going to him directly and forces them to do their learning elsewhere.

                                  3. 2

                                    These numbers suggest it is a professional endeavor:

                                    https://thenewstack.io/contributes-linux-kernel/

                                    1. 2

                                      Those numbers just break down the professionals involved, and don’t consider the volunteers. If you sum the percentages in that article you get around 40%. Even accounting for smaller companies that didn’t make the top N companies, that’s a pretty big discrepancy.

                            2. 6

                              Linus himself is working in a professional capacity. He’s employed by the Linux Foundation to work on Linux. The fact he is employed to work on an open source project that he founded doesn’t make that situation non-professional.

                        1. 4

                          I did this in Python recently, maybe someone will find it interesting:

                          https://git.sr.ht/~sircmpwn/srht/tree/srht/markdown.py#n55

                          1. 4

                            The global menu bar issue is something I deeply care about. Linux has a very good global menu bar, but people are either moving away from it or do not know that it is available. Just use Ubuntu 16.04 and see it in action.

                            Since 2014, Ubuntu has modified GTK and Qt so that all desktop applications make use of a global menu bar. Everything now works perfectly. Ubuntu 16.04 with the default Unity desktop is a very usable desktop. All my non-techy acquaintances like it. These modifications have, however, been refused upstream, because they do not fit the GNOME 3 paradigm (most of which I like).

                            I really do not understand why people are against global menus. They are better, scientifically proven better. And they save a lot of vertical space, which on modern super-wide monitors is a precious resource.

                            Why doesn’t the global menu bar receive the love it deserves?

                            1. 2

                              They are better, scientifically proven better

                              Citation needed

                              1. 3

                                From Fitts’ law [1] and the Steering Law [2] it follows that global menu bars are much easier to access.

                                Fitts’ law tells you that global menu bars are better because they can be reached by moving the cursor to an infinitely big target [3]. In other words, you can throw your mouse pointer somewhere up and it will surely and easily reach the global menu bar.

                                Steering Law tells you that navigating along/inside a vertical or horizontal tunnel is hard if the tunnel is thin (hello, badly implemented JS menus that disappear when you move to a submenu). In the case of a global menu bar, navigating it is easy because it is effectively infinitely tall: just push your cursor slightly up.

                                Global menu bars are easier to access, but are they faster to access? This is a good question, because, on average, the global menu bar is farther away than the local menus. It turns out that, on average, they are equally fast to access [4]. Windows requires more aiming precision (slower) but less travel distance (faster). MacOS requires less aiming precision (faster) but more travel distance (slower).

                                All things being equal, simplicity should always be preferred, because it means that more people can fruitfully use a system, for example people with disabilities.

                                1. P.M. Fitts: The information capacity of the human motor system in controlling the amplitude of movement. J. Exp. Psychol. 47, 381–391 (1954)
                                2. J. Accot, S. Zhai: Beyond Fitts’ law: Models for trajectory-based HCI tasks. In: CHI 1997: Proceedings of the SIGCHI conference on Human factors in computing systems, pp. 295–302. ACM, New York (1997)
                                3. A. Cockburn, C. Gutwin, S. Greenberg: A predictive model of menu performance. In: CHI 2007: Proceedings of the SIGCHI conference on Human factors in computing systems, pp. 627–636. ACM, New York (2007)
                                4. E. McCary, J. Zhang. GUI Efficiency Comparison Between Windows and Mac. In: HIMI 2013: Human Interface and the Management of Information. Information and Interaction Design pp 97-106, Springer (2013)
                                1. 1

                                  This makes sense, thank you for the detailed response.

                              2. 2

                                how does it play with focus modes other than click-to-focus? e.g. in focus-follows-mouse, if you have to move your cursor through another window en route to the global bar, it would rebind to the new application.

                                1. 2

                                  Focus-follows-mouse has a delay before switching applications. Move across fast, no app switching. Or go around (fairly easy with non-overlapping windows).

                                  1. 1

                                    I haven’t tried these global menus in Linux, as an Enlightenment user, but how long is the delay and is it configurable?

                                    I’d tie it to motion, because I appreciate my desktop being fast and all kinds of stalls annoy me. I’d imagine this to be very true if I have to touch a pointer device.

                                    1. 1

                                      It appears to be a hard-coded 25ms delay, at least in GNOME shell. Others may implement it differently.

                              1. 6

                                A much simpler solution that might often be enough is xxd -i.

                                1. 4

                                  Author of koio here.

                                  objcopy is another simpler, good choice. I need koio to provide a stdio-like API and a consistent external baseline for how embedding files is done in chopsui so that it can be consistent between chopsui, third-party chopsui libraries, chopsui applications, and end-user extensibility. To that end I found it more appropriate to write a tool like this.

                                1. 4

                                  I’d like to spend one single day watching @SirCmpwn work. I don’t understand how someone with a regular job can get so much done in their free time.

                                  Keep up the great work! I’ll definitely migrate to sway as soon as the port to wlroots is finished.

                                  1. 4

                                    I don’t understand how someone with a regular job can get so much done in their free time.

                                    With lots of help!

                                  1. 3

                                    It takes an awful lot of reading to work out what wlc even is. Even then I’m still not sure what it (or its replacement) does!

                                    1. 5

                                      Author here. There are links in the article to each project that describe what they are, but…

                                      Wayland is a replacement for X11; they are two protocols used for graphical sessions on Linux and other Unices. wlc and wlroots are both libraries that implement some of the functionality a Wayland server needs.

                                    1. 7

                                      Why not just directly write man(7), which is all this tool produces? Or use the existing perlpod, pandoc, docbook, lowdown, rst2man, or any other tool doing exactly the same thing from diverse formats?

                                      Because I’m sure the world needs more opaque, un-indexable manpages.

                                      (Edit: to clarify, use mdoc(7).)

                                      1. 5

                                        Author here. Did you even read the blog post? I answered all of these questions.

                                        perlpod is built on a mountain of perl, and pandoc on a mountain of haskell. lowdown is a Markdown implementation, and Markdown and roff are mutually exclusive. RST and roff are mutually exclusive. I spoke about docbook directly in my article (via asciidoc, which is a docbook frontend). I also directly addressed mdoc.

                                        Man pages are already being indexed. If you search the web for “man [anything]” you’ll find numerous websites which scrape packages and convert the roff into HTML.

                                        1. 1

                                          Thanks for your hack. It’s a good candidate for a port to my little OS.

                                          A couple of questions:

                                          • have you considered avoiding the bold markers around man page refs, as you already have the parentheses to identify the reference?
                                          • also section titles have conventional names: what about omitting the starting sharp to mark them as titles?
                                          • what about definition lists? (I know they are an HTML thing, but they can be useful to describe options for example)
                                          • I know tables are the most difficult format to express in a readable source form, but what alternatives did you consider and why did you discard them?

                                          And btw… Thanks again!

                                          1. 2

                                            Glad you like it!

                                            have you considered avoiding the bold markers around man page refs, as you already have the parentheses to identify the reference?

                                            This is an interesting thought. https://todo.sr.ht/~sircmpwn/scdoc/12

                                            also section titles have conventional names: what about omitting the starting sharp to mark them as titles?

                                            I’m not fond of this idea. Given that lots of man pages will need section titles which fall outside of the conventional names, and that I want all headers to look the same, this isn’t the best design imo.

                                            what about definition lists? (I know they are an HTML thing, but they can be useful to describe options for example)

                                            man pages do “definition lists” with borderless tables, which are possible to write with scdoc like this

                                            |[ *topic*
                                            :[ definition
                                            |  *topic*
                                            :  definition
                                            # etc
                                            

                                            I know tables are the most difficult format to express in a readable source form, but what alternatives did you consider and why did you discard them?

                                            The main approach I’ve seen elsewhere is trying to use something resembling ascii art to make tables look like tables in the source document. I’ve never been fond of this because you then have to do annoying edits when updating the table to keep all of the artsy shit intact, which in addition to being just plain annoying can also bloat your diffs, lead to more frequent merge conflicts, etc.

                                            An alternative some formats have used is to make aligning your columns optional, but still using an artsy-fartsy kind of style. I figure that if you’re going to make aligning the columns optional you no longer have any reason to require a verbose format like that. So I invented something more concise.

                                            Also, the troff preprocessor used for tables supports column alignment specifiers and various border styles, which I wanted to expose to the user in a concise way. Other plaintext table formats often have this feature, but never concisely.
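
                                            For readers who haven’t met it, the preprocessor in question is tbl(1); a hand-written sketch of its input (an illustration, not actual scdoc output) shows the alignment specifiers and border styles being referred to:

```roff
.\" tbl(1) input: "allbox" draws a border around every cell; in the
.\" format lines, l/c/r align a column left/center/right and a
.\" trailing "b" makes it bold.  Pipe through tbl before troff.
.TS
allbox tab(|);
cb cb
l r.
Name|Size
foo|12
bar|3
.TE
```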

                                            1. 1

                                              man pages do “definition lists” with borderless tables

                                              Do you think you could render something like this with scdoc in a source-readable way http://man7.org/linux/man-pages/man8/parted.8.html (see section OPTIONS and COMMAND)?

                                              The main approach I’ve seen elsewhere is trying to use something resembling ascii art to make tables look like tables in the source document.

                                              Actually it was what I was thinking about. You raise a good point, but my counter-argument is that manual pages are (hopefully) read more often than they are written. But I admit that my goal is people using cat to read manual pages by default, so I can see how in a more conventional system using troff people most often read a rendered page, thus the annoyance is pointless. OTOH, it should be relatively easy to write a tool that takes an scdoc document as input and outputs another scdoc document where tables are automatically aligned, removing the annoyance of aligning the cells while writing.

                                              Having said that, I find your table syntax nice.
                                              I wonder if one could nest tables (I mean, put a table in a cell). Also, you organize the table by rows, but given the format, some tables might benefit from being organized by column.

                                              1. 2

                                                Do you think you could render something like this with scdoc in a source-readable way http://man7.org/linux/man-pages/man8/parted.8.html (see section OPTIONS and COMMAND)?

                                                You don’t actually even need tables for this. scdoc preserves your indent. https://sr.ht/I0g7.txt

                                                I wonder if one could nest tables (I mean put a table in a cell). Also, you organize the table by rows, but given the format, some table might benefit from being organized by column.

                                                I think nested tables is a WONTFIX. Also not sold on column-oriented tables. IMO man pages should be careful to keep their tables fairly narrow to stay within 80 characters.

                                                1. 1

                                                  Wow, that’s really readable!

                                                  Fine for nested tables. Just to be sure, let me explain what I meant by column-oriented (which, just like nested tables, might or might not be a good idea): suppose you want to create something like

                                                  English    Italian    Swahili
                                                  Hello!     Ciao!      Habari?
                                                  Tour       Viaggio    Safari
                                                  Lion       Leone      Simba
                                                  

                                                  You might prefer a syntax like

                                                  |[ English
                                                  :[ Hello!
                                                  :[ Tour
                                                  :[ Lion
                                                  |[ Italian
                                                  :[ Ciao!
                                                  :[ Viaggio
                                                  :[ Leone
                                                  |[ Swahili
                                                  :[ Habari?
                                                  :[ Safari
                                                  :[ Simba
                                                  

                                                  Or even, for such a simple table (which I don’t know actually exists in any man page, so…), you could put each column (or row) on the same line:

                                                  |[ English :[ Hello! :[ Tour :[ Lion
                                                  |[ Italian :[ Ciao! :[ Viaggio :[ Leone
                                                  |[ Swahili :[ Habari? :[ Safari :[ Simba
                                                  

                                                  (which a tool could easily turn into:

                                                  |[ English :[ Hello!  :[ Tour    :[ Lion
                                                  |[ Italian :[ Ciao!   :[ Viaggio :[ Leone
                                                  |[ Swahili :[ Habari? :[ Safari  :[ Simba
                                                  

                                                  )

                                                  Ok… now I’ve really annoyed you enough for a single night… good work!

                                        2. 5

                                          Because you cannot have progress without research.

                                          Now, troff is not readable in source form.
                                          This is better in that regard. You are right about indexing, but the project has a very short log. I guess we can talk about it with the author and see what he thinks about that.

                                          Maybe he likes the idea, and adds it. Or he doesn’t, and will not add it.
                                          You will always be able to fork it and fine-tune it to your needs.

                                          I’m grateful to hackers who challenge the status quo.

                                          1. 4

                                            While mdoc(7) is great (thanks for that!), I think your questions are answered on the page. I think lowdown is probably the closest to what u/SirCmpwn was aiming for (no dependencies, man output); maybe they hadn’t seen it?

Man formatting is inscrutable to the untrained eye (i.e., most people), and we need to acknowledge that the popularity of Markdown is related to its ease of reading and writing.

                                            1. 4

                                              I think your questions are answered on the page. I think lowdown is probably the closest to what u/SirCmpwn was aiming for (no dependencies, man output), maybe they hadn’t seen it?

                                              groff (as installed on every Linux distribution that uses groff for man pages, which is basically all of them, and macOS) has had native support for mdoc for at least a decade. If you install an mdoc man page and then man $thepage, you get exactly what you expect.

                                          1. 5

                                            This is neat! I’ve been looking for a man page generator. I also use asciidoc and have had toolchain issues with it, particularly on macOS.

                                            I perused the source code a bit since I was curious about the combination of “no dependencies” and “UTF-8 support.” What, exactly, do you mean by UTF-8 support?

                                            I see you hand rolled your own UTF-8 handling, but what I don’t quite understand is why you did it in the first place. What I mean is that your parser only seems to care about ASCII, and you never actually take advantage of UTF-8 itself with one obvious exception: if the data you read isn’t valid UTF-8, then your parser sensibly gives up. Is there some other aspect of UTF-8 you’re using? Perhaps I’ve skimmed your code too quickly and missed it.

                                            I do see that you’re using the various POSIX char functions such as isdigit and isalnum, but those operate based on POSIX locale support, and aren’t, as far as I know, aware of the various Unicode definitions of those functions. Moreover, the documentation for those functions states that it is UB to pass a value that cannot be represented in an unsigned char.
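To make that UB concrete: the conventional fix (as I understand it) is to cast to unsigned char before calling the ctype functions. A minimal sketch, with an illustrative wrapper name:

```c
#include <ctype.h>

/* The ctype functions accept only values representable as unsigned char
   (or EOF).  On platforms where plain char is signed, a byte >= 0x80
   becomes a negative int and the call is undefined behavior; casting
   through unsigned char first avoids that. */
static int is_digit_byte(char c) {
    return isdigit((unsigned char)c) != 0;
}
```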

                                            I’m not a C expert, so I could be missing something pretty basic!

                                            1. 3

                                              Hey, thanks for your feedback!

                                              I perused the source code a bit since I was curious about the combination of “no dependencies” and “UTF-8 support.” What, exactly, do you mean by UTF-8 support?

                                              Yeah, supporting it wasn’t too hard. I probably don’t have to explicitly handle it, but I prefer to enforce all input files must be UTF-8 and all output files must be UTF-8 rather than leave wiggle room. One intentional design decision of scdoc is that it is very strict - it will error out if you try to write #header instead of # header, for example. Enforcing UTF-8 is another form of strictness that ensures all scdoc files have a baseline of sanity.

                                              I do see that you’re using the various POSIX char functions such as isdigit and isalnum, but those operate based on POSIX locale support, and aren’t, as far as I know, aware of the various Unicode definitions of those functions. Moreover, the documentation for those functions states that it is UB to pass a value that cannot be represented in an unsigned char.

I should probably enforce that characters I feed into this are <0x80. Good catch, filed a bug: https://todo.sr.ht/~sircmpwn/scdoc/13
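A guard along those lines might look like this (hypothetical helper name, not the actual scdoc fix):

```c
#include <ctype.h>

/* ASCII-only classifier: any byte >= 0x80 is part of a multi-byte UTF-8
   sequence, so treat it as not alphanumeric instead of passing it to the
   locale-dependent ctype functions at all. */
static int ascii_isalnum(unsigned char c) {
    return c < 0x80 && isalnum(c);
}
```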

                                              1. 2

                                                Ah, yeah, that makes sense. Starting with strict validation on UTF-8 is smart. :-)

                                            1. 14

                                              Raw roff markup is alien and mysterious to modern sensibilities, and a Markdown-family language based on the manpage document model rather than the HTML document model is a cool idea.

                                              Unfortunately, just as Markdown is geared for presentational HTML rather than semantic (there’s no easy way to add a class to a block of markup, for example), it seems scdoc is geared for presentation man(7) output rather than semantic mdoc(7). I can’t really fault the author; I’m not even sure what a semantic Markdown-alike could look like. But it wasn’t too hard for me to learn mdoc(7) myself, and I’ll probably stick with it.

                                              1. 3

                                                These are good points. I deliberately chose not to expose to the user any control over the low-level roff output, and also deliberately chose not to add semantic components because the output is only ever going to be man pages. I’m of the opinion that using several specialized tools (for each facet of your documentation, be it man pages or HTML pages or PDFs) is better than attempting to fit one octopus-shaped peg into several holes, and this principle feeds directly into scdoc’s design.

                                                That being said, it’s totally valid to hold other viewpoints. For some people mdoc may be better. For myself, it’s not.

                                              1. 2

                                                mdocml is small and has minimal dependencies, but it has runtime dependencies - you need it installed to read the man pages it generates. This is Bad.

                                                mdoc is part of the system. I guess not on Linux??

                                                1. 3

                                                  mdoc is part of the system on Linux too.

                                                  1. 3

                                                    Depends on the Linux.

                                                    1. 1

                                                      Do you have any particular distribution in mind where it isn’t?

                                                  2. 1

                                                    Guess what? There is life outside Unix! :-D

                                                  1. 9
                                                    Google embraces, extends, and extinguishes

                                                    Published 2018-05-03 on Drew DeVault’s blog

                                                    One day later…

                                                    2018-05-04 18:12 UTC: I retract my criticism of Google’s open source portfolio as a whole, and acknowledge their positive impact on many projects. However, of the projects explicitly mentioned I maintain that my criticism is valid.

                                                    Now… this is scary.

                                                    1. 6

                                                      Author here, emphasis yours. Why is this scary? I was privately shown many counterexamples of Google being a good actor in open source after publishing this post. This does not excuse the rest of their behavior.

                                                      1. 4

                                                        I’m not the author of the above comment, but I think they share my view that in light of the rapid retraction, the tone of the post comes across as rather overconfident. I’m not sure why they chose the word “scary”, but something does seem off about making such a wide blanket statement evidently without having done the requisite research.

                                                        1. 7

No, I just caught the smell of lawyers…

                                                          1. 1

I wrote about Google’s open source from my own experience, not from research. I admit I should have researched it more, which is why I wrote the retraction. Anyway, I heard similar confusion from other sources as well, so I published another update:

                                                            Apparently the previous retraction caused some confusion. I am only retracting the insinuation that Google isn’t a good actor in open source, namely the first sentence of paragraph 6. The rest of the article has not been retracted.

                                                          2. 2

I still think it’s a valid criticism. Google’s open source seems to fall into three categories: projects that exist to drive demand for their core business and erect a barrier to competitors (Chrome, Android, …); projects that exist to support the former category; and things that are harmless to the business but keep engineers happy.

                                                            Google is big enough that it’s easy for them to embrace one strategy for some projects, and another for others.

                                                            Full disclosure: I’m an ex-googler, with a decent amount of discomfort about the direction that the web (and, more generally, tech) seems to be taking.

                                                            1. 2

                                                              I definitely agree as far as Google’s own projects are concerned. What was pointed out to me is their substantial and quiet contributions to other projects.

                                                            2. 1

Sorry… having seen what these companies can do, I caught the smell of lawyers.
As Facebook showed everybody in the early CA days, they care about free speech only when what is said conforms to their interests.

                                                              I completely agree with your article, btw.

Google (or Microsoft or Apple or Facebook or…) sometimes playing the good actor does not mean they do not embrace, extend, and extinguish.

It’s just a matter of what serves their current interests and long-term goals.
It’s a marketing tool, after all: to keep it effective, they must play the good guys most of the time.

However, I’d like to read about the counterexamples: I had already noticed the trend you describe in the article and reached the same conclusions.
Maybe I’d stand corrected, as you were.

                                                              1. 1

                                                                I wrote another update which may clarify:

                                                                Apparently the previous retraction caused some confusion. I am only retracting the insinuation that Google isn’t a good actor in open source, namely the first sentence of paragraph 6. The rest of the article has not been retracted.

                                                          1. 2

                                                            Why this over something else?

                                                            1. 16

                                                              Good bittorrent daemons are hard to find. rTorrent is common but it tacks on this difficult curses interface you have to deal with. Transmission is okay but it tends to get buggy and break down at scale. Deluge is buggy as hell. btpd is too bare bones, a lot of important features are missing. All of these options also have really poor RPC protocols that use a lot of network, are annoying to write clients for, and don’t scale.

                                                              synapse focuses only on being a good daemon and delivers only that. The UIs are offloaded to separate projects. If that doesn’t seem like much, that’s because it’s not - but surprisingly this is not easy to find. We made it because there were no good options.

                                                              1. 3

                                                                Transmission is okay but it tends to get buggy and break down at scale.

                                                                I must not have used it at scale before, it always seems to work for me. What sort of failure modes do you observe? Corrupted downloads? Halted downloads w/peers available? Other?

                                                                1. 9

Transmission crawls to a halt if you have several hundred torrents. The RPC protocol also becomes unwieldy, because it polls for updates and has to resend large amounts of data on every refresh. Synapse is more performant with large torrents or a large number of torrents, and its RPC is push-based, with subscriptions and differential updates.

                                                                2. 2

                                                                  Has synapse been tested at scale then?

Everything I’ve tried has been horrible at scale except rTorrent, and most of the non-rTorrent choices can be pretty horrible even at a modest number of torrents (qBittorrent at a certain point ‘invisibly’ adds torrents, etc.). With rutorrent as the frontend, I’ve been pretty happy with rTorrent.

Synapse looks interesting, though I’m not terribly enthusiastic about the node.js webclient (the node.js Flood client uses significantly more resources on my system than the php rutorrent does).

                                                                  1. 4

                                                                    Receptor is 100% frontend, pure static content. Node is just used for compiling and packaging it. You don’t even have to install it yourself - a hosted version is available at web.synapse-bt.org.

                                                                    1. 3

I’ve done load testing (though not particularly realistic) and it appears that both synapse and receptor perform reasonably on the order of 1000 torrents. One of the goals of the project is to perform well at scale, and there’s been a fair amount of ongoing work to achieve that.

                                                                    2. [Comment removed by author]

                                                                      1. 2

                                                                        Sequential downloading and file priority are both implemented.

                                                                      2. 1

                                                                        offloading UI to separate projects

                                                                        I love that approach!

                                                                    1. 2

                                                                      I wonder if this could be used as the basis for a new and better desktop torrent client?

                                                                      1. 16

                                                                        For sure. I also wrote a tool that translates Synapse RPC into Transmission RPC, too, so any Transmission desktop clients will work with Synapse:

                                                                        https://broca.synapse-bt.org/

                                                                        1. 2

This is awesome. I’m thinking of writing a docker-compose file to integrate these into an easily usable desktop setup.

                                                                      1. 9

                                                                        I spent two years in Thailand fully-funded off my FOSS project, but mine was hardware instead of software, so it was a lot easier to monetize. But I wouldn’t even consider doing it in the US with the healthcare situation the way it is. I’m surprised that didn’t come up in the article.

                                                                        1. 5

It’s pretty funny how the US ends up being where a bunch of people go to take major risks, even though the safety net there is so lacking.

Meanwhile, in most countries in Europe you’ll never go bankrupt from healthcare, and there’s usually some form of unemployment insurance you can keep drawing on for the first couple of months of building a new company.

The usual thing is “well, in America the rules are nicer on smaller companies,” but… really? To the extent that the safety net is not worth it?

                                                                          1. 2

                                                                            Author here. I don’t have a degree, so immigration is much more difficult for me.

                                                                          1. 1

                                                                            Is “withdrawl” a different way of spelling “withdrawal”? Thought it was a typo, but the post consistently spells it that way.

                                                                            1. 2

                                                                              Perhaps the author actually thinks that is how it is spelled.

                                                                              1. 1

                                                                                Spelling has never been my strong suit! I will get that fixed.

                                                                              1. 18

                                                                                Neat, but you may want to disclose in fine print somewhere that you’re the author of fosspay, even though it’s free/libre software and not a platform you’re pushing.

                                                                                1. 2

Fair point. This wasn’t meant to be an endorsement of any of these platforms (after all, there are situations where either of the other processors gets more of your money into the creator’s pocket), but I will add a note.

                                                                                1. 3

                                                                                  There’s a cool flag that makes it so you don’t have to reap the process, too, which is nice because reaping children is another really stupid idea.

                                                                                  I… is it? It doesn’t seem a wholly unreasonable way to arrange to get the exit status (or other termination details) of your child processes.

                                                                                  1. 4

                                                                                    Author here. I originally expanded on this in my first draft, but cut it out to balance complaints with solutions better. In my opinion, waiting on your children is fine if you can afford to block, and if not you have to set up SIGCHLD handlers, which is a non-trivial amount of code and involves signal handling, which is a mess in its own right and can easily be done incorrectly. Or you can use non-blocking waitpid, but that wasn’t a thing until recently. In all of these cases, if the parent doesn’t do its job well, your process table is littered with a bunch of annoying dead entries.

                                                                                  1. 15

Such pointless posturing and an inflammatory piece of nonsense all in one (we’re talking about ~300 lines of code here).

The whole schtick that Nvidia does this out of spite is just short-fused idiocy. It doesn’t take much digging into the presentations Nvidia gave at XDC2016/2017 (or just asking aritger/cubisimo, … directly) to understand that there are actual, and highly relevant, technical concerns that make the GBM approach terrible. The narrative Drew and others are playing with here - that there’s supposedly a nice “standard” that Nvidia just doesn’t want to play nice with - is a convenient one rather than a truthful one.

                                                                                    So what’s this actually about?

                                                                                    First, this is not strictly part of Wayland vs X11. Actually, there’s still no accelerated buffer passing subprotocol accepted as ‘standard’ into Wayland, the only buffer transfer mechanism part of the stable set is via shared memory. Also, the GBM buffer subsystem is part of Xorg/DRI3 so the fundamental problem applies there as well.

                                                                                    Second, this only covers the buffer passing part of the stack, the other bits - API for how you probe and control displays (KMS) is the same, even with the Nvidia blobs, you can use this interface.

                                                                                    Skipping to what’s actually relevant - the technical arguments - the problem they both (EGLStreams, GBM) try to address is to pass some kind of reference to a GPU-bound resource from a producer to a consumer in a way that the consumer can actually use as part of its accelerated drawing pipeline. This becomes hairy mainly for the reason that the device which produces the contents is not necessarily the same as the device that will consume it - and both need to agree on the format of whatever buffer is being passed.

                                                                                    Android has had this sorted for a long while (gralloc + bufferQueues), same goes for iOS/OSX (IOSurface) and Windows. It’s harder to fix in the linux ecosystem due to the interactions with related subsystems that also need to accept whatever approach you pick. For instance, you want this to work for video4linux so that a buffer from a camera or accelerated video decoding device can be directly scanned out to a display without a single unnecessary conversion or copy step. To be able to figure out how to do this, you pretty much need control and knowledge of the internal storage- and scanout formats of the related devices, and with it comes a ton of politics.

                                                                                    GBM passes one or several file descriptors (contents like a planar YUV- video can have one for each plane-) from producer to consumer with a side-channel for metadata (plane sizes, formats, …) as to the properties of these descriptors. Consumer side collects all of these, binds into an opaque resource (your texture) and then draws with it. When there is a new set awaiting, you send a release on the ones you had and switch to a new set.

EGLStreams passes a single file descriptor once. You bind this descriptor to a texture and then draw using it. Implicitly, the producer side signals when new content is available, and the consumer side signals when it’s time to draw. The metadata, matchmaking, and format conversions are kept opaque so that the driver can switch and decide based on the end use case (direct to display, composition, video recording, …) and what’s available.
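To make the GBM side concrete, here is a schematic sketch (illustrative struct and names, not the real libgbm/EGL API) of the multi-fd-plus-metadata bundle described above, together with the consumer-side consistency check that has to happen at reassembly:

```c
#include <stdint.h>

/* Schematic model of a dma-buf import: each plane of a buffer travels as
   its own file descriptor (e.g. Y, U, V for planar video), while sizes,
   strides, and the pixel format go over a metadata side channel.  All
   names here are illustrative. */
#define MAX_PLANES 4

struct buffer_import {
    int      fds[MAX_PLANES];     /* one dma-buf fd per plane */
    uint32_t offsets[MAX_PLANES]; /* byte offset of each plane */
    uint32_t strides[MAX_PLANES]; /* pitch of each plane */
    uint32_t num_planes;
    uint32_t fourcc;              /* pixel format code */
    uint32_t width, height;
};

/* Consumer-side validation before binding the planes into a texture:
   the fds and the side-channel metadata must agree, which is exactly the
   reassembly step the comparison above is about. */
static int import_is_complete(const struct buffer_import *b) {
    if (b->num_planes == 0 || b->num_planes > MAX_PLANES)
        return 0;
    for (uint32_t i = 0; i < b->num_planes; i++)
        if (b->fds[i] < 0)
            return 0;
    return 1;
}
```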

                                                                                    1. 4

                                                                                      The whole schtick that Nvidia does this out of spite is just short-fused idiocy. It doesn’t take much digging into the presentations that Nvidia has held at XDC2016/2017 (or just by asking aritger/cubisimo,… directly) to understand that there are actual- and highly relevant- technical concerns that make the GBM approach terrible.

                                                                                      I think they may have had more of a point if they had been pushing EGLStreams back when these APIs were initially under discussion, before everyone else had gone the GBM route. And I question the technical merit of EGLStreams - it’s not like the Linux graphics community is full of ignorant people who are pushing for the wrong technology because… well, I don’t know what reason you think we have. And at XDC, did you miss the other talk by nouveau outlining how much Nvidia is still a thorn in their side and is blocking their work?

                                                                                      The narrative Drew and other are playing with here - that there’s supposedly a nice “standard” that Nvidia just don’t want to play nice with is a convenient one rather than a truthful one.

                                                                                      What, you mean the interfaces that literally every other gfx vendor implements?

                                                                                      To be able to figure out how to do this, you pretty much need control and knowledge of the internal storage- and scanout formats of the related devices, and with it comes a ton of politics.

                                                                                      Necessary politics. A lot of subsystems are at stake here with a lot of maintainers working on each. Instead of engaging with the process Nvidia is throwing blobs/code over the wall and expecting everyone to change for them without being there to support them doing so.

                                                                                      Your explanation of how EGLStreams and GBM are different is accurate, but the drawbacks of GBM are not severe. It moves the responsibility for some stuff but this stuff is still getting done. Read through the wayland-devel discussions if you want to get a better idea of how this conversation happened - believe it or not, it did happen.

                                                                                      1. 2

                                                                                        I think they may have had more of a point if they had been pushing EGLStreams back when these APIs were initially under discussion, before everyone else had gone the GBM route. And I question the technical merit of EGLStreams - it’s not like the Linux graphics community is full of ignorant people who are pushing for the wrong technology because… well, I don’t know what reason you think we have.

Well, the FOSS graphics “community” (as in loosely coupled, tiny, incestuous factions that just barely tolerate each other) pretty much represents the quintessential underdog in what is arguably the largest, most lock-in-happy control surface there is. The lack of information, hardware, and human resources at every stage is sufficient explanation for the current state of affairs without claiming ignorance or incompetence. I don’t doubt the competence of Nvidia’s driver teams in this regard either, and they hardly act out of spite or malice - their higher-level managers, otoh, are a different story, with about as broken a plot as the corresponding one at AMD or Intel in the areas where they are not getting their collective asses handed to them.

                                                                                        Your explanation of how EGLStreams and GBM are different is accurate, but the drawbacks of GBM are not severe. It moves the responsibility for some stuff but this stuff is still getting done. Read through the wayland-devel discussions if you want to get a better idea of how this conversation happened - believe it or not, it did happen.

I have, along with the handful of irc channels where the complementary passive-aggressive annotations are being delivered (other recommended reading is luc verhaagen / libv’s blog, starting with https://libv.livejournal.com/27799.html - such a friendly bunch, n’est-ce pas?). What I’m blurting out comes from rather painful first-hand experience - dig through the code I’ve written on the subject and the related wayland-server stuff and you can see that I’ve done my homework; if there’s one hour of code, there’s one hidden hour of studying/reversing prior work. I’ve implemented both (+ delivered evaluations on three others in a more NDA-y setting) and experimented with them since the initial proposal, and bothered nvidia about it (~dec 2014) - neither delivers what I think “we” need, but that’s a much longer thread.

On GBM: The danger is splitting up an opaque ‘handle’ into a metadata-dependent set of file descriptors, passing the metadata over a side channel, and recombining them again with no opportunity for compositor validation at reassembly. That is validation both for >type/completion<, for >synchronization<, and for >context-of-use<. It’s “try and hope” for async failure delivery, crash, or worse. There’s ample opportunity here for race conditions, DoS, and/or type confusion (I got hard-to-reproduce issues of exactly this kind, with wild writes both in gpu-space and kernel-space in this very layer, with Xorg/DRI3 bound as producer via a render node). Look at the implementation of wl_drm and tell me how that approach should be able to bring us back from buffer-bloat land to chase-the-beam producer-to-scanout (the VR/AR ideal and an 8k/16k HDR bandwidth-scaling necessity) - last I checked, Mesa could quad-buffer a single producer just to avoid tearing - FWIW, Nvidia are already there, on Windows.

                                                                                        What, you mean the interfaces that literally every other gfx vendor implements?

Yes, the Mali and Exynos support for GBM/KMS is just stellar – except “implements” ranges from half-assed to straight-out polite gestures rather than actual efforts, so about the quality of compositor ‘implementations’ on the Wayland side. Out of 5 AMD cards I run in the test rig here right now, 2 actually sort of work if I don’t push them too hard or request connector-level features they are supposed to offer (10bit+FreeSync+audio, for instance). To this day, Xorg is a more reliable “driver” than kms/gbm - just disable the X protocol parts. I started playing with KMS in late 2012, and the experience has been the same every step of the way. If anything, Gralloc/HWC is more of a widespread open “standard”. Then again, Collabora and friends couldn’t really excel at selling consulting for Android alternatives if they used that layer, now could they?

                                                                                        Necessary politics. A lot of subsystems are at stake here, with a lot of maintainers working on each. Instead of engaging with the process, Nvidia is throwing blobs/code over the wall and expecting everyone to change for them, without being there to support them in doing so.

                                                                                        and

                                                                                        I think they may have had more of a point if they had been pushing EGLStreams back when these APIs were initially under discussion, before everyone else had gone the GBM route.

                                                                                        Again, gralloc/hwc predates dma-buf, and is half a decade into seamless multi-GPU handover/composition. For Nvidia, recall that they agree on KMS as a sufficiently reasonable basis for initial device massage (even though it’s very much experimental-level quality: shoddy synchronisation, no portable hotplug detection, no mechanism for resource quotas, and clients can literally priority-invert the compositor at will) - it’s the buffer synchronisation and the mapping to accelerated graphics that is being disputed (likely to be repeated for Vulkan, judging by movements in that standard). Dri-devel are as much the Monty Python ‘Knights who say NIH’ as anyone in this game: unless my memory is totally out of whack, EGLStreams as a mechanism for this problem stretches as far back as 2009, a time when Khronos still had “hopes” for a display server protocol. Dma-buf (GEM is just a joke without a punchline) was presented in 2012 via Linaro, with serious XDC discussion in 2013 and Xorg integration via DRI”3000” by keithp the same year. Nvidia proposed Streams in 2014, and the real discussion highlighted issues in the span 2014-2015. The talks since have been about how to actually move forward. Considering the “almost enough for a decent lunch” levels of resource allocation here, the pacing is just about on par with expectations.

                                                                                    1. 7

                                                                                      To echo the problems described in the article with small Linux distros: there’s this huge problem with not enabling localization (I speak English, and pretty much only English, so yay for me, but I could see it being a problem for others), and not including all documentation.

                                                                                      I can’t remember if it was Alpine or Void, but one of them either doesn’t include the man command in the default installation, doesn’t have man pages for the default package manager, or both. Obviously a problem. And nothing is more irritating than reading a man page, seeing “look at /usr/share/docs/FOO”, and finding nothing in /usr/share/docs.

                                                                                      (And don’t even get me started on texinfo. That shit needs to die.)

                                                                                      1. 3

                                                                                        Void comes with man pages and mdocml.

                                                                                        1. 3

                                                                                          Alpine has no man by default, though easily installed. I gave up shortly after, though, upon discovering there’s no xterm package.

                                                                                          1. 1

                                                                                            Seems to be on the community repo: https://pkgs.alpinelinux.org/packages?name=xterm

                                                                                          2. 1

                                                                                            l10n is definitely on my radar, but it’s a bit of a head-scratcher wrt how to implement it correctly in line with the principles of the distro. I don’t want to include anything on your system you’re not actually using, like l10n for languages you don’t speak.
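One common approach to this is stripping unwanted locale data at install time. A hypothetical sketch (paths, the KEEP list, and the demo tree are all assumptions, not the distro’s actual mechanism):

```shell
#!/bin/sh
# Sketch: keep only the locales the user opted into, stripping the rest
# from a package's payload. A throwaway demo tree stands in for the root.
set -e
root=$(mktemp -d)
mkdir -p "$root/usr/share/locale/en_US" \
         "$root/usr/share/locale/de_DE" \
         "$root/usr/share/locale/fr_FR"

KEEP="en_US"                          # assumed user-configured list
for dir in "$root"/usr/share/locale/*/; do
    lang=$(basename "$dir")
    case " $KEEP " in
        *" $lang "*) ;;               # wanted: keep it
        *) rm -rf "$dir" ;;           # unwanted: strip it
    esac
done

ls "$root/usr/share/locale"           # only the kept locales remain
```

The same filter could run as a package-manager hook so updates never reintroduce stripped languages.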

                                                                                          1. 1

                                                                                            This sounds exactly like what I’ve imagined a perfect Linux distro to be like. I love it when people who can put in the work get the same ideas :)

                                                                                            Gotta try to help him out if I can. I created #agunix:matrix.org if people wanna join in.

                                                                                            1. 1

                                                                                              Try #agunix on irc.freenode.net, that’s where agunix development takes place.

                                                                                              1. 1

                                                                                                Cool, thanks. Creating a bridge between these should be trivial if we ever need to. Probably not yet, since I’m the only one on the Matrix side :)