1. 3

    It would be fun to hear how the author assumed time zone information was managed - by a committee at the United Nations?

    1. 9

      News is trickling out on https://micronews.debian.org/ as well

      1. 6

        Fun timing - I was just telling myself that I should stop not recommending/advocating Emacs to my colleagues.

        Usually I shy away from doing so, explaining that I like Emacs a lot, but not going further than that. I think primarily because I don’t want the experience of them being disappointed with Emacs and abandoning it after trying it on my recommendation.

        But I need to remind myself that otherwise I will be seeing them using Microsoft Visual Studio Code instead, because that certainly has a lot of advocacy… shudder.

        1. 1

          While I don’t actively advocate that anyone switch from any other editor, I do generally share some of the Emacs findings/tricks I discover… it often generates enough interest over time. Folks eventually try it out on their own time and come to me with questions. Having someone accessible to clear initial bumps seems to be pretty handy. It may get folks enjoying the experience sooner and, perhaps, make it all that much more sticky.

          1. 2

            This is the same for everything in programming, though. It’s largely something you can learn yourself, but having a network of people to prod with questions is extremely useful.

        1. 1

          Cute idea - it would be fun to be able to run it from a pre-push hook, or something, stopping people from pushing bad commit messages >:-)
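
          For anyone curious, a rough sketch of how such a hook could look - this is a hypothetical Python take on a .git/hooks/pre-push script, not something the parent comment spells out, and the “bad message” patterns are just examples:

              #!/usr/bin/env python3
              # Hypothetical pre-push hook: refuse pushes whose commit subjects look like
              # leftover fixup/squash/WIP commits. Git feeds lines of
              # "<local ref> <local sha> <remote ref> <remote sha>" on stdin, one per ref.
              import re
              import subprocess
              import sys

              ZERO = "0" * 40
              BAD = re.compile(r"^(fixup!|squash!|wip\b)", re.IGNORECASE)

              for line in sys.stdin:
                  local_ref, local_sha, remote_ref, remote_sha = line.split()
                  if local_sha == ZERO:  # deleting a remote branch - nothing to check
                      continue
                  # for a brand-new branch this checks its whole history; good enough for a sketch
                  rev_range = local_sha if remote_sha == ZERO else f"{remote_sha}..{local_sha}"
                  subjects = subprocess.run(
                      ["git", "log", "--format=%s", rev_range],
                      capture_output=True, text=True, check=True,
                  ).stdout.splitlines()
                  for subject in subjects:
                      if BAD.match(subject):
                          print(f"pre-push: refusing to push {subject!r}", file=sys.stderr)
                          sys.exit(1)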

          1. 1

            I use hep, followed by yay. For random text I also use ABBA followed by FLAPPA.

            When people are debugging by printing out stuff, I tell them to include some easily recognizable text. There is nothing worse than 10 lines of values and you don’t know which is which… there are enough things to keep in your head when you’re debugging! (And it’s nice to be able to search for it as well.)

            I have started telling people to come up with their own random word - that’s an easy way to tell who left some debug somewhere by accident. Examples that spring to mind are “hatt”, “strawberry” and “popcorn” :-)
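
            A toy illustration of the idea (the values are made up; the marker word is the point):

                # tag debug output with an easily recognizable, greppable word - easy to spot
                # in a wall of output, and easy to tell who forgot to remove it
                offset, limit = 42, 10  # stand-ins for whatever you are inspecting
                print(f"POPCORN offset={offset!r} limit={limit!r}")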

            1. 1

              I tried this with my old fav slrn, but I do need something that handles HTML format.

              https://imgur.com/a/OQXE42V

              1. 2

                You can configure slrn to run articles through html2text and a little s-lang; there’s a description at the bottom of the page here: https://feedbase.org/documentation/#slrn

                It isn’t super fast, but it looks pretty good: https://koldfront.dk/misc/lobstersslrn.png

                1. 1

                  From the README, there are two groups:

                  • lobsters - “Multipart HTML/Plain UTF-8 QP”, and
                  • lobsters.plain - “Plain UTF-8 QP”

                  Try out lobsters.plain I guess?

                  1. 2

                    I had recently added that. I’m tempted to turn .plain into ISO-8859-1 for the nasty legacy clients.

                1. 3

                  Looks pretty good in Gnus: https://koldfront.dk/misc/gnus/lobstersnntp.png

                  Nice job!

                  Would be even better if it was read/write :-) But since the API doesn’t allow that, it would be nice to have a link to the comments on the website, and perhaps also to make the story link clickable?

                  How often does it update? I’ve posted this comment and another one on the website, but they don’t seem to have shown up in the nntp-gateway yet…

                  1. 4

                    The link to the object on the site itself is in an X-Lobsters header. (There are some additional X-Lobsters-* headers. Working on adding some more, but it does require schema changes, which will probably cause a reset of article numbering if I mess it up.)

                    It updates every hour… or it should, anyways. (Oops, there’s a bug in that. Let me fix it.)

                  1. 10

                    I refer you to The Only M1 Benchmark That Matters - how long does it take to compile Emacs! :-) (Spoiler: the M1/clang does well.)

                    1. 4

                      But how long does it take to compile Vim and save the kids in Uganda?

                    1. 1
                      • DNS (bind9)
                      • Mail (Postfix, sqlgrey, opendkim, opendmarc, Dovecot)
                      • IM - XMPP (ejabberd)
                      • Web
                      • Calendar/Contacts (CalDav)
                      • Atom/RSS to nntp gateway (homemade)
                      • Video-conferencing (Jitsi)

                      All of it on my home server (except Jitsi), with two tiny VPS’s for DNS and mail-server redundancy. Everything on Debian stable.

                      1. 1

                        There is a generic postfix-sasl.conf in filters.d which will catch these, as well as other failed login attempts, if enabled - in case you want to trigger on more than just “Password:”
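
                        Assuming this is fail2ban, enabling that filter as a jail is roughly the following in /etc/fail2ban/jail.local (a sketch of the stock layout; everything else is left at the defaults from jail.conf):

                            [postfix-sasl]
                            enabled = true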

                        1. 10

                          code review often results in me having to amend or squash my commit(s).

                          Why? What is wrong with fixing whatever needs to be fixed in new commits?

                          Sure, amend, squash, modify before you push, but after that, don’t, and you avoid a whole class of problems.

                          You might argue that the history will look “messy”, yes, perhaps, but it also reflects what actually happened, which can be a good thing.

                          1. 19

                            Git history should tell a story; I don’t want to see your typos. Unless it’s already in the main branch - then it’s written in stone.

                            1. 3

                              I don’t see why. VC history can be an almost arbitrary mess!

                              The thing which really matters is that you get your job done.

                              As long as you have a decent way to

                              1. find semantically connected commits (e.g. look at the merge of a PR, or a ticket ID in the commit messages) and
                              2. find out who to ask when you have questions about some code (some version of blame)

                              you should be good. At least, that is all I ever needed from a VCS. I would be interested in hearing about other use-cases, though.

                              In general, people are wasting too much time these days cleaning up their commit history.

                              1. 5

                                As somebody regularly doing code archeology in a project that is now 16 years old and has gone through migrations from CVS to SVN to git, to git with people knowing how to rebase for readable histories, I can tell you that doing archeology in nice single-purpose commits is much nicer than doing it within messy commits.

                                So I guess it depends. If the project you’re working on is a one-off, possibly rewritten or sunset within one or two years, sure, history doesn’t matter.

                                But if your project sticks around for the long haul, you will thank yourself for not committing typo fixes and other cleanup commits.

                                1. 4

                                  It CAN be, but that’s what we’re trying to avoid.
                                  You can get your job done either way, and cleaning up git history doesn’t take a lot of time if you think properly from the beginning. Any additional time I do spend can easily be justified by arguing for better-documented changes.

                                  1. sure
                                  2. you should not have to ask anyone, some aggregation of context, commit messages, and comments should answer any questions you have

                                  Having a mistake you introduced, as well as the fix for that mistake, in the same branch before merging into a main branch is just clutter… unnecessary cognitive load. If you use git blame properly, it’s yet another hoop you have to jump through to find out the real reason behind a change. Now, there are exceptions. Sometimes I do introduce a new feature with a problem in a branch, and happen to discover it and fix it in the same branch (usually it’s because the branch is too long-lived, which is a bad thing). I do, sometimes, decide that this conclusion is important to the narrative, and decide to leave it in.

                                  1. 2

                                    I mean…I would agree in principle except for “cleaning up git history doesn’t take a lot of time”. I think that is only true if you have already invested a lot of time into coming up with your branching and merging and squashing model and another lot of time figuring out how to implement it with your tools.

                                    I have probably incurred more cognitive overhead from reading blog posts on to-squash-or-not-to-squash et al. than I ever would from ignoring “fix typo” commits in a lifetime. ;)

                                    1. 4

                                      “Cleaning up source history” is such an ingrained part of my work flow that seeing you dismiss it because it’s too costly reads similarly to me as, “I don’t have time to make my code comprehensible by using good naming, structure and writing good docs.” Which you absolutely could justify by simply saying, “all that matters is that you get the job done.” Maybe. But I’d push back and say: what if a cleaner history and cleaner code makes it easier to continue doing your job? Or easier for others to help you do the job?

                                      FWIW, GitHub made this a lot easier with their PR workflow by adding the “squash and merge” option with the ability to edit the commit message. Otherwise, yes, I’ll check out their branch locally, do fixups and clean up commit history if necessary.

                                      1. 1

                                        I could make that argument. But I didn’t because it is not the same thing.

                                        This is exactly why I gave examples and asked for more! I haven’t found any use for a clean commit history. And - also answering @pilif here - this includes a medium-sized (a couple million lines of code), 30-year-old project that had been rewritten in different languages twice; at that time the code base consisted of 5 different languages.

                                        (The fact that cleaning up history is such an ingrained part of your work flow doesn’t necessarily mean anything good. You might also just be used to it and wasting your time. You could argue that it is so easy that it’s worth doing even if there is no benefit. Maybe that’s true. Doesn’t seem like it to me at this point.)

                                        1. 5

                                          But I didn’t because it is not the same thing.

                                          Sure, that’s why I didn’t say they weren’t the same. I said they were similar. And I said they were similar precisely because I figured that if I said they were the same, someone would harp on that word choice and point out some way in which they aren’t the same that I didn’t think of. So I hedged and just said “similar.” Because ultimately, both things are done in the service of making interaction with the code in the future easier.

                                          This is exactly why I gave examples and asked for more! I haven’t found any use for a clean commit history.

                                          I guess it seems obvious to me. And it’s especially surprising that you literally haven’t found any use for it, despite people listing some of its advantages in this very thread! So I wonder whether you’ll even see my examples as valid. But I’ll give it a try:

                                          • I frequently make use of my clean commit history to write changelogs during releases. I try to keep the changelog up to date, but it’s never in sync 100%, so I end up needing to go through commit history to write the release notes. If the commit history has a bunch of fixup commits, then this process is much more annoying.
                                          • Commit messages often serve as an excellent place to explain why a change was made. This is good not only for me, but to be able to point others to it as well. Reading commit messages is a fairly routine part of my workflow: 1) look at code, 2) wonder why it’s written that way, 3) do git blame, 4) look at the commit that introduced it. Projects that don’t treat code history well often result in a disappointing conclusion to this process.
                                          • A culture of fixup commits means that git bisect is less likely to work well. If there are a lot of fixup commits, it’s more likely that any given commit won’t build or pass tests. This means that commit likely needs to be skipped while running git bisect. One or two of these isn’t the end of the world, but if there are a lot of them, it gets annoying and makes using git bisect harder because it can’t narrow down where the problem is as precisely.
                                          • It helps with code review enormously, especially in a team environment. At $work, we have guidelines for commit history. Things like, “separate refactoring and new functionality into distinct commits” make it much easier to review pull requests. You could make the argument that such things should be in distinct PRs, but that creates a lot more overhead than just getting the commit history into a clean state. Especially if you orient your workflow with that in mind. (If you did all of your work in a single commit and then tried to split it up afterwards, that could indeed be quite annoying!) In general, our ultimate guideline is that the commits should tell a story. This helps reviewers contextualize why changes are being made and makes reviewing code more efficient.

                                          (The fact that cleaning up history is such an ingrained part of your work flow doesn’t necessarily mean anything good. You might also just be used to it and wasting your time. You could argue that it is so easy that it’s worth doing even if there is no benefit. Maybe that’s true. Doesn’t seem like it to me at this point.)

                                          Well, the charitable interpretation would be that I do it because I find it to be a productive use of my time. Just like I find making code comprehensible to be a good use of my time.

                                          And no, clean source history of course requires dedicated effort toward that end. Just like writing “clean” code does. Neither of these things come for free. I and others do them because there is value to be had from doing it.

                                          1. 1

                                            Thanks, this is more useful for discussing. So from my experience (in the same order):

                                            1. I could see this as being useful. I simply always used the project roadmap + issue tracker for that.
                                            2. Absolutely, I wasn’t trying to argue against good commit messages.
                                            3. I understand that fix-up commits can be a bit annoying in this respect, so if you can easily avoid them you probably should. On the other hand, I need git bisect only very rarely, and fix-up commits are often trivial to identify and ignore - either by assuming they don’t exist or by ignoring the initial faulty commit.
                                            4. I am totally in favor of having refactoring and actual work in separate commits. Refactorings are total clutter. Splitting a commit which has both is a total pain (unless I am missing something) so it’s more important to put them into separate commits from the start.

                                            I mean, maybe this is just too colored by how difficult I imagine the process to be. These arguments just seem too weak in comparison to the cognitive burden of knowing all the git voodoo to clean up the history. Of course if you already know git well enough that trade-off looks different.

                                            1. 1

                                              The git voodoo isn’t that bad. It does take some learning, but it’s not crazy. Mostly it’s a matter of mastering git rebase -i and the various squash/fixup/reword/edit options. Most people on my team at work didn’t have this mastered coming in, but since we have a culture of it, it was easy to have another team member hop in and help when someone got stuck.

                                              The only extra tooling I personally use is git absorb, which automates the process of generating fixup commits and choosing which commits to squash them back into automatically. I generally don’t recommend using this tool unless you’ve already mastered the git rebase -i process. Like git itself, git absorb is a convenient tool but provides a leaky abstraction. So if the tool fails, you really need to know how to git rebase yourself to success.
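
                                              For anyone following along, the flow being described looks roughly like this (one hedged sketch; the placeholders are mine):

                                                  # mark a correction as a fixup of an earlier commit
                                                  git commit --fixup=<sha-of-the-commit-being-corrected>
                                                  # later, fold every fixup! commit back into its target
                                                  git rebase -i --autosquash <base>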

                                              It sounds painful, but once you have rebase mastered, it’s not. Most of my effort towards clean source history is spent on writing good commit messages, and not the drudgery of making git do what I want.

                                              It sounds like we are in some agreement on what’s valuable, so perhaps we were just thinking of different things when thinking about “clean source history.”

                                              Splitting a commit which has both is a total pain (unless I am missing something) so it’s more important to put them into separate commits from the start.

                                              Indeed, it is. Often because the code needs to be modified in a certain way to make it work. That’s why our commit history guidelines are just guidelines. If someone decided it was more convenient to just combine refactoring and semantic changes together, or maybe they just didn’t plan that well, then we don’t make them go back and fix it. If it’s easy to, sure, go ahead. But don’t kill yourself over it.

                                              The important bit is that our culture and guidelines gravitate toward clean history. But just like clean code, we don’t prioritize it to the expense of all else. I suspect few others who promote clean code/history do, either.

                                              N.B. When I say “clean code,” I am not referring to Bob Martin’s “Clean Code” philosophy. But rather, just the general subjective valuation of what makes code nice to maintain and easy to read.

                              2. 5

                                For a stacked diff flow, this is necessary https://jg.gg/2018/09/29/stacked-diffs-versus-pull-requests/

                                1. 4

                                  If you are just going to duplicate your original commit message for the new work, why not amend the original commit? Branches are yours to do with as you please until someone else might have grabbed the commits.

                                  1. 2

                                    Sure, amend, squash, modify before you push

                                    It’s not about push. It’s about sharing. I push to many branches that aren’t being or intended to be shared with others. Thus, it’s okay to rewrite history and force push in those cases.
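
                                    A concrete example of that kind of push (--force-with-lease is an extra safeguard rather than something stated above: it refuses to overwrite commits someone else pushed in the meantime, where a bare --force would not):

                                        git push --force-with-lease origin my-topic-branch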

                                  1. 11

                                    I use Emacs’ vc-annotate (C-x w g) to get the initial blame shown, and then I can inspect the commit log (l) and diff (=), and jump to the commit of the current line (j). Moving to the previous commit (p) is then easy, making it possible to trace the history, showing log and diff when necessary, as I jump further back.

                                    1. 1

                                      I also like that a lot, but I’ve found it can’t cross merge commits (any commit with more than 1 parent). Is there a way around that?

                                    1. 3

                                      I wonder where the particular new limit of 2468 comes from. A 64-bit timestamp should allow dates hundreds of billions of years into the future, so clearly that’s not exactly the data structure being used to store timestamps here.

                                      1. 11

                                        The article gives the answer:

                                        This “big timestamps” feature is the refactoring of their timestamp and inode encoding functions to handle timestamps as a 64-bit nanosecond counter

                                        and

                                        a new XFS file-system with bigtime enabled allows a timestamp range from December 1901 to July 2486

                                        Wolfram Alpha, calculating 2^64 nanoseconds from December 1901, gives July 2486: https://www.wolframalpha.com/input/?i=2%5E64+nanoseconds+from+1901-12-31

                                        Note: it is 2486, not 2468.
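
                                        A quick way to double-check that range without Wolfram Alpha, assuming the lower bound is the signed 32-bit time_t minimum in December 1901:

                                            from datetime import datetime, timedelta

                                            start = datetime(1901, 12, 13, 20, 45, 52)   # signed 32-bit time_t minimum
                                            span = timedelta(microseconds=2**64 / 1000)  # 2**64 nanoseconds
                                            print(start + span)                          # prints a date in July 2486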

                                      1. 35

                                        Videos are very slow for conveying this type of information - the text could just have said:

                                        • Install ublock origin, if you haven’t already
                                        • Click the ublock origin icon
                                        • Click the “Open the dashboard” button
                                        • Under “Annoyances” turn on “EasyList Cookie”
                                        • Click “Apply changes”
                                        1. 5

                                          My apologies for inconveniencing you by conveying information too slowly (;

                                          Having said that, a video is quite convenient for conveying information on a Youtube channel - and for showing exactly the kind of problem with cookie popups that I wanted to show: no way to opt out, popups in the way of reading the content - and that the tracking stops when things are set up correctly.

                                          1. 4

                                            You can do both, just put a TLDW in the description…

                                          2. 3

                                            While I agree and generally prefer a good blog post over a video, it all comes down to a matter of opinion. Some people just prefer watching videos over reading a post, for whatever reason. I’ve seen people asking specifically for video tutorial help before.

                                            In this case the video can help people know exactly what steps and movements to follow, and easily find what they need to do, with a concrete example.

                                          1. 1

                                            The Perl bashing in the article feels quite outdated to me.

                                            Also:

                                            C++ could have beaten Perl by 10 years to become the world’s second write-only programming language

                                            Wikipedia lists C++ as being from 1985 and Perl from 1987, so I guess C++ would have done so by two and not ten years. Unless it is supposed to be a base two joke.

                                            1. 5

                                              It is unfortunately in keeping with the general style of the entire article: a cheap polemic that paints people who generally don’t agree, and their apparently “conservative” choices, as some kind of pantomime villain. The only footnote is a reference to another polemic in a similar vein, from nearly a decade prior.

                                            1. 19

                                              The last footnote includes the conclusion for practical use:

                                              “To be fair, the asymptotic behaviour of Bloom’s original bound is consistent with this updated definition, so the impact is more on an issue of pedantry rather than for practical applications.”

                                              1. 3

                                                The article would have been a lot more constructive if it gave some examples of better alternatives for the various projects mentioned.

                                                1. 18

                                                  Are you suggesting they should say something like

                                                  What To Use Instead?

                                                  To replace GPG, you want age and minisign.

                                                  To replace GnuTLS or libgcrypt, depending on what you’re using it for, you want one of the following: s2n, OpenSSL/LibreSSL, or Libsodium.

                                                  which they said at the bottom of the article?

                                                  1. 2

                                                    Except Age/Minisign is not a GPG replacement?

                                                    1. 5

                                                      Age replaces file encryption. Minisign replaces signatures.

                                                      Read https://latacora.micro.blog/2019/07/16/the-pgp-problem.html

                                                      A Swiss Army knife does a bunch of things, all of them poorly. PGP does a mediocre job of signing things, a relatively poor job of encrypting them with passwords, and a pretty bad job of encrypting them with public keys. PGP is not an especially good way to securely transfer a file. It’s a clunky way to sign packages. It’s not great at protecting backups. It’s a downright dangerous way to converse in secure messages.

                                                      Back in the MC Hammer era from which PGP originates, “encryption” was its own special thing; there was one tool to send a file, or to back up a directory, and another tool to encrypt and sign a file. Modern cryptography doesn’t work like this; it’s purpose built. Secure messaging wants crypto that is different from secure backups or package signing.

                                                      You may think you want some cryptographic Swiss Army knife that “truly” replaces GPG, but what you really want is secure, single-purpose tools for replacing individual use cases that use modern cryptography and have been extensively reviewed by cryptography and security experts.
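
                                                      As a concrete sketch of that split, the usual invocations look roughly like this (flags quoted from memory - treat them as assumptions and check each tool’s docs):

                                                          # encryption: age
                                                          age-keygen -o key.txt                         # prints the matching age1... public key
                                                          age -r age1<recipient> -o doc.age doc         # encrypt to a recipient
                                                          age -d -i key.txt doc.age > doc               # decrypt with the private key

                                                          # signatures: minisign
                                                          minisign -G                                   # generate a key pair
                                                          minisign -Sm release.tar.gz                   # sign, producing release.tar.gz.minisig
                                                          minisign -Vm release.tar.gz -p minisign.pub   # verify against the public key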

                                                      1. 2

                                                        What tool handles the identity and trust mechanism that GPG provides?

                                                        With the multi-tool approach, the user has to re-establish the web of trust every time and learn about each disconnected tool as well.

                                                        1. 2

                                                          What tool handles the identity and trust mechanism that GPG provides?

                                                          I hear webs of trust don’t work. Not sure why, but I believe it has to do with the difficulty of changing your root key if it ever becomes compromised.

                                                          Otherwise, maybe something like minisign, or even minisign itself, could help?

                                                          1. 1

                                                            Trust in what context?

                                                            For code-signing, I designed https://github.com/paragonie/libgossamer

                                                    2. 1

                                                      Totally agreed. But hey, a blog article poo-pooing a thing is much easier to write than one constructively criticizing it and offering solutions. And who has the time these days?

                                                      On a related note, it was once a guaranteed way to get your latest blog article to the top of the orange site if the title contained something like, “Foobar: You’re Doing it Wrong” or “We Need to Talk About Foobar”. Phrases like this are the equivalent of “One Weird Trick” headline clickbait for devs.

                                                      1. 8

                                                        Pretty sure the article offers solutions. It’s at the very bottom though.

                                                    1. 3

                                                      Microsoft Outlook is definitely worse than Thunderbird when it comes to handling inline images with MIME.

                                                      1. 0

                                                        I like how they effortlessly combine a user-unfriendly GUI with a user-unfriendly community.

                                                        1. 12

                                                          Most 9front users are not unfriendly in my limited experience; in fact, some of the nicest, most knowledgeable and patient people I have seen use 9front.

                                                          1. 8

                                                            I disagree. As a recent newcomer to Plan9, and 9front, I found their documentation and IRC support very friendly indeed.

                                                            Edited: Also, their GUI is not at all user-unfriendly. It’s not terribly discoverable, but once you know how to drive it, it’s incredibly user-friendly and powerful. It may seem like a strange nit to pick, but user-friendliness is not the same as discoverability. They’re orthogonal, and conflating the two has led to years of brain-dead ‘consumer’ UIs.

                                                            1. 5

                                                              user-friendliness is not the same as discoverability. They’re orthogonal, and conflating the two has led to years of brain-dead ‘consumer’ UIs.

                                                              Yeah, I think there’s a missed opportunity somewhere, in that there’s a difference between newcomer-friendliness (as in “can anyone pick this up without studying the manual?”) and user-friendliness (as in “is it consistent and doesn’t drive you nuts?”, “is it powerful?”, “does it save you time?” etc).

                                                              1. 4

                                                                The most obvious missed opportunity is, as usual, the opportunity to learn from people who’ve been working hard at these very issues for, oh, fifty years or so. The relationship between the effort needed to use a system and the results that can be obtained with a given level of effort, and the learning curve that connects beginners and expert users, has been painstakingly studied from many angles in the HCI community. There are even slogans like “low floors, high ceilings”… yet ignorance abounds.

                                                                If anybody’s going to actually empower actual users, it will have to be hobbyists like the 9front folks. Consumer technology has long been pulling in the opposite direction; computing professionals are largely caught up in geek machismo and rationalization while serving our corporate masters; and academics are a cowardly lot locked up behind paywalls and tenure politics.

                                                                1. 1

                                                                  computing professionals are largely caught up in geek machismo and rationalization while serving our corporate masters

                                                                  Not to mention fashion, and wanting to be identified as “creatives”.

                                                                  I remember when Microsoft lost their monopoly courtesy of the Web, and almost unanimously, software developers up and handed that monopoly to Apple :(

                                                                  Now, maybe, with Apple’s move to ARM (and possibly almost-completely nerfed MacBooks), we’ll have another chance.

                                                                  (Sadly my current bet is that we’ll choose “Linux layer on MS Windows”, marking the completion of a truly epic embrace, extend, extinguish cycle).

                                                                  1. 1

                                                                    Until we’re a real engineering profession, like with mandatory membership in professional societies that can independently decide and enforce standards of ethical conduct and “best practices” that aren’t just fads, we’re all basically just overpaid labor. Craftspeople with contracts at best, unorganized day-laborers more often. I hope to see it happen in my lifetime, but I’m not exactly holding my breath.

                                                                    For an eye-opener, read up on the history of the engineering professions, starting with civil engineering in the late 18th and early 19th century. We have a long way to go.

                                                                    1. 1

                                                                      I’m quite well versed in the history, and I’m still not convinced that it’s the right approach. We’re not engineers, for the most part, and that’s entirely reasonable. (I have an entire soapbox rant about the use of the term engineer to describe programmers who don’t have engineering degrees, and who aren’t doing engineering. Like myself, for over two decades).

                                                                      There’s already been some discussion on licensing for programmers on Lobste.rs:

                                                                      https://lobste.rs/s/91khhj/why_are_we_so_bad_at_software_engineering#c_lirfgi

                                                                      1. 2

                                                                        Fair enough. I suppose I could respond with this other post or let you hash it out with @hwayne who has Strong Opinions on the matter.

                                                                        But I’m not saying every computing professional is (or should be) an engineer, any more than every medical professional is a doctor or every legal professional is an attorney. However, I do feel that the lack of an effective and independent governing body for those who are doing engineering, with all the consequences it entails, has inflicted an unfortunate amount of collateral damage on the general public. I had hoped that the ACM would fill that role, but so far they’re way too academic. In practice, inasmuch as any one has stepped up, it’s been the IEEE gradually colonizing our space.

                                                                        1. 1

                                                                          However, I do feel that the lack of an effective and independent governing body for those who are doing engineering, with all the consequences it entails, has inflicted an unfortunate amount of collateral damage on the general public.

                                                                          Serious question: what do you consider “doing engineering”?

                                                                          As one example of the difficulty: a litmus test could be working on life- or safety-critical software. So, say, not Kubernetes. But then you see the designers of B-series bombers using Kubernetes to run their system software. So … should anyone contributing to Kubernetes be a licensed engineer?

                                                                          1. 3

                                                                            I doubt there’s a crisp line between engineering and mere “developing” (coding, sysadmin-ing, etc). Also, as you point out, trying to grade the seriousness of a job based on the potential consequences of a mistake, per-incident, is pretty intractable. But it’s relatively easy to measure adoption, and that at least gives a sense of the breadth (if not depth) of the responsibility. If everybody’s going to use k8s (shudder) then yeah, those devs are doing engineering and should be held to a higher standard than if they were doing a one-off bespoke automation suite internal to some firm. Regarding depth, individuals making the decision to adopt dependencies have heavier responsibilities too. The aerospace and defense industries have a staggering amount of bureaucracy in their engineering processes, I would say to compensate for inadequate professional governance.

                                                                            (Longest and most off-topic thread EVAR!!!!1! Personal best)

                                                                            1. 1

                                                                              Haha :). Derailing threads like the ARTC derails trains … anyhow …

                                                                              How would you handle that transition? Imagine I produce an open source library that suddenly sees massive adoption. Goes from a few users to thousands, then maybe tens or hundreds of thousands, in quick succession.

                                                                              Should I, as a non-engineer, be allowed to continue to support the project? Must I find registered engineers to join the project? Should I allow source contributions from non-engineers? Who foots the bill for all this?

                                                                              It’d have a massive chilling effect on open source software and innovation in general.

                                                                              1. 1

                                                                                Since at this point the party’s been over for a while, and I’m really just waving my naked opinion around… let me flip it back at you. Maybe we need more sustainable and responsible funding models, rather than just pillage-and-profit? And, is the sudden massive industrial adoption of hobbyist-grade software really something we want to encourage? Hell, for that matter, is “innovation”? I don’t really want a lot of rapid innovation in my critical infrastructure, thanks.

                                                                                But, you’re pointing out symptoms of an immature field under an unhealthy amount of pressure. My opinion doesn’t really matter, of course. I just think that rising public awareness (and inevitably “outcry”) about the inherent dangers, will eventually force some form of change. Again, probably not overnight. But it’s a pattern we’ve seen play out before.

                                                            2. 7

                                                              user-unfriendly community

                                                              How so? Their brand of not holding your hand is pretty well-known.

                                                              1. 7

                                                            Well, there was a long time when they ironically used Nazi imagery to promote their stuff. I don’t think there’s necessarily anything wrong with this “joke”, but I also understand people who found this content at the very least extremely unnerving (as I do personally, as a Jew).

                                                            It seems they added an anti-Nazi symbol that links to “Nazi Punks Fuck Off”, which I applaud, but the fact that they’ve had to do this I think speaks volumes about who their artwork attracted.

                                                                I happen to really like Plan 9 and the effort 9front has put in to expand on the system, but I think to a large extent the damage has been done in terms of attracting normal every day users.

                                                                1. 12

                                                                  As a grandchild of holocaust survivors, and a fairly active committer on 9front, I don’t recall anything that made me uncomfortable – though, there’s a relatively dark sense of humor about the project. You’re allowed to dislike dark humor.

                                                                  but the fact that they’ve had to do this I think speaks volumes about who their artwork attracted.

                                                                  Hm? I don’t recall any incidents that needed response – it’s just a general sentiment.

                                                                  I think to a large extent the damage has been done in terms of attracting normal every day users.

                                                                  The first image you’ll see if you look at our user-facing documentation is this: http://fqa.9front.org/goaway.jpg.

                                                                  1. 4

                                                                    … which, to be perfectly frank, was one of the things that attracted me to 9front. That, and a quick browse through the propaganda page, convinced me that I’d likely enjoy the ambience.

                                                                  2. 7

                                                                    Plan 9 is an operating system that doesn’t support a web browser. Normal every day users should not under any circumstances try to use Plan 9, and their branding helps to discourage such users.

                                                                    1. 3

                                                                      I think netsurf is now supported?

                                                                      1. 3

                                                                        Cool, thanks for pointing out the netsurf port. Which is still a work in progress, according to the readme.

                                                                    2. 7

                                                                      but the fact that they’ve had to do this I think speaks volumes about who their artwork attracted.

                                                              Seems more likely they did it to disambiguate the admittedly dark sense of humor for fellows like yourself than because of anyone being attracted to it. Or perhaps they added it because they do want Nazis to fuck off; not quite sure why this is being held against them.

                                                                      1. 4

                                                                        I’m not personally holding anything against them, they can have their project with their inside jokes and I think that’s perfectly good for them. And for anyone who joins in on the joke.

                                                                For the record, I happen to like extremely dark jokes. Even jokes about the Holocaust occasionally. But I don’t think that dark humor is going to attract a lot of people to your operating system. Also, I can like dark humor and find their jokes not funny. A picture of Hitler with a joke I don’t find funny in the caption is just a picture of Hitler, and to me that would seem weird and out of place.

                                                                I just happen to think that it’s indicative of a laissez-faire attitude towards being generally marketable, or towards being something that a majority of casual observers would feel enticed to use. And again, I don’t think there’s anything WRONG with this, just that the way they present themselves is slightly abrasive, and at one point was even more than slightly abrasive.

                                                                      2. 5

                                                                Mozilla used loads of Soviet-styled artwork in their heyday; that did not seem to make people shun them?

                                                                Note to 9front: use Genghis Khan-themed artwork next time. He killed more people than the Nazis (about 40 million, which amounted to ~11% of the world’s population), but most people won’t know that. You can have edgy images of mass murderers without getting people all riled up.

                                                                        1. 1

                                                                          Oh, Mozilla took some flak for that. Which was hilarious, but some people definitely were offended.

                                                                          It’s worth reading the entire story, as told by jwz - here’s a central quote in this context:

                                                                          We had to convince them that these “open source” people weren’t just a bunch of hippies and Communists.

                                                                          To that end, the branding strategy I chose for our project was based on propaganda-themed art in a Constructivist / Futurist style highly reminiscent of Soviet propaganda posters.

                                                                          And then when people complained about that, I explained in detail that Futurism was a popular style of propaganda art on all sides of the early 20th century conflicts; it was not used only by the Soviets and the Chinese, but also by US in their own propaganda, particularly in recruitment posters and just about everything the WPA did, and even by the Red Cross. So if you looked at our branding and it made you think of Communism, well, I’m sorry, but that’s just a deep misunderstanding of Modern Art history: this is merely what poster art looked like in the 1930s, regardless of ideology!

                                                                          That was complete bullshit, of course. Yes, I absolutely branded Mozilla.org that way for the subtext of “these free software people are all a bunch of commies.” I was trolling.

                                                                          I trolled them so hard.

                                                                          I had to field these denials pretty regularly on the Mozilla discussion groups; there was one guy in particular who posted long screeds every couple of weeks accusing us of being Nazis because of the logo. I’m not sure he really understood World War II, but hey.