1. 2

    How would humor be different in this language? You can’t hide the punchline while you’re setting it up. Are there jokes that are just as funny read backwards as forwards?

    1. 2

      I guess it would be somewhat similar to looking at a single-frame cartoon: https://www.google.com/search?q=single+frame+cartoon&tbm=isch

      I am sure humor would work differently in this language, but it might introduce alternative ways to express it?

      1.  

        I’d also add that, while Sapir-Whorf is in general…well, um, provably wrong, to be blunt…there are certain things, especially around humour, that are language-dependent. Puns are only available in some languages; German, for example, can’t really do them. Or they’re available spoken, but not written (for example, Japanese, which has many homophones, but uses a Chinese writing system in part to avoid ambiguity in written forms).

        In this case, I can imagine an almost reverse of the Japanese situation: some of these sentences would be recognizable as drawings, and could have a unique form of written-only pun depending on how that worked out.

        1.  

          German can do puns. Maybe Germans can’t, though. ;)

          1.  

            I speak German okay, not great, so I confess I’m forwarding what I was told. Can you give me a German pun? I love them in English.

            1. 5

              Germans can definitely do puns, and puns are in fact pretty common in German (though generally considered to be “lame” jokes). The German name is Kalauer. A classic one: “Es wird nie wieder ein Kalauer über meine Lippen kommen, und wenn du lauerst, bis du kahl wirst.” (Of course it doesn’t work in English, but the translation would be “Never again shall a pun cross my lips, even if you lurk till you turn bald”).

              1.  

                Ooh, nice! Thanks! I’ll need to ask my buddy what he actually meant.

              2.  

                Most famous German pun (at least among English speakers): https://genius.com/Rammstein-du-hast-lyrics

                When spoken, the phrases ‘du hast’ and ‘du hast mich’ can mean either ‘you have’ or ‘you hate’, and ‘you have … me’ or ‘you hate me’, respectively. When written, hate is spelled differently, i.e. hast -> hasst.

                In effect the song tricks the hearer into believing that the singer is accusing them of being hateful towards him. Only when the complete sentence is sung is it clear that the much tamer meaning ‘You asked me’ is meant the whole time.

                I am not a huge fan, but it’s such a famous example I thought it worth bringing up.

                1.  

                  Google for “Wortspiele”.

                  Wikipedia has a few: https://de.m.wikipedia.org/wiki/Wortspiel

            2.  

              That’s a good comparison! I think Scalar families is in part what prompted this thought. You could make the glyph for “big” be comically big, like 10x bigger than the rest of the sentence.

              Maybe you could show irony by making a big glyph that says “small”, or vice versa. Like the trope of a big guy named “Tiny”.

            3.  

              An ironic situation is ironic no matter which order you learn about it.

            1. 22

              This is a classic Microsoft move. It’s been done exactly the same with other pieces of software in the .NET Core community recently.

              Microsoft won’t embrace any outsider technology. Instead, they’ll build their copycat, and expect the rest of the world to embrace them.

              1. 7

                Microsoft won’t embrace any outsider technology. Instead, they’ll build their copycat, and expect the rest of the world to embrace them.

                There was a long discussion about a similar situation when Autofac was strangled; I expect ImageSharp to get competition soon, and probably others as well. (Btw, ImageSharp has a pretty nice API; I really liked working with it.)

                The really bad part is that the rest of the world automatically flocks around MS tech, even if it is technologically inferior (e.g. System.ServiceModel.SyndicationFeed, with a useless common abstraction for RSS/Atom feeds and a terrible API, vs. CodeHollow.FeedReader, which was a breeze. This is just one example off the top of my head.)

                I try to use 3rd-party tech on .NET, because I have had bad experiences with MS APIs. They are often badly designed and constantly in flux, needing constant rewrites, while I can usually get better APIs with a lower maintenance burden for the cost of some performance (or not), and usually with fewer (useless) features.

                1. 4

                  Can relate to this. Worked 4 years with Microsoft technologies. Using Microsoft’s own libraries was always a painful experience, and it was better to use some other third-party library. Every single time.

                  In four years I had to learn how to start a new .NET Core project like 5 times.

                  1. 1

                    If you don’t mind me asking, where did you go after MS? I moved to Ruby on Rails myself (nine years ago now!).

                    1. 1

                      After that, like a year ago, I moved to the Java world. The company I’m working for has a Vert.x+Groovy monolith that is being split up into Spring Boot+Java microservices. Not my dream stack, but I can be productive on it, and I’m starting to understand the Spring mindset, so, happy with it :)

                2. 5

                  And yet people constantly insist that Microsoft is different now… a good, benign Microsoft that embraces open source!

                  And they are still pulling these kinds of shenanigans all the time.

                  1. 2

                    It’s in their DNA, it seems. Little has changed since the days of The Halloween Documents.

                    http://www.catb.org/~esr/halloween/

                  2. 5

                    And they did the same to Stac Electronics with their Stacker compression software in 1993. MS was looking to acquire, then didn’t, and released MS-DOS 6.0 with DoubleSpace compression, developed in-house.

                    This leopard hasn’t really changed its spots.

                    1. 3

                      Just like when Apple stole Duet Display and F.lux.

                      1. 3

                        Duet feels very different to me; using iPad displays as secondary Mac displays had been a major feature request since literally the very first ones came out. I remember talking with people about this when the iPad was new, way back at Fog Creek at the latest, which would put these discussions at least as far back as 2014. Duet didn’t even launch until 2015. I’m not saying they shouldn’t be upset, but that one felt a bit obvious to me. And at any rate, the way screen sharing works on recent iPadOS versions is honestly pretty different from what Duet does. There’s overlap, of course, but I don’t feel (for better or worse) that Duet got directly cloned, nor do I feel like it was such an innovative concept that the authors can say “no one coulda thought of this!”.

                        F.lux, and AppGet, and (just to throw in an oldie) Sherlock, feel very different to me. F.lux was, at least as far as I’m aware, a brand-new concept that didn’t have precedents and certainly didn’t feel “obvious” to me; Apple integrating it was a big deal, and felt like a rip-off. This is a case where an app did something people hadn’t been asking for, and Apple cloned the concept fairly directly.

                        AppGet and Sherlock are different from F.lux, but end up feeling stolen because they’re both examples where there’s a clear need for something in that space, but the relevant companies directly cloned the competitor. Apple had been working on better search since the abandoned Copland project, but the level to which Sherlock copied Watson in both appearance and name just felt gross to me at the time. Likewise, copying the entire way AppGet works, and calling the result WinGet, just feels…well, duplicitous, at best.

                        I’m not trying to be an Apple apologist here, but I think lumping Duet into this discussion starts to miss the point a bit.

                    1. 12

                      FWIW, while I do like this list, I feel compelled to note that Fog Creek, as late as 2014, did not actually do 2, 3, 5, 6, or 7. Sure, any given project, at any given point, might’ve, but they certainly weren’t part of the culture. I have no idea what Glitch does these days on that front.

                      1. 3

                        So, did anyone bring this up at the lunch table or anything?

                        “Hey about that Joel test, people on the internet actually think we do this stuff!”

                        [Hearty laughter all around table]

                        “Yeah those yokels will believe anything”

                        Maybe a “fire and motion” tactic to bog down would-be competitors?

                        https://www.joelonsoftware.com/2002/01/06/fire-and-motion/

                        1. 17

                          Yes and no.

                          First, “we didn’t do them” does not mean “we didn’t want to do them.” We, like most teams in real life, valued and tried to do things, but might fail because the reality of shipping got in the way. So, while I was there, at any given point, you probably could’ve found us doing all twelve of these…but not all in one project. And when one of these points slid by for long enough, it gradually dropped off the radar.

                          Take FogBugz. FogBugz routinely had one-step builds, but not one-step deploys. Deploys, for a long time, consisted of manually, locally building a COM object, copying it to each web server, and registering it by hand with regsvr32. This continued after FogBugz On Demand was launched, and we did have outages from this. (One I remember specifically was Copilot getting taken down one day because someone had reordered database columns in SQL Server, by hand, for better aesthetics. They were in there in the first place because Copilot’s schema management at the time could only add columns, not delete, and they wanted to delete some extraneous ones.) Does that count as a violation of making a build in one step?

                          Copilot never had daily builds, even when Joel was directly overseeing us. I don’t think Kiln did, either. But we had one-click builds and would deploy fairly often. That’s definitely a literal violation of making daily builds, but maybe it doesn’t count? (Especially when I could trivially have cron’d daily builds for both!)

                          I could go on. Initial phases of projects often had “specs,” but they were rarely followed, and the finished project was often wildly different. Specs were rarely updated as the product was, so the result is that they were basically frozen-in-time musings about what we thought maybe things should look like. I actually have the Kiln 1.0 Spec in my office, and just looked at it, out of curiosity. A lot of these features did ship, but quite a few worked differently, a few so differently I’m not entirely sure it counts. And I don’t remember this spec being updated once we got going. (Something kind of evidenced by the fact that it was distributed on paper, in a binder, to the team.) Likewise, we had testers, but they couldn’t test the entire project. We kind of dogfooded, which kind of avoided this, but our dogfooding was done on a special server running a special build of the product that was built in a special way, and so its bug collection would frequently be different than what customers saw. And so on and so forth.

                          I am not saying I don’t think the Joel Test has value. I actually think it does: specifically, I think it’s a great list of some important things I sure hope most dev teams are trying to do. (Except item 11. That can go die in a fire.) My issue with the Joel Test is that, in real life, I have never seen any single company actually pass. That’s fine if it’s an aspirational target, but too often it’s instead used as a way to judge. (StackOverflow Careers, in fact, at least used to do this explicitly, showing the Joel Test rank for each company. Fog Creek inevitably had a 12 because of course it did, incidentally.)

                          I think the only one of these I genuinely found comical, and I do remember making fun of, is “Fix bugs before you make new ones.” If we actually did that, FogBugz 6 for Unix would never have shipped. “Keep your bug count from climbing too high” was definitely A Thing™, but the reality is that if you can ship, I dunno, file transfer in Copilot 2, but you still have ghosting issues, you’ll ship it.

                          1. 5

                            This is such a good comment that provides a foundation for empathy for teams that try to perpetually improve their own process, even while publishing publicly about their process. Sometimes “the grass is greener” even applies to a software shop you might have idolized in your youth. As I did, for Fog Creek. Thank you for sharing these details!

                            I feel like “The Joel Test” was a real accomplishment at the time. These days, its lasting impact is much more “meta” than “concrete” – simply the idea that you should evaluate the “maturity” of a software team by the ubiquity of their (hopefully lightweight) processes, and the way they assist programmers in shipping code. I could even make a “2.0” version right now, modernized for 2020. I left some unchanged.

                            1. Do you use git or another distributed VCS and is it integrated with a web-based tool?
                            2. Can you run any project locally in one command?
                            3. Can you ship any project to production in one command? (Or, do you use continuous integration?)
                            4. Do you track bugs using an issue tracker to which everyone has read/write access?
                            5. Do you tame your bug count weekly?
                            6. Do you have a real roadmap and is there rough team-wide agreement on what it is?
                            7. Does the value of a feature get elaborated in writing before the feature is built and shipped?
                            8. Do programmers have quiet working conditions?
                            9. Do you use the best tools money can buy?
                            10. Do you have separate alpha/beta/staging environments for testing?
                            11. Do new candidates write code during their interview?
                            12. Do you watch users using your software and analyze usage data?
                            13. Does your team dogfood early builds of the next version of your software?
                            1. 3

                              (Except item 11. That can go die in a fire.)

                              Are you talking about the specific interviewing practices that Joel recommends (e.g. his Guerrilla Guide), or writing code during interviews at all? I do think whiteboard coding should die in a fire (even for people who, unlike me, can actually do it; see my profile). But writing code on an actual computer seems a lot more reasonable.

                              1. 3

                                I don’t like “whiteboard” coding interviews, but I do like basic coding interviews with a real development environment and think they should be a requirement for programming teams.

                                1. 3

                                  White-boarding should definitely die. But I’m not sure I like coding in real time, either. Code submissions sure—especially if there’s a good write-up of your approach. But coding on a foreign laptop with someone staring at you is not how most people code, and I’ve seen great devs flail in this situation, and (when testing this technique) rejected candidates pass. So the signal-to-noise just seemed really, really low.

                                  Nowadays, I do a take-home and then do behavioral and structural interviews. That seems to work far more reliably.

                                  1. 2

                                    interviewing practices that Joel recommends (e.g. his Guerrilla Guide)

                                    I clicked through to that when I read the article, and I have to say I disagreed with a lot of what I read. For example:

                                    Firing someone you hired by mistake can take months and be nightmarishly difficult, especially if they decide to be litigious about it. In some situations it may be completely impossible to fire anyone.

                                    Maybe this is different in the US. Here in the UK, the norm is to start everyone with 3 months’ probation, with a week’s notice during that period. If during the 3 months you decide they’re not a good hire, you just let them know, pay them their week’s notice (you wouldn’t want them to actually work for the remainder of the week), and you’re done. The risk of litigation is very low, unless you do something stupid. There is a small cost associated with trying someone out and letting them go, but you get to find all the good candidates who haven’t devoted years to studying interviewing.

                                    recursion (which involves holding in your head multiple levels of the call stack at the same time)

                                    In my mind, that’s exactly the opposite of what recursion is about. Using recursion allows you to take a problem and focus on a tiny bit of it, without too much big picture thinking. For example, if you’re recursing over a tree, you don’t have to worry about the different levels of the tree: you just focus on what to do with the current node, and pass the remaining subtree on to the next level. As long as you end up with a base case (which is usually fairly obvious) eventually, there really isn’t a lot of complexity involved.
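
                                    For example (a minimal sketch; the Tree type and sum function are made-up names for illustration), summing a binary tree never requires holding more than one node in your head:

                                    interface Tree {
                                      value: number;
                                      left?: Tree;
                                      right?: Tree;
                                    }
                                    // Look only at the current node; the recursive calls handle the
                                    // subtrees, and a missing child is the obvious base case.
                                    function sum(node?: Tree): number {
                                      if (!node) return 0;
                                      return node.value + sum(node.left) + sum(node.right);
                                    }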

                                    Look for passion. Smart people are passionate about the projects they work on. They get very excited talking about the subject. They talk quickly, and get animated. Being passionately negative can be just as good a sign.

                                    I would say a bit of passion is good, but people who are too passionate have difficulty working as part of a team. They want it to work just so, and don’t appreciate their manager or the customer telling them that it needs to work a different way. You don’t want to work with someone who is sulking or trying to undermine things because they didn’t get their way on a subject they care deeply about. Someone whose main motivation is their passion may also lose interest if assigned tasks which are necessary but not directly related to their area of interest.

                                    The article also seems to change its tone halfway through: at the start, he’s determined to only hire the superstars. Later, he wants the far more modest “smart and gets things done”. It depends on your definition of superstars, but often the term is used for someone who can produce amazing work, but isn’t really a team player and can’t deal with the more mundane aspects (like getting stuff done).

                                    1. 3

                                      Joel’s posts are relatively dated at this point. When he wrote the guerrilla guide, it was definitely uncommon to have probation periods in the UK and these are a relatively new phenomenon.

                                      At the time, getting rid of bad hires was incredibly difficult in the UK and EU, compared to the US. I’d consider bad hires to be not those doing something egregious like assaulting other staff, but those with behaviour that impacts forward progress. The type that would require quite a long chat with a lawyer to explain. Fog Creek was based in New York, which may also have influenced Joel’s writing. Different states have different provisions in employment law. Without research, I suspect that people hired in NY have much more protection than someone hired in Florida, or even here in Pennsylvania. The canonical example is California, which has significant protections for employees.

                                      I’m glad you picked up on the “superstars” part. I don’t know if Joel would consider this a mistake, but his writing has been misinterpreted by many. This spawned many articles later on which fed into the cult of the rockstar programmer. I don’t think he had a desire to do this, but it’s interesting to see which ideas have proliferated and how the modest tones are lost.

                                      People also fail to look at Joel’s environment and culture at Fog Creek. This is not the universal environment or situation where programmers will exist. Some will be working in academia; others may be running a rivet company (as a friend of mine does). The Fog Creek approach can’t just be applied in whole to these situations. There is now a much broader range of material on managing programmers, but it was relatively limited back in the early 2000s, especially if you could exist mostly in a technical bubble. There were some great books on managing creative people (think: design and advertising) that applied to programmers in a lot of ways, but these were easy to ignore. Programmers had no exposure to interview training. Now there is much more discourse on various options, from hiring through to deployment of software.

                            1. 22

                              I don’t get it.

                              The very reason you cite for claiming that you’re no longer in Kansas (*nix) is in fact the very embodiment of the UNIX philosophy.

                              Said philosophy dictates that by virtue of a set of standard interfaces (namely - everything is a stream of bytes and variations on that theme) tools become pluggable, interchangeable modules that all interoperate like a dream because they all conform to standard interfaces.

                              fish works great as a *nix shell because it acts like… A shell :)

                              This is the power of the UNIX philosophy made manifest. Don’t question whether you’re conforming to some hidebound idea of what IS *nix based on a default set of stock utilities; revel in the fact that you’re leveraging the great work of others to embrace and extend the utility of your environment with this awesome paradigm.

                              1. 7

                                You hit the nail on the head. Unix shells hit the sweet spot of practicality, human readability, and computer readability.

                                If we walk back in time, the Unix tools we all know didn’t suddenly spring into existence all at the same time, but rather were born throughout the years. curl is much newer than grep. And jq is much newer than curl. And all these tools, while maintaining backwards compatibility, have throughout history incorporated new functionality that is today seen as part of their core value offer. For example, ‘grep -P’.

                                ripgrep, fd, et al. are great and excel in some usages. But they don’t render their ancestor relatives obsolete in any way. I have been using ag (the_silver_searcher) for almost a decade and love it, but I can’t imagine doing without grep. grep is still the gold standard and still does a ton of things that would only become obsolete if, say, ag or rg implemented them, at which point they become grep, which was never their point to begin with.

                                Enjoy the old reliable gems that always worked. Enjoy the new stuff that adds value with newer technologies. They are both great, it’s not a competition.

                                1. 4

                                  which was never their point to begin with

                                  That’s not totally true. When I set out to build ripgrep, I specifically wanted it to at least be a better grep than ag is. ag is not a particularly good grep. It has a lot of flag differences, does multiline search by default (except when searching stdin) and does smart case search by default. It also has sub-optimal handling of stream searching when compared to either grep or ripgrep. ripgrep doesn’t do multiline search by default, it doesn’t use smart case by default and generally tries to use the same flags as grep when possible.

                                  Overall, there’s a lot more in common between grep and ripgrep than there are differences, and that was most definitely intentional. For example, if you want to make ripgrep recursively search the same contents as grep, then it’s as easy as rg -uuu pattern ./.

                                  Probably the biggest things that grep has that ripgrep doesn’t are BREs and more tailored locale support (but ripgrep is Unicode aware by default, and lacks GNU grep’s performance pitfalls in that area). Other than that, ripgrep pretty much has everything that grep has, and then some, including things that a POSIX compatible grep literally cannot have (such as UTF-16 support).

                                  1. 3

                                    When I set out to build ripgrep, I specifically wanted it to at least be a better grep than ag is. ag is not a particularly good grep.

                                    We are drifting very far off-topic, but just to be clear, this is a huge part of why I use ripgrep over ag or ack, and why this post even got written: ripgrep is often, but not always, a drop-in, so it’s very tempting to just swap it in for grep. And that works…provided that, at a bare minimum, the other party has rg installed.

                                    (And you’ve done an absolutely phenomenal job with ripgrep, to be clear. I skipped past every single grep replacement until rg showed up. Thank you for putting so much time and thought into it.)

                                    1. 2

                                      I skipped past every single grep replacement until rg showed up

                                      Me too. :P

                                    2. 3

                                      Getting a reply from the man himself. How cool is that? I’ll take the chance to leave an honest Thank You for your work. xsv is a beast and a lifesaver.

                                      But on to the topic… perhaps ripgrep is the example that comes closest to the original. But ag, for example, is clearly opinionated; it even used to say on its webpage that it was more oriented to searching code (or was it ack that used to say that, maybe…). It will search the current folder by default, it defaults to fancier output with color, it will ignore, for example, the .git folder, and so on. The point I am trying to make is: a cornerstone like grep plays such an important role that the only way to replace it is to create a compatible implementation, at which point it becomes grep. I want to be able to use basic regular expressions when I want, and extended or PCRE when I want. To search inside the .git folder without looking up new flags in the manpage. I want to pull my old tricks with the flags that everyone knows whenever I need.

                                      But credit where it’s due: ack, ag, and rg did succeed in passing the “why would I use this instead of grep?” test.

                                      For example, if you want to make ripgrep recursively search the same contents as grep, then it’s as easy as rg -uuu pattern ./.

                                      Out of curiosity, why didn’t you go for compatibility with, say: grep -rn pattern .

                                      1. 2

                                        Yeah sure, you’re definitely right. That’s kind of what I meant by “not totally true.” A little weasely of me. ripgrep’s defaults are definitely tuned toward code searching in git repositories specifically. (By default, ripgrep respects your .gitignore, ignores hidden files and ignores binary files. That’s what -uuu does: turns off gitignore filtering, turns off hidden filtering and turns off binary filtering.) The main thing I wanted to convey in my previous comment is that when I originally designed ripgrep, I put careful thought and attention to making ripgrep a good grep itself, to the extent possible without compromising on the defaults that catapulted ack and ag to success. There are a surprising number of subtle details involved in that!

                                        Just to add a couple of clarifications (that you might already know):

                                        • ripgrep’s default regex flavor is pretty close to grep’s “extended” flavor. ripgrep has broader support for Unicode things while EREs have some locale specific syntax.
                                        • ripgrep also has a -P flag that, like grep, lets you use PCRE2.
                                        • With ripgrep, if you want to search in .git then doing it explicitly will work: rg foo .git. Otherwise, yeah, you’d want rg -uuu foo if you just wanted ripgrep to search the same things as grep. ag doesn’t have this convenience. There is no real way to get ag to search like grep would. (You can get close by using several flags.)

                                        Out of curiosity, why didn’t you go for compatibility with, say: grep -rn pattern .

                                        Because I perceived “recursive search of the current directory by default” as one of the keys to the success of ack and ag. (In addition to two other things: nicer output formatting and smart filtering.)

                                        Basically, I tried to straddle both lines. You can tack an rg something on to the end of a shell pipeline and it should work just as well as grep does. Case in point, ag just messes up even in really simple cases:

                                        [andrew@krusty ~]$ echo foo | grep -n foo
                                        1:foo
                                        [andrew@krusty ~]$ echo foo | rg -n foo
                                        1:foo
                                        [andrew@krusty ~]$ echo foo | ag -n foo
                                        foo
                                        

                                        Anyone who runs into that is going to be like, “okay, well, I guess I can’t use ag in shell pipelines.”

                                        ripgrep doesn’t have flag for flag compatibility like ag does, but it at least should get all the common stuff right.

                                1. 3

                                  I thought git mv was more explicit about renaming. I guess the automatic nature of log --follow and diff -M is helpful when you forget to mark a move. I do mark moves explicitly in Mercurial, either when doing it or after the fact with hg rename -A.

                                  1. 4

                                    Which makes sense, incidentally, because Mercurial does track renames (and copies!) explicitly in the manifest. Git is the only SCM I’m aware of that deliberately takes an explicit design stance against tracking renames. (Lots of others, e.g. CVS, don’t track renames, but they don’t call that out as a feature.)

                                  1. 10

                                    This could just as easily be titled “My customized, non-standard environment”. Yeah, the OP has installed a bunch of weird tools, but that doesn’t mean the *nix environment is not pretty much the same as it has been for the past 20 years; it just means that the author has installed a bunch of non-standard tools and likes using them. It doesn’t seem to say much about the state of the *nix ecosystem in general, except maybe that there are a lot more specialized tools you can use these days.

                                    1. 11

                                      Hmm. I hear what you’re saying, but it’s a bit more nuanced than that. For the last 25+ years I’ve been doing development, I’m used to seeing variations in e.g. awk v. gawk or bash v. sh v. dash or the like. I think writing that all off as a “customized, non-standard environment” generically is a bit strong, yet the idea of shell scripts localized to Linux v. macOS v. SunOS or the like is pretty normal—and we generally have tools to deal with it, because the differences are, generally, either subtle, or trivial to work around.

                                      What I’m observing now, and what I’m saying I think I’m part of the “problem” with, is a general movement away from the traditional tools entirely. It’s not awk v. gawk; it’s awk v. perl, 2020 edition. And I think the thing it says is we’re looking at a general massive shift in the Unix ecosystem, and we’re likely at the leading edge, where we’re going to see a lot of churn and a lot of variation.

                                      I’m hearing in your comment that I may not have conveyed that terribly well, so I’ll think about how to clarify it.

                                      1. 4

                                        I came here to post the same comment as @mattrose, and then read your response, which clarified your point pretty well. I think I can summarize the point by saying “general-purpose scripting languages are winning the ad hoc tools market on Unix, in part due to the rise of flavor-specific additions to POSIX, limiting compatibility and creating more specialized, non-portable ‘*nix’ shell scripts. This is more easily corrected by using one of the many flavors of a higher-level scripting language that augments the Unix API with its own standard APIs” – or something to that effect.

                                        1. 4

                                          Thank you for sharing your experiences. You conveyed your point beautifully.

                                          Observer bias is something that constantly comes to mind when I compare my experiences with others.

                                          Our environment doesn’t do much to lessen that bias, either.

                                          I don’t doubt what you have witnessed, just as I don’t doubt what mattrose has experienced.

                                          For what it’s worth, from what I have seen, I cannot say that I have seen anything that validates either of y’all’s experiences - but that’s because I have my head stuck in a completely different world ;)

                                          1. 4

                                            I think this is part of the Cambrian explosion of (open source) software. Twenty years ago, for any particular library there might be one or two well-maintained alternatives. OpenSSL and cURL, for example, achieved their central positions because they were the only reasonable option at the time.

                                            I think that even then, there was (relatively) more variety in shell tooling because these tools have a far larger influence on many people’s user experience.

                                            Compared to twenty years ago, the number of open source developers has grown by a lot. I’ve no idea how much, but I wouldn’t be surprised if it turned out to be a hundred or a thousandfold. It’s almost unthinkable today that there would be only one implementation of anything. And the variety of command-line tools has exploded even more.

                                            1. 3

                                              I think you’re right that there has been an explosion of tooling in the *nix world in the past 10 or so years; it seems every day there’s a new command-line tool that does a traditional thing in a new and non-traditional way, but…

                                              I think that the people who are writing software for *nix, and even the ones who are writing the new tools that you are so fond of, realize that there is a baseline *nix… platform, for lack of a better word, and try very hard (and trust me, it’s not easy) to keep to that baseline API, and only depend on tools that they know are widely distributed, or can be bootstrapped up from those tools via package management systems like apt, macOS Homebrew, or the FreeBSD pkg tools. I would never write software trusting that the user has fish already installed on their machine, but I would trust that there is stuff like a Bourne shell (or something compatible), and grep, and even awk (it’s so small it even fits in busybox).

                                              Personally, I think this explosion of tools is actually a good thing. I think it has upped user productivity and happiness to a great extent, because you can create your own environment that fits the way you do things. Don’t like vi, or emacs? Install vscode. Don’t like bash, use fish, or zsh, or even MS powershell. I write a lot of little tools in ruby, because I like the syntax, which means I end up writing a lot more scripts than I did back in the days when I was forced into using bash, or (euch) perl and I end up having a much nicer environment to work in.

                                              The original reason I read your post is that I am worried about a fragmentation of the *nix API, but at a more basic level. For example, for many years, the way to configure ip addresses was ifconfig. There were a few shell script wrappers around it, but the base command was always available. These days, on FreeBSD, you still use ifconfig, but on some newer Linuxes, it’s not even installed anymore. And everyone does dynamic network configuration using drastically different tools. MacOS moving away from the GNU utilities more and more, even when it doesn’t make sense (I just installed Catalina and I’m still trying to get used to zsh) is another example. And let’s not even get into the whole systemd thing. (FTR, I approve, but it bugs me that it’s so linux specific)

                                              Differences like these are troubling, and remind me of the bad old days when you had BSD, and Linux, and Solaris, and IRIX, and HP-UX, and AIX, and and and and. And every one of them had a different toolkit, and utilities.

                                              Interestingly enough, all of these other variants faded away due to being tied to proprietary hardware (except, kinda, Solaris), but there doesn’t seem to be anything stopping this from happening again, and I do see similar things happening again.

                                              1. 2

                                                The disappearance of ifconfig has nothing to do with dynamic configuration. ifconfig disappeared because its maintainers never adapted it to support new features of the network stack—not even support for multiple addresses on the same NIC. Someone could step up and do it, but no one did. In fact, the netlink API makes it much simpler to create a lookalike of the old Linux ifconfig or FreeBSD ifconfig, if someone feels like it. It would be no harder to create UI-compatible replacements for route, vconfig, brctl etc. There’s just hardly a reason to.

                                                The problem with making such a tool is that there’s a lot to do if one is to make it as functional as iproute2. I have most of it compiled in a handy format in one place. I can’t see how an ifconfig lookalike could be meaningfully extended to handle VRF—you’d have to have “vrfctl” plus new route options.

                                                The dynamic configuration tools call iproute2 in some form, usually. It has machine-readable output, even though the format could be better. Few are talking netlink directly.

                                              2. 1

                                                It’s not awk v. gawk; it’s awk v. perl, 2020 edition. And I think the thing it says is we’re looking at a general massive shift in the Unix ecosystem, and we’re likely at the leading edge, where we’re going to see a lot of churn and a lot of variation.

                                                Guess I don’t buy this premise or axiom. I write a lot of Bourne shell (note: not bash) that almost always runs on any Unix. It’s not particularly hard, but a lot of Linux developers just don’t seem to care or even try. Your perl example is good though, because I’ve rewritten a lot of that early-2000s nonsense back into plain olde shell when I find it, and made it smaller in the process.

                                                And you’re observing this where, exactly? Are you sure you’re not just in a self-reinforcing bubble? I’ve found that just teaching people how to write shell scripts with, say, shellcheck and the regular tools tends to get them to realize that all these fancy new tools might be great, but for stuff that should last, sticking with built-in tools isn’t that hard and means less effort overall in later maintenance.

                                            1. 3

                                              I’ve been watching this project for a while and I’m excited to play with it now that it’s at 1.0! But I’m a bit confused by the HTTP server example they have in here:

                                              import { serve } from "https://deno.land/std@0.50.0/http/server.ts";
                                              for await (const req of serve({ port: 8000 })) {
                                                req.respond({ body: "Hello World\n" });
                                              }
                                              

                                              Won’t this only ever handle one request at a time, or am I misunderstanding how async iterators and for await...of work? For a hello-world this doesn’t matter, but imagine instead

                                              for await (const req of serve({ port: 8000 })) {
                                                const body = await fetch('https://a-slow-api.com/v0/some_slow_resource').then(res => res.json());
                                                req.respond({ body });
                                              }
                                              

                                              Wouldn’t the server be unable to process any new requests coming in while awaiting the slow fetch, because a new loop iteration wouldn’t start until the previous loop body completes? Am I missing something here?

                                              1. 4

                                                fetch returns a promise. You’d chain that promise to req.respond, you wouldn’t await on it.

                                                1. 1

                                                  Whoops, you’re completely right. It’s only a problem if you directly await in the loop body, so you can just… not do that. Either by regular old promise chaining as you suggest, or making a non-awaited call to an async function. I think something about the top-level async/await threw me off :facepalm:

                                                  1. 1

                                                    Even in the body of the loop, you’d be fine; await is just syntactic sugar for .then(...). A non-awaited call to an async function will return a raw Promise immediately; awaiting such a function will return the Promise, but chain any actions you’re doing to an implicit .then(...). So the original example is roughly equivalent to

                                                    for (const reqPromise of serve({port: 8000})) {
                                                      reqPromise.then(req => req.respond({body: "hello, world"}));
                                                    }
                                                    

                                                    which makes it clearer that multiple requests can be served at once. And your example becomes

                                                    for (const reqPromise of serve({port: 8000})) {
                                                      reqPromise.then(req => {
                                                        fetch('https://a-slow-api.com/v0/some_slow_resource')
                                                          .then(res => res.json())
                                                          .then(body => req.respond({ body }));
                                                      });
                                                    }
                                                    

                                                    which again hopefully makes it clear that lots of fetch calls can be running simultaneously.

                                                    1. 1

                                                      I think if you do an actual await in the body of the loop (not nested in another async) it will block the next iteration (thus preventing handling multiple requests). The .then version won’t.
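
                                                      For instance, a minimal sketch of the non-blocking pattern against the same std serve API as above (handle and the slow-API URL are just illustrative):

                                                      import { serve, ServerRequest } from "https://deno.land/std@0.50.0/http/server.ts";
                                                      // Awaiting inside this function only delays this one response,
                                                      // not the accept loop below.
                                                      async function handle(req: ServerRequest): Promise<void> {
                                                        const res = await fetch("https://a-slow-api.com/v0/some_slow_resource");
                                                        req.respond({ body: await res.text() });
                                                      }
                                                      for await (const req of serve({ port: 8000 })) {
                                                        handle(req); // deliberately NOT awaited, so the loop keeps accepting
                                                      }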

                                              1. 7

                                                When I paint a mental image of these big-tech interviews, I imagine a monkey jumping through hoops.

                                                It has yet to be proven that implementing algorithms quickly and explaining how a 3-way handshake works correlates meaningfully with the position at hand. I’m sad to see that the computer-science interview process has more and more adapted to this mode rather than checking whether a candidate as a person brings in the right philosophy.

                                                Indeed, there needs to be qualification at hand, and it would be possible to check this in a small subsection of an interview; however, making it almost the sole aspect is worrying. When these people rise higher in the hierarchy of these companies, other skills are more relevant (soft skills, emotional intelligence, understanding office politics).

                                                When these big tech companies don’t make selections based on that, we shouldn’t be surprised when we end up with managers who lack said skills, are unable to make good business decisions, and might even be cold-hearted sociopaths.

                                                1. 14

                                                  I haven’t seen it, but I know (several of my colleagues were there when it happened) that they did an internal study at a former workplace, some time before I’d joined, as part of a wider effort to (potentially) revamp the interviewing process after a large reorg. The findings were pretty much unsurprising. It turned out, first of all, that there wasn’t much correlation between performance in the algorithm-heavy test and post-hiring activity. Worse, though, it turned out that performance in the algorithm-heavy test wasn’t a good predictor for the hire/no hire feedback, either. People who did very poorly usually got a no hire, but once you got past the “doesn’t know what a linked list is” level, lots of people did great, or at least okay, and got a no hire feedback, and lots of people did poorly but got a “hire” feedback.

                                                  Eventually, the whole mechanism remained in place (!!), for two reasons.

                                                  First, no one had an acceptable suggestion for how to go about evaluating fresh graduates (for various reasons, tests that you could take home with you weren’t considered a good idea).

                                                  Second, while virtually all programmers agreed that the tests were useless, virtually all hiring managers wanted them to stay. Realistically, if you cut through the standard corporate doublespeak, they wanted them to stay for two reasons. The most important one was that the test and the hire/no hire feedback gave them a rock-solid paper trail for every decision – no matter what happened, they provided the perfect cover. If job performance was terrible, then:

                                                  • Terrible score, good feedback? I trust my team to make the right decision, mistakes come with the territory of that, and numbers never paint the full picture of a person anyway.
                                                  • Good score, bad feedback? They did good on the interview, we had our doubts but we have to stay metrics-driven in our decisions.
                                                  • Good score, good feedback? They looked great in the interview.

                                                  Bad score and bad feedback obviously didn’t get you hired, and hiring people who did great on the job was obviously considered a success so nobody bothered to examine how that happened.

                                                  The other reason, which I have heard on more than one occasion (and not just there), is that, I quote, “people know that you have to learn these things if you want to get your foot in the door in our industry and we want to hire people who are willing to do that kind of work”.

                                                  1. 2

                                                    Even within Google there’s been recognition (over the past few years) that these whiteboard algorithm interviews are not very predictive of future job performance. We’ve been experimenting with a few alternate approaches, including (for new grads only) an evidence-based path to hiring: even if you don’t seem to be good at algorithms-on-whiteboard during interviews (but can at least write decent code on a laptop & display evidence of being able to learn new concepts during an interview), you can get a 1-year contract to actually work on up to 2 different teams. After that it’s much easier to base a hiring decision on your actual work.

                                                    1. 1

                                                      Did we work at the same company?

                                                      1. 2

                                                        I don’t know, but the stuff above describes almost every large company I’ve seen – so I guess in a way we did :-).

                                                    2. 8

                                                      Apple’s the only FAANG I’ve never been in the pipeline for, so I can’t comment on them. Facebook and Google both seem to be exactly what the stereotypes say: endless laborious algorithm-challenge stuff.

                                                      Netflix, though, I don’t know if it was just the specific team or not, but their process was really quick. I think something like ten days total from first phone call to onsite. And all the technical sessions involved realistic things the team would actually be doing, and seemed to evaluate on things that would actually matter on-the-job.

                                                      1. 3

                                                        Netflix generally does things differently to the big tech companies I believe. It doesn’t surprise me to hear that their hiring process is well thought out too.

                                                        1. 1

                                                          I feel like Netflix shouldn’t be compared to the other 4 companies in the list. It’s a lot smaller than them. Thinking of what the acronym would be without the N does explain why people feel the need to include it though.

                                                        2. 7

                                                          When I used to interview engineers for FB, the most obvious thing I could tell was when someone had found our interview questions and rehearsed the answers. Beyond that, they weren’t that useful.

                                                          1. 1

                                                            How did you judge people who you suspected had rehearsed? Did you see them as cheaters, capable, etc.?

                                                            1. 4

                                                              Where they had clearly regurgitated a memorised answer to the interview question, I noted that I didn’t have any information on their ability to solve novel programming problems.

                                                          2. 4

                                                            I feel the same way, even though I think that for an SRE the question regarding the TCP three-way handshake is relevant.

                                                            However, I also think that the interview process of a big tech company would, given the goals of the process and the company, not benefit from philosophy at all. “Monkeys jumping through hoops” fits a lot better. Unless you created some widely used programming language or kernel, or are otherwise very distinguished, in big companies you are by nature a tiny gear, and as such the interview process tries to find out whether you can be a tiny gear.

                                                            Also, when you are a big (as in many employees) company, what’s important is to have people who can quickly come, go, and be replaced if needed, simply because that happens more often. While philosophical alignment inside a team probably makes sense for working efficiently, especially in smaller teams where you get to know all the people (which I guess is why the “eating together” happened), I don’t think it is really relevant to have this at a company-wide level.

                                                            At the end of the day, as a regular SRE at a big company you need to know TCP and BGP inside and out to troubleshoot problems that occur. I assume the parameters would be something like “doesn’t require long on-boarding”, “has experience with something that looks like what we have”, “will try their best to do what this position requires”, so overall is cost-effective.

                                                            I also assume that consistent, quality work is more important for that position than, for example, having people excel because they are also a perfect fit on a personal level. At least to me, the generic introduction by the recruiters sounds like a “one of many SRE engineers” position, potentially with part of the pay being the ability to list Google on your CV.

                                                            What I want to say is that this process might work very well for Google, because of its size, company structure, form and goals. And given that you sound like you would not want to go through such a process you might not be part of their target audience.

                                                            Maybe you can compare it with this: developing and using Kubernetes or a programming language like Go (just random examples) is not the right decision for everyone, not even for somewhat similar companies, when it might be for Google.

                                                          1. 4

                                                            So I am going insane. I swear that the original version of POV-Ray or its predecessor DKBTrace used “hither” and “yon” for what is now called “camera” and “look_at”. I remember reading that back in the early 90’s and thinking those terms were hilarious and well-chosen and that usage cemented my usage of those words in similar not-so-serious contexts.

                                                            (This is distinct from the concept of “hither and yon clipping.”)

                                                            I cannot find any evidence that this was ever true. I’ve gone to Aminet and downloaded the 1991 version of DKBTrace, versions of POV-Ray from 1.0 up to the early 2000’s…no mention of those words whatsoever. I’ve searched old Usenet postings.

                                                            Someone help me out here, did I dream it?

                                                            (I also remember some public acrimony in the early days of POV-Ray where there was some sort of trademark dispute or something and some new challenger appeared on the Amiga text-driven 3D raytracing scene based on older versions of POV-Ray or something…my memory is fuzzy, since this is like 25 years ago…)

                                                            1. 5

                                                              You sure you’re not thinking of polyray, not POV-Ray? It’s been forever, but I think that one did indeed use hither and yon, the syntaxes were very similar, and so are the names.

                                                              Edit: Bingo. And note that the author of Polyray contributed to POV-Ray, so there may even have been a very early version that did use both. http://paulbourke.net/dataformats/polyray/

                                                              1. 2

                                                                Thank you! It’s possible. Polyray was never released on the Amiga, it looks like, so it would have to be some old pre-1.0 version of POV-Ray that was on the Amiga but had Polyray keywords. So far this is looking like the best option, I think.

                                                              2. 3

                                                                Spooky. Google shows that the book Physically Based Rendering: From Theory to Implementation from 2010 used these terms. Did you read that book?

                                                                1. 2

                                                                  Thank you for looking! “Hither” and “yon” are standard-ish terms in 3D graphics for defining clipping planes and camera position and stuff.

                                                                  But I remember POV-Ray supporting these terms as keywords specifically. I remember the documentation saying something like:

                                                                  set the camera position using camera<0,0,0> and where it's looking with look_at<0,0,0>.
                                                                  You can use the older hither<0,0,0> and yon<0,0,0> keywords if you'd like
                                                                  

                                                                  Something like that. They were supported as keywords in the POV-Ray language. I’m 99.9% sure.

                                                                  1. 2

                                                                    You should look in the POV-Ray repository.

                                                                    1. 1

                                                                      Thank you, but I don’t think that’s it. Those appear to be talking about hither/yon clipping, which is something else, and it doesn’t look like they were ever keywords in the language, just names of functions. It’s also from the wrong year, 2003 at the earliest (when OpenEXR was announced).

                                                                      So thank you, but I don’t think that’s it, unfortunately.

                                                                2. 2

                                                                  The Mandela effect is a pain. My two cents on the topic: back in the late 90s I spent some time playing around with POV-Ray and other open-source raytracers, and I don’t recall seeing those keywords used in any of the scene scripts I read.

                                                                  1. 2

                                                                    I swear that the original version of POV-Ray or its predecessor DKBTrace used “hither” and “yon” for what is now called “camera” and “look_at”.

                                                                    Could it be that you wrote a ray tracer by hand and you chose the identifier names hither and yon in your own ray tracer?

                                                                    1. 1

                                                                      Hah, that would be awesome but sadly no. I never wrote a ray tracer.

                                                                      (I am going to eventually do the Ray Tracer Challenge in my copious free time, though…)

                                                                      1. 2

                                                                        Try one! A very basic one is really rather easy, and it’s very satisfying to get some real images out of a handful of math.
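
                                                                        For a sense of scale, here’s a minimal single-sphere tracer in Python that writes a PPM image. This is an illustrative sketch made up for this comment, not anything from the Ray Tracer Challenge book:

                                                                            import math

                                                                            WIDTH, HEIGHT = 200, 200
                                                                            CENTER = (0.0, 0.0, -3.0)        # one sphere in front of the camera
                                                                            RADIUS = 1.0
                                                                            LIGHT = (0.577, 0.577, 0.577)    # unit-ish vector pointing toward the light

                                                                            def hit_sphere(ox, oy, oz, dx, dy, dz):
                                                                                # Solve |o + t*d - c|^2 = r^2 for the nearest positive t.
                                                                                lx, ly, lz = ox - CENTER[0], oy - CENTER[1], oz - CENTER[2]
                                                                                a = dx*dx + dy*dy + dz*dz
                                                                                b = 2.0 * (lx*dx + ly*dy + lz*dz)
                                                                                c = lx*lx + ly*ly + lz*lz - RADIUS*RADIUS
                                                                                disc = b*b - 4*a*c
                                                                                if disc < 0:
                                                                                    return None
                                                                                t = (-b - math.sqrt(disc)) / (2*a)
                                                                                return t if t > 0 else None

                                                                            with open("sphere.ppm", "w") as f:
                                                                                f.write(f"P3\n{WIDTH} {HEIGHT}\n255\n")
                                                                                for j in range(HEIGHT):
                                                                                    for i in range(WIDTH):
                                                                                        # One ray per pixel, from the origin through a screen plane at z = -1.
                                                                                        dx, dy, dz = (i / WIDTH) * 2 - 1, 1 - (j / HEIGHT) * 2, -1.0
                                                                                        t = hit_sphere(0, 0, 0, dx, dy, dz)
                                                                                        if t is None:
                                                                                            f.write("30 30 60 ")    # background color
                                                                                        else:
                                                                                            # Lambertian shading: brightness = max(0, normal . light)
                                                                                            nx = (t*dx - CENTER[0]) / RADIUS
                                                                                            ny = (t*dy - CENTER[1]) / RADIUS
                                                                                            nz = (t*dz - CENTER[2]) / RADIUS
                                                                                            lum = max(0.0, nx*LIGHT[0] + ny*LIGHT[1] + nz*LIGHT[2])
                                                                                            v = int(40 + 215 * lum)
                                                                                            f.write(f"{v} {v} {v} ")
                                                                                    f.write("\n")

                                                                        That really is the whole thing: a quadratic formula, a dot product, and an image format you can write with print statements.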

                                                                    2. 2

                                                                      Admittedly, it was a long time ago, but I don’t recall this being documented in version 2.0, the first one I used. I just tried it under DOSBox and I can’t get POV 1.0, the oldest version still available on the website, to play ball with me if I use hither, nor do any of the binaries contain this string (although that doesn’t mean much). Maybe this was supported in some version, but not documented?

                                                                      This is extraordinarily spooky because, now that you mention it, I could swear I remember it as well!

                                                                    1. 1

                                                                      I’d honestly entirely forgotten about Ogg. Is it still a thing in a meaningful sense, what with the MP3 patents having expired?

                                                                      1. 6

                                                                        Ogg is a container format, able to hold different codecs inside; Vorbis is the codec that was designed to replace MP3. These days Ogg is being used with the Opus codec, with quite some success.

                                                                        1. 3

                                                                          Ugh, I thought that Opus had its own container format/was using MKV already …

                                                                          Tried to parse Ogg once, wouldn’t recommend.

                                                                        2. 6

                                                                          Spotify uses ogg vorbis for streaming to their apps: https://en.wikipedia.org/wiki/Spotify#Technical_information

                                                                          1. 5

                                                                            Ogg Vorbis file sizes pretty regularly beat out VBR MP3s at the highest setting I could distinguish in a blind listening test. If a lossless source is available, I always prefer encoding Vorbis myself for use on my own (non-internet) music player! The criticisms of the Ogg container make sense, though. I’ve never really seen Vorbis in any other container, tbh.

                                                                            1. 3

                                                                              Old-style WebM files used Vorbis in a Matroska container.

                                                                        1. 5

                                                                          Is it just me, or is “cause of death” largely based on speculation (and often wrong)? (See text for ML and SmallTalk).

                                                                          I’d also argue that Scala should be on this list, and Pascal shouldn’t – there is a shit-ton of Pascal/Delphi/FreePascal code out there that is actively maintained and developed.

                                                                          1. 14

                                                                            Is it just me, or is “cause of death” largely based on speculation (and often wrong)? (See text for ML and SmallTalk).

                                                                            The ML I’m shakier on, but I’m pretty confident about SmallTalk. My arguments there are based on interviews with several contemporaries and reading a lot of primary texts. I can’t overstate just how much Java dominated the OO world. If you read OOP books from before 1995 they’re in a mix of Smalltalk, C++, Eiffel, and Ada. Past 1995 it’s all Java.

                                                                            1. 7

                                                                              Something that I think might have done Smalltalk and APL in is the image/workspace model not playing well with other tools. If you wanted to write Smalltalk code, most environments required you to live entirely in Smalltalk. There were amazing version control systems, and editors, and what-have-you in Smalltalk…but if you wanted to use your own VCS, or text editor, or whatever…well, sometimes it could be done and sometimes it couldn’t.

                                                                              1. 2

                                                                                Supporting your point, I’ve seen people in the past few years say that about Squeak: they didn’t want to ditch their existing environments.

                                                                                1. 3

                                                                                  Yep. And a big focus of Pharo (a Squeak fork) has been on better integrating with existing tools, such as Git, GitHub, external editors, and so on. It’s a work-in-progress, but it’s been fun to watch.

                                                                                  1. 2

                                                                                    Do you happen to know if they’ve landed better support for external editors in 8.0, or where I’d be able to read up on that?

                                                                                    Last time, the only thing that kept me from spending a lot more time trying out Pharo was that I was missing Vim keybindings; being able to edit from an external editor would be really nice.

                                                                              2. 6

                                                                                I think the cause of SmallTalk’s demise was that IBM pretty much took its SmallTalk devs, along with their SmallTalk IDE, and retargeted them to Java.

                                                                                So it’s not that Java just overtook SmallTalk, it’s that the most experienced SmallTalk devs were paid to not write SmallTalk and their IDE was turned into a Java IDE within a short time-frame, and the remaining ecosystem never recovered.

                                                                                1. 4

                                                                                  It’s “Smalltalk”, not “SmallTalk”. (i know it’s a trivial difference, but this has always bugged me…)

                                                                            1. 7

                                                                              Start your day with a bike ride. Take your dog for a walk. Go to the bakery and get a coffee for breakfast

                                                                              Ahem… that’s exactly what you’re not supposed to do. Everybody has been saying literally just one thing: stay at home and don’t go out unless you have to (for example: to the grocery store). Exceptions can be made for those whose job hasn’t been suspended and cannot be performed from home.

                                                                              I know it’s dreadful. However… This is the situation, there’s no going around it.

                                                                              Right now we all must just suck it up until this issue is solved.

                                                                              1. 27

                                                                                working from home doesn’t mean you can’t get out of the house at all, just avoid social contacts. Bike rides or dog walking should be totally safe?

                                                                                1. 4

                                                                                  working from home doesn’t mean you can’t get out of the house at all, just avoid social contacts. Bike rides or dog walking should be totally safe?

                                                                                  working from home doesn’t mean you can’t get out of home at all.

                                                                                  working from home during a coronavirus pandemic does mean you shouldn’t get out of home at all.

                                                                                    The thing is, if everybody goes to the bakery to get a coffee for breakfast, then we’re back at the starting point. Don’t focus on the potentially safe part. The author is specifically advising people to go out into a closed, public space. I can’t understand why you’re ignoring that part of the advice.

                                                                                  1. 20

                                                                                    You’re both right. The bakery part of the advice was probably a bad idea, but the bike-riding and dog-walking part was probably fine. I don’t think there’s any evidence, is there, that the virus can hang suspended in the air even once an infected person has left? It seems that if you stay a meter away from anyone else (and wash your hands when you get home) then you’re not going to get the virus simply from going outside.

                                                                                    (If this is wrong then I would love to know that, though!)

                                                                                    1. 3

                                                                                      From what I gather, the virus can spread through the air alone, but taking a walk without breathing in air that other humans nearby have just exhaled should be fine.

                                                                                      1. 5

                                                                                        The most recent data I’ve seen still supports that the droplets are heavy enough to fall to the ground within a meter or two. That’s why the quarantine distances are generally 2m or more.

                                                                                        1. 3

                                                                                          My understanding was that it mostly traveled in water droplets.

                                                                                      2. 2

                                                                                        (Replying to myself because it’s too late to edit.)

                                                                                        Here is a preprint of an article whose abstract says, “We found that viable virus could be detected in aerosols up to 3 hours post aerosolization.” It seems like the researchers only measured for three hours, so it’s not like the virus was gone at that point, although they observed that the concentration decreased exponentially during that time. This study has not been peer reviewed and I am not a doctor; I don’t know what implications this has for real-world transmission of the disease.

                                                                                        1. 1

                                                                                          I don’t think there’s any evidence, is there, that the virus can hang suspended in the air even once an infected person has left?

                                                                                          I don’t think there is, but I’m kind of curious what it would be like if you rode in a peloton. I was looking at some research on protective equipment someone had shared on Reddit related to the regular social-distancing recommendations, and it was interesting to see just how far we can spray germs.

                                                                                          More seriously, I think running or riding on your own is fine as long as you don’t get yourself into an incident that requires medical care. I normally start riding outside in April, but the Philadelphia weather is mild enough already. I’m probably going to stick to jogging, or easy cycling, just in case I have some stupid bike accident. I’m generally cautious, but the road cycling group I ride with pushes it (great for fitness!), and a year never passes without a few run-ins for others in the group.

                                                                                          1. 1

                                                                                            That’s a good point. This is a terrible time to take unnecessary risks: if you hurt yourself, you face the dual possibilities that you won’t be able to get medical treatment (because the facilities are full of people with covid-19) or that you will be able to get medical treatment but that you’ll be preventing someone else from being treated for the virus. For myself, I’m planning on staying inside except as demanded by my dog.

                                                                                        2. 5

                                                                                          I agree, going to a coffee shop might not be the best advice. I was reacting to the idea you should literally not get outside the house at all, which I find ludicrous (and bad for mental health, too — not sustainable).

                                                                                          1. 1

                                                                                            That is literally what is happening in Italy at the moment, and it will probably hit more countries very soon.

                                                                                        3. 1

                                                                                          Depends. If you’re staying home because you feel contagious it’s best to keep at it.

                                                                                        4. 7

                                                                                          This straight up isn’t helpful advice.

                                                                                          We can practice social distancing without quarantining ourselves. Going for a bike ride, walking your dog, or getting a coffee (short of the risk you take interacting with the barista!) doesn’t increase anyone’s risk of infection.

                                                                                          People have to maintain their mental health too, you know. That’s what this part of the article is about.

                                                                                          1. 5

                                                                                            or getting a coffee (short of the risk you take interacting with the barista!)

                                                                                            The risk you take interacting with the barista and everyone else in the cafe!

                                                                                            1. 6

                                                                                              The cafes around here are no longer full. Most people are getting drinks and leaving. They aren’t making a tightly packed queue. Those staying inside are 2-3m from each other.

                                                                                              Does this seem reasonable, based on the advice going around? I’m not sure, but I hope it’s enough to keep people safe while also allowing businesses and some sense of normality to keep operating.

                                                                                              1. 5

                                                                                                That’s totally reasonable. Also, there’s a high chance that infection will happen at some point; the question is when. Social distancing is a delay technique, not a contingency technique.

                                                                                                1. 1

                                                                                                  That’s exactly where I’m at.

                                                                                                  There are no lines. There is at most one other person waiting, and they stand a distance away. The barista is wearing gloves and a face mask.

                                                                                                  We all need to be reasonable here, for whatever values of reasonable make sense to you.

                                                                                                  Obv if you’re high risk you need to simply stay home and if you go out at all be HIGHLY risk averse.

                                                                                          1. 2

                                                                                            Mostly, this is just fun, but I rather enjoyed this quote:

                                                                                            “It doesn’t matter what the code is supposed to do,” the old man said. “Code doesn’t do what it’s supposed to do. It only does what it does.”

                                                                                            1. 4

                                                                                              That’s our fellow lobster @hyfen, pinged him in case anyone has questions! He is still actively working on this epic project.

                                                                                              1. 2

                                                                                                Is the source anywhere? Or if it’s closed, does he want testers at all?

                                                                                                1. 1

                                                                                                  At least there is this page where you can sign up for email updates or even email him: https://hyfen.net/memex/

                                                                                                1. 3

                                                                                                  Yes, with the exception that instead of “version branches” you create a branch for the release, prepare it there, tag the release, then merge it to master and call it a day: no develop/master split, and every commit on master is tagged.

                                                                                                  1. 6

                                                                                                    Using tags only for web-only/evergreen software can work, but branched versions are often a must-have for desktop work. If a security issue is discovered with version 1.3.0 through 1.6.7, you may need to release 1.3.1, 1.4.6, and 1.5.3 alongside 1.6.8. In that world, it’s best to have 1.3, 1.4, 1.5, and 1.6 branches, where e.g. 1.4.5 is a tag on the 1.4 branch. In this scenario, you’d want to fix the bug in the 1.3 branch and cut a tag, then merge that into the 1.4 branch and cut a tag, then merge into the 1.5 branch and cut a tag, etc. That workflow’s not doable using just tags.

                                                                                                    Again, for evergreen/web-only software, I’m with you, but version branches absolutely have their place.
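
                                                                                                    To make that flow concrete, here is a sketch in git terms, using the version numbers from the example above (the branch and tag naming is just one convention, nothing git itself prescribes):

                                                                                                        git checkout 1.3     # oldest affected version branch
                                                                                                        # ...commit the security fix here...
                                                                                                        git tag v1.3.1
                                                                                                        git checkout 1.4
                                                                                                        git merge 1.3        # carry the fix forward
                                                                                                        git tag v1.4.6
                                                                                                        git checkout 1.5
                                                                                                        git merge 1.4
                                                                                                        git tag v1.5.3
                                                                                                        git checkout 1.6
                                                                                                        git merge 1.5
                                                                                                        git tag v1.6.8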

                                                                                                    1. 3

                                                                                                      But you know that you can create a commit on a detached HEAD? You do not need a branch for that. And even then, you can create a temporary branch for that particular fix, e.g.:

                                                                                                      git checkout -b hotfix/space-overheating v1.3.0
                                                                                                      

                                                                                                      And after that you can create a new tag 1.3.1 that will not be reachable from master (as you would do anyway), and you can remove the temporary branch.
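
                                                                                                      For what it’s worth, the rest of that flow might look something like this (continuing the hypothetical branch and tag names from above):

                                                                                                          # ...commit the fix on the temporary branch...
                                                                                                          git tag v1.3.1
                                                                                                          git checkout master
                                                                                                          git branch -D hotfix/space-overheating   # -D, since the branch isn't merged;
                                                                                                                                                   # the tag keeps the commits reachable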

                                                                                                      1. 1

                                                                                                        I would not say “must-have”; I believe Google Chrome is doing just fine with a continuous release model, even though it is “desktop work”: https://medium.com/@aboodman/in-march-2011-i-drafted-an-article-explaining-how-the-team-responsible-for-google-chrome-ships-c479ba623a1b

                                                                                                        1. 4

                                                                                                          I’m aware of that, which is why I also very clearly, twice, called out evergreen software as also not needing branched versions.

                                                                                                          1. 1

                                                                                                            I wasn’t aware of the term “evergreen software”. Is there a definition around somewhere?

                                                                                                            1. 1

                                                                                                              I’ve heard it used informally to refer to, for example, an API that is continuously updated without a specific version number. Breaking changes are communicated through other channels, such as websites for developers.

                                                                                                          2. 2

                                                                                                            Google has the ability to say ‘this software is only supported for 6 weeks and then you must update’. 99% of other software vendors do not have that ability.

                                                                                                            1. 3

                                                                                                              Software users have been burned by vendors too often; that is why they demand “bugfix releases”. They do not trust devs to deliver a feature upgrade without regressions.

                                                                                                              1. 2

                                                                                                                I mean it’s not like people trust Google to deliver a feature upgrade without regressions. They just don’t really get any say in it. Google’s web browser isn’t a product.

                                                                                                      2. 2

                                                                                                        I don’t understand what branch by abstraction is about. From the website, it seems like a development strategy rather than a way to use version control.

                                                                                                        1. 1

                                                                                                          They go hand in hand:

                                                                                                          • Don’t do long lived (>1 week) branches, especially don’t do them to build large new features.

                                                                                                          • Instead, use a well-conceived feature-flag system to turn off in-progress code that you merge to master.

                                                                                                          It’s a negative suggestion, not a complete version control strategy.
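
                                                                                                          A minimal sketch of the feature-flag half of that, in Python (the flag name and helper here are made up for illustration, not from any particular library):

                                                                                                              import os

                                                                                                              def flag_enabled(name, default=False):
                                                                                                                  # Read a flag from the environment, so it can be flipped without a deploy.
                                                                                                                  value = os.environ.get("FEATURE_" + name.upper())
                                                                                                                  return default if value is None else value == "1"

                                                                                                              def render_old_checkout(cart):
                                                                                                                  return "old checkout: " + repr(cart)

                                                                                                              def render_new_checkout(cart):
                                                                                                                  return "NEW checkout: " + repr(cart)    # the in-progress rewrite

                                                                                                              def render_checkout(cart):
                                                                                                                  # The half-finished code path lives on master, but stays dark
                                                                                                                  # until someone sets FEATURE_NEW_CHECKOUT=1.
                                                                                                                  if flag_enabled("new_checkout"):
                                                                                                                      return render_new_checkout(cart)
                                                                                                                  return render_old_checkout(cart)

                                                                                                              print(render_checkout(["apples", "flour"]))

                                                                                                          The point is that the new code merges to master early and often, while the flag, not a long-lived branch, decides whether it runs.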

                                                                                                      1. 2

                                                                                                        I’m a little miffed that, five years or so after last trying to use CL seriously, Quicklisp is still not the de-facto package system and still does not have easy packages in every Linux distro.

                                                                                                        1. 4

                                                                                                          I’m pretty sure Quicklisp is the de-facto package system. ASDF is the build system, and is used by quicklisp. See also the question “How is Quicklisp related to ASDF?” here: https://www.quicklisp.org/beta/faq.html

                                                                                                          1. 3

                                                                                                            Perhaps what @icefox means (and what I sometimes long for) is that implementations do not ship with it in their image. This would make using packages in scripts a lot easier.

                                                                                                            1. 2

                                                                                                              Is that partly due to the fact that Quicklisp blurs the line between upgrades to Quicklisp itself, and upgrades to the collection of libraries it can install? To be clear, I used to write CL a decade ago, but I’ve not kept up; if my question doesn’t make sense, please call me out on it.

                                                                                                              1. 1

                                                                                                                I don’t know. I do not feel that line is very blurry :-)

                                                                                                        1. 7

                                                                                                          Hoping to hell my kid sleeps without night terrors for one forking night and making up whatever doesn’t work out with naps.

                                                                                                          1. 2

                                                                                                            I found it hard to find much information about this format, but somebody wrote a library, with an SVG converter.

                                                                                                            1. 4

                                                                                                              The linked article did, I thought, do a good job getting into the format. This is actually one of a few pieces of Haiku that I badly wish would leak out; it’s so well suited for icons on the web.

                                                                                                              1. 3

                                                                                                                I was hoping the article would give (or link to) a complete spec.

                                                                                                                I agree that it’s well-suited for icons on the web. Also, I wish that a format like this would replace SVG as the dominant vector graphics format. SVG files are huge, & a markup format would make sense if humans were reading & writing these by hand, but most SVG files are made in graphical editors & can’t really be understood by a human reader so there’s no point in keeping the bloat. A packed binary format that represented everything that SVG represents but in a handful of bytes would be really great. (HVIF doesn’t seem to be quite as expressive as SVG unfortunately.)

                                                                                                                1. 2

                                                                                                                  SVG is an XML schema, right? That explains why it’s so bulky. It makes some sense in a browser — a displayed SVG has its own DOM, which can be manipulated by JS — but it’s awful as a transfer format.

                                                                                                                  Apple platforms tend to use PDF for vector graphics. I know PDF started out as a wrapper around PostScript, but I think it’s a more compact binary format.

                                                                                                                  1. 3

                                                                                                                    PDF is a fairly bulky & hard to implement format, unfortunately.

                                                                                                                    Re: binary SVG – I think it’d be worthwhile to have a packed binary format that was exactly semantically equivalent to SVG, the way that BSON and msgpack are (almost-)exactly semantically equivalent to JSON. Then it can be inflated into a DOM without all the complex logic of parsing actual XML. (More achievable: have a binary format that only supports the whole of the subset of SVG supported by major vector graphics editing programs & SVG rendering libraries – and doesn’t represent the extra stuff that XML can have but will be ignored by every real implementation.) At work I’ve run into performance problems caused by sending large SVG files to users – for stuff like outlines, where even a naive binary format would shrink the file by at least two orders of magnitude. (You would get some but not all of these space savings from using conventional compression, but for the worst offender files, the window size would be too small.)
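
                                                                                                                  As a rough illustration of where those savings come from (a toy encoding made up for this comment, not a real proposed format), compare the same polyline as bare path text, as editor-style per-segment markup, and as packed floats:

                                                                                                                      import struct

                                                                                                                      # A polyline with 1,000 points, as might appear in an SVG outline.
                                                                                                                      points = [(i * 0.1, (i % 50) * 0.37) for i in range(1000)]

                                                                                                                      # Bare path text: the most compact encoding SVG itself offers.
                                                                                                                      path_text = "M " + " L ".join(f"{x:.4f},{y:.4f}" for x, y in points)

                                                                                                                      # Editor-style export: one element per segment, styles repeated each time.
                                                                                                                      editor_xml = "".join(
                                                                                                                          f'<line x1="{x1:.4f}" y1="{y1:.4f}" x2="{x2:.4f}" y2="{y2:.4f}" '
                                                                                                                          f'stroke="#000000" stroke-width="0.50"/>'
                                                                                                                          for (x1, y1), (x2, y2) in zip(points, points[1:])
                                                                                                                      )

                                                                                                                      # Naive binary: a point count followed by packed 32-bit floats.
                                                                                                                      binary = struct.pack("<I", len(points)) + b"".join(
                                                                                                                          struct.pack("<ff", x, y) for x, y in points
                                                                                                                      )

                                                                                                                      print(len(path_text), "bytes as bare path text")
                                                                                                                      print(len(editor_xml), "bytes as per-segment markup")
                                                                                                                      print(len(binary), "bytes packed binary")

                                                                                                                  The bare path string only costs a few times more than the binary, but the per-segment markup, which is much closer to what editors actually emit (with transforms and style attributes repeated on every element), is where the gap really opens up.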

                                                                                                            1. 3

                                                                                                              If you’ve not heard of it, Heptapod is a friendly (I think?) pseudofork of GitLab to support Mercurial. I’m not sure if the long-term plan is to merge back or not, but seeing such major projects moving to it makes me think it should at least stick around for a bit.