1. 3

    Here’s what it can’t do:

    • Compile rustc using all 6 cores at once – you’ll use up all your memory long before anything useful gets done.

    You might want to look into zram/zswap, which basically allows you to fit more into memory by compressing data, at the cost of a little extra CPU usage. I don’t know how easy it is to set up on the default OS (I’m using NixOS on my Pinebook Pro, so it’s just a case of setting zramSwap.enable = true for me). Anecdotally, it makes a big difference: The cost of compressing data is not really noticeable, but the extra swap space is.
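
    For reference, the NixOS side of this is tiny (a minimal sketch; `memoryPercent` is a separate, optional knob and 50 is just the usual default):

    ```nix
    {
      # Compressed swap in RAM: trades a little CPU for extra effective memory.
      zramSwap.enable = true;
      # Rough share of RAM to expose as zram swap (tune to taste).
      zramSwap.memoryPercent = 50;
    }
    ```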

    You also mention that you fitted an M.2 SSD. Assuming a reasonably speedy drive, a decent-sized swap file will help with large compiles.

    1. 2

      I’m planning to put NixOS on my PineBook Pro when it shows up. Would love to see a post on your config + experience deploying NixOS on Pine64. My initial research suggests I’ll be compiling a lot of stuff.

      1. 1

        My initial research suggests I’ll be compiling a lot of stuff.

        Mostly just the kernel I think, as there are still a few bits missing from the mainline kernel (looks like the device tree is making its way in, but has just missed 5.7 (?)). I also had to package the firmware blobs for the wifi/bt chip, but I’ll need to double check if that’s still necessary. As long as you choose a fairly recent channel (20.03 or unstable), most other packages are sufficiently up to date, so you can just get them from the main binary cache. Generally speaking, most things are working well; graphics acceleration just works, accelerated video playback with mpv just works.

        I will try to write a proper review of my experiences at some point.

    1. 9

      I can and will retrain my hand placement habits. After all, this touchbar-keyboard-trackpad combo is forcing many people to learn to place their hands in unnatural positions to accommodate these poorly designed peripherals.

      It is amazing to me what people put up with to use these devices. I generally find the issue with accidentally touching the trackpad so severe that I only use laptops with trackpoint and the first thing I do on my device is to disable the trackpad completely.

      1. 15

        Part of why I ended up becoming a programmer is frustration with a touchpad. It led me to keyboard-only UIs, which led me to Arch/XMonad, which led me to Haskell, which confused me but led me to Python, which… <10 years later> I have a career as a software engineer :)

        1. 2

          Part of why I got into programming more passionately was excitement when the Apple trackpad came out ten years ago. It led me to think about possibilities beyond keyboard-centric UIs. It led me to make zany things. While I’ve never succeeded professionally as a full-blown software engineer, it made me appreciate how hard developing great experiences for humans is.

        2. 5

          At work, we actually had to modify a piece of software to deliberately ignore most of the input from recent mac touchpads. The application is multi-touch capable, which on some of the hardware it runs on is really useful. However, on mac, the combination of the oversized touchpad and the fact that it doesn’t map to the screen (it’s mapped to a smaller area which follows the cursor around, so nobody really knows what they’re touching) meant that macbook users were constantly touching things with their tentacles which they didn’t mean to touch.

          1. 1

            macbook users were constantly touching things with their tentacles which they didn’t mean to touch.

            So, uh, what exactly do you do for work?

          2. 1

            Honestly, I liked trackpoints for several years, but after getting a Thinkpad with both trackpoint and trackpad, I have firmly settled on preferring trackpads for this reason: I can accurately point at things faster than with the trackpoint. I do use the trackpoint on rare occasion, but only when I need fine control with something, like moving in a very small screen area, or scrolling only a tiny bit.

            I acknowledge that many other people around the Internet have a problem with accidental palm touches, but, for some reason, that’s never been a problem for me. Then again, I haven’t used Windows in the last several years (only Linux and OSX), so maybe that’s the reason?

            1. 2

              The Thinkpad has those two big buttons at the top of the trackpad. They require force, so you won’t accidentally press them, and they’re placed about where my thumb wants to rest when I use the keyboard.

          1. 6

            Thank you for writing this. I know that in the past I’ve struggled trying to search for solutions and ask questions about Nix because I wasn’t sure if I was using the right terms.

            A couple of other terms which appear in the documentation and occasionally elsewhere:

            • Instantiate: transform a nix expression into a derivation.
            • Realise: transform a derivation into an output (by building it).

            I highlight these because I think that understanding and using them helps get a clearer idea of the whole nix expression --instantiate--> derivation --realise--> output process.
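
            These two steps map directly onto commands (hypothetical transcript; store hashes and versions elided, your output will differ):

            ❯ nix-instantiate '<nixpkgs>' -A hello
            /nix/store/…-hello-….drv

            ❯ nix-store --realise /nix/store/…-hello-….drv
            /nix/store/…-hello-…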

            Another term which shows up a lot is package. However, I don’t know if there is a precise definition for this in a Nix context.

            Nix store

            This is the filesystem storage for derivation outputs.

            It’s also where the derivations themselves (the .drv files) live. Not sure if that’s an important detail.

            1. 1

              I have not touched any of these concepts so far, so they’re still magic to me! But I will look into them. :-)

              (Only thing I’ve done with .drv files so far is sometimes cat them to find the actual output directory.)

              1. 2

                You can also use nix eval

                ❯ nix eval --raw nixpkgs.hello

                You can also use nix eval to evaluate other attributes:

                ❯ nix eval --json nixpkgs.ncdu.buildInputs

                Though at some point it’s nicer to switch to nix repl for inspecting stuff :).

                Anyway, thanks for the great post! This is something I can point people to in the future if they want to learn about core Nix concepts.

            1. 4

              I think the “line-based tools” argument can work both ways. If a single line encompasses multiple “code chunks”, as in u/Pistos’ example:

              some_object.chained_method1(arg).chained_method2(arg2, arg3).chained_method4();
              • I can’t use vim’s line-based commands to copy/delete a single chunk
              • If I replace chained_method2 with chained_method3, a line-based diff (most of them) will just show a change to the whole line.

              Whereas if it’s spread out:

                  some_object.chained_method1(arg)
                      .chained_method2(arg2, arg3)
                      .chained_method4();
              • line-based editing makes it easy to copy/paste/move code chunks
              • line-based diffs will just show the chunk that changed (and consequently won’t conflict if e.g. someone changed chained_method4 while I was changing chained_method2).

              Personally, I’m in u/tclu’s “short lines are easier for humans to read” camp, so I use short lines except when longer ones are necessary.

              I think programming languages and frameworks can have a huge impact on whether shorter lines are comfortable to write. For example, good namespacing/scope means that you don’t have to prefix every function and variable name with its scope.

              1. 2

                I’m confused. One of Fossil’s selling points is an integrated bug tracker, but scanning through the recent fossil development history, most of the links to bugs fixed refer to their forum, not their bug tracker. I was trying to see how the bug tracker integrates into the VCS, but I haven’t found a single commit which links to the bug tracker yet.

                1. 2

                  I’m not sure if I’m misunderstanding your point, but my understanding of “integrated/built-in bug tracker” is that the tickets, forum posts, and wiki are all cloned with the “VCS”, and synchronized together. So I type

                  fossil sync

                  in a local checkout, then open the web ui with

                  fossil ui

                  and can create tickets, comment on forum posts and edit wiki articles, even while I’m offline. I haven’t found out what the linking abilities are between commits and tickets (I’d have to try if the wiki syntax can be used), but that might be what you’re looking for.

                  1. 5

                    People using git and svn often go to great lengths to integrate various third-party issue trackers with their version control, so that they can:

                    • look at a commit and follow a link to the issue it fixes
                    • look at an issue and see all the commits which relate to it

                    I just assumed that given that the VCS and issue tracker come bundled together, linking issues and commits would be a “first class citizen”, and would be widely used. I was therefore surprised to not see much evidence of its use. I have now managed to find an example.

                1. 3

                  Rust again landed as the most loved language. I’ve noticed it has also climbed up into top best paying languages. Most likely due to its relatively young age it has an exceptional (years of programming experience)/salary ratio.

                  1. 3

                    climbed up into top best paying languages

                    Has it actually climbed up there from a much lower position, or do languages tend to start with a small band of well paid programmers and gradually spread out downwards to lower-paid programmers? I would guess that the companies driving adoption of new languages tend to be big companies paying relatively high salaries.

                    Most likely due to its relatively young age it has an exceptional (years of programming experience)/salary ratio.

                    I don’t think it’s quite as simple as that, as I believe (it doesn’t appear to be specified?) it’s the years of programming experience with any language that is measured, not the years of experience with the language in question. A young language that had high uptake among experienced programmers would have a high experience figure.

                    I wonder if their survey is consistent enough from year to year that they could plot some trends over time.

                    1. 16

                      Unfortunately, the comparison is written in such a clearly biased way that it probably makes fossil sound worse than it is (I mean, you wouldn’t need to resort to weasel words and name-calling if fossil were a valid alternative whose benefits spoke for themselves… right?). Why would anyone write like that if their aim is to actually promote fossil?

                      1. 5

                        The table at the top is distractingly polemic, but the actual body of the essay is reasonable and considers both social and technical factors.

                        My guess is that author is expecting the audience to nod along with the table before clicking through to the prose; it seems unlikely to be effective for anyone who doesn’t already believe the claims made in the table.

                        1. 4

                          This is what’s turned me off from even considering using it.

                        2. 12

                          “Sprawling, incoherent, and inefficient”

                          Not sure using a biased comparison from the tool author is useful. Even then, the least they could do is use factual language.

                          This is something that always gripes me when reading the recurring fossil evangelism: git criticism is interesting, and having a different view should give perspective, but the fossil author always uses this kind of language, which makes it useless. Git adapts to many kinds of teams and workflows. The only thing I take from his comparison is that he never learnt to use it and does not want to.

                          Now, this is also a very valid criticism of git: it is not a turn-key solution; it needs polish, and each team needs to build its own specific work organization around it. That’s a choice for the project team to make. Fossil wants to impose its own method, which of course gives a more architected, polished finish, but makes it impossible to use in many teams and projects.

                          1. 2

                            Maybe they don’t care about widely promoting fossil and just created that page so people stop asking about a comparison?

                          2. 5

                            One of the main reasons for me for not using Fossil is point 2.7 on that list: “What you should have done vs. What you actually did”. Fossil doesn’t really support history rewrites, so no “rebase” which I use nearly daily.

                            1. 2

                              This is also a problem with Git. Like you, I use rebase daily to rewrite history, when that was never really my objective; I just want to present a palatable change log before my changes are merged. Whatever happens before that shouldn’t require something as dangerous as a rebase (and force push).

                              1. 4

                                I don’t think it makes any sense to describe rebases as ‘dangerous’, nor to say that you want to present a palatable change log without rewriting history, unless you mean that you want the VCS to help you write nicer history in the first place?

                                1. 2

                                  Rebase is not dangerous. You have the reflog to get back to any past state if needed, you can rewrite as much as you need without losing anything.

                                  Now, I see only two ways of presenting a palatable change log: either you write it perfectly the first time, or you are able to correct it. I don’t see how any VCS would let you do the first. If you use a machine to try to present it properly (as fossil seems to strive to do), you will undoubtedly hit limitations, forcing the dev to work within those limitations to write something readable and meaningful to the rest of the team. I very much prefer direct control over what I want to communicate.
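
                                  To make the reflog point concrete, a throwaway-repo sketch (the identity settings and commit messages are made up for the demo):

                                  ```shell
                                  # Create a scratch repo with two commits, "lose" one, then recover it.
                                  set -e
                                  repo=$(mktemp -d) && cd "$repo"
                                  git init -q
                                  export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
                                  export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
                                  git commit -q --allow-empty -m "first"
                                  git commit -q --allow-empty -m "second"
                                  git reset -q --hard HEAD~1      # history rewrite: "second" vanishes from the log
                                  # ...but the reflog still records where HEAD was before the reset:
                                  git reset -q --hard 'HEAD@{1}'  # back to "second"; nothing was lost
                                  ```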

                                  1. 2

                                    I think whether rebase is dangerous depends on the interface you are using Git with. The best UI for Git is, in my opinion, Magit. And when doing a commit you can choose from a variety of options, one of them being “Instant Fixup”.

                                    I often use this when I discover that I forgot to check in a new file with a commit or something like that. It basically adds a commit, starts an interactive rebase, reorders the commits so that the fixup commit is next to the one being fixed, and executes the rebase pipeline.

                                    There are other similar options for committing and Magit makes this straight-forward. So much, indeed, that I have to look up how to do it manually when using the Git CLI.
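
                                    For the plain Git CLI, a rough equivalent of that instant-fixup flow looks like this (a sketch in a throwaway repo; `GIT_SEQUENCE_EDITOR=true` just accepts the generated todo list unchanged):

                                    ```shell
                                    set -e
                                    repo=$(mktemp -d) && cd "$repo"
                                    git init -q
                                    export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
                                    export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
                                    git commit -q --allow-empty -m "base"
                                    printf 'one\n' > feature.txt && git add feature.txt
                                    git commit -q -m "feature: add parser"
                                    # Oops, a file was forgotten. Commit it as a fixup of the last commit...
                                    printf 'two\n' > forgotten.txt && git add forgotten.txt
                                    git commit -q --fixup HEAD
                                    # ...then let autosquash fold it into "feature: add parser":
                                    GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash HEAD~2
                                    ```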

                                    1. 4

                                      I prefer to work offline. Prior to Git I used SVK as a frontend for SVN, since it allowed offline use. However, once Git was released I quickly jumped ship because of its benefits: a real offline copy of all the data and, for me, better functionality.

                                      In your linked document it states “Never use rebase on public branches” and goes on to list how to use rebase locally. So, yes, using rebase on public branches and force-pushing them is obviously only a last resort when things went wrong (e.g. inadvertently added secrets).

                                      Since I work offline, often piling up many commits before pushing them to a repo on the web, I use rebase when unpushed commits need further changes. In my other comment I mentioned forgotten files as an example. It doesn’t really make sense to add another commit saying “Oops, forgot to add file…” when I can just as easily fix up the wrong commit.

                                      So the main reason for using rebase for me is correcting unpushed commits which I can often do because I prefer to work offline, pushing the latest commits only when necessary.

                                      1. 2

                                        In addition to what @gettalong said, keep in mind the original use-case of git is to make submitting patches on mailing lists easier. When creating a patch series, it’s very common to receive feedback and need to make changes. The only way to do that is to rebase.

                                  1. 6

                                    There are vulnerabilities that can only exist in memory-safe languages (e.g. use of eval on untrusted inputs; eval tends to only exist in very high-level languages, which are all memory-safe)

                                    eval exists for C programs for any system that can shell out to gcc. I’m not sure if this is a nitpick or not given the prevalence of gcc. Hopefully these kinds of security issues are rare for any programming language.
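
                                    In that spirit, a crude sketch of C “eval” by shelling out to the system compiler (assumes a `cc` on the PATH; the file names are throwaway):

                                    ```shell
                                    # Generate C source at runtime, compile it, and run the result.
                                    workdir=$(mktemp -d)
                                    printf '%s\n' \
                                        '#include <stdio.h>' \
                                        'int main(void) { printf("%d\n", 6 * 7); return 0; }' \
                                        > "$workdir/gen.c"
                                    cc "$workdir/gen.c" -o "$workdir/gen"
                                    "$workdir/gen"    # prints 42
                                    ```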

                                    1. 9

                                      (Author here)

                                      eval exists on x86, just JMP to a buffer! (This worked better before no-exec was a ubiquitous thing). Still, I think it’s important to recognize that as an empirical matter, eval of source code doesn’t exist in memory-unsafe languages in a meaningful sense.

                                      An early draft of this post said that I honestly wasn’t sure what belonged in this category :-)

                                      1. 3

                                        “Static typing where possible, dynamic typing where needed” argues that eval is quite pervasive:

                                        Many people believe that the ability to dynamically eval strings as programs is what sets dynamic languages apart from static languages. This is simply not true; any language that can dynamically load code in some form or another, either via DLLs or shared libraries or dynamic class loading, has the ability to do eval. The real question is whether your really need runtime code generation, and if so, what is the best way to achieve this.

                                        This is not just a theoretical argument: dynamic compilation and class loading are not unusual in Java, it’s the way Ruby’s current draft JIT works, and Haskell wraps its own compiler in hint. Varnish’s config language works in a similar way.

                                        I still agree it’s rather fringe, but not unheard of and I’ve seen it considered in practice.

                                        1. 2

                                          Your paragraph on vulnerabilities specific to memory-managed languages mostly talks about “unsafe deserialization”. I don’t see how unsafe deserialization is limited to memory-safe languages. Is that what you meant to imply?

                                      1. 12

                                        Point of attention for all the readers: remember that the demographics of StackOverflow are not necessarily representative of the whole sector. Some numbers clearly don’t match (for example the ones about freelance/precarious work in the US). So take all these numbers as representative of a very specific sub-demographic.

                                        That said, the viz is dope.

                                        1. 8

                                          That said, the viz is dope.

                                          Two changes I think would make it more dope:

                                          1. Easy access to the actual question that was asked: Sometimes the text summary makes you wonder what the exact wording of the question was.
                                          2. Quite a few of the graphs just show an average, when they could also show some indication of the spread of data. For example, instead of the bar chart showing median salary for each language, I think it would be more interesting to see a ‘box and whisker’ graph, or even an actual frequency distribution for each language.
                                        1. 2

                                          My pipeline for ebooks is a Kobo reader (Forma), and Calibre with the Obok DeDRM plugin (running on Linux). I buy a book on Kobo.com, it automatically downloads to my reader, and next time I connect the reader to my computer I can import the book to Calibre and strip the DRM with a single click.

                                          For music I buy CDs and rip them myself or less frequently buy DRM-free MP3s. For movies I buy Blu-rays or DVDs and use MakeMKV and Handbrake to rip and compress them to my Plex server.

                                          Building a DRM-free media library is not prohibitively difficult if you are somewhat technical and willing to put in the work, but DRM-locked ecosystems and streaming services are so appealing to the casual masses, who are more concerned with ease of availability, that I don’t see them ever going away or getting better. On the contrary, I think it’s more likely that artists and movie studios will shift more and more to digital-only releases and we’ll slowly lose the option to buy and rip physical media.

                                          1. 1

                                            somewhat technical and willing to put in the work

                                            With drm-free e-books, I just buy them and read them. I have already put in the work to earn the money to buy books. I do not wish to put in any more work to maybe be allowed to read the book I’ve bought. Even if I’d already got the DeDRM thing working, I would never be sufficiently confident that it would definitely work on a new drm-encumbered e-book I was considering buying.

                                            The worst thing about the whole DRM experience is that the two main groups that it punishes are the people who pay for a legitimate copy and have to jump through hoops to read it, and the authors, who lose sales because even when you are willing to spend the money, ‘piracy’ is a much easier, safer, route to getting something you can actually read.

                                          1. 12

                                            I won’t buy a DRM’d eBook, that’s for sure. I’ve bought one or two DRM-free ebooks before. A few weeks ago I had a similar experience. I wanted to read a particular book and I bought it knowing it was DRM’d. I couldn’t get the thing to work after spending two hours with tech support. I ended up cancelling the order.

                                            My wife recently bought a kindle and reads lots of stuff on it, and she understands that she doesn’t really ‘own’ the books, but the value is just to tear through lots of content, not to keep the pages forever. I suppose if you understand what you’re buying, and it works, it’s fine.

                                            But really, DRM sucks. I thought we nailed this down several years ago?

                                            1. 2

                                              Calibre is a godsend for us Kindle owners.

                                              1. 2

                                                I wanted to read a particular book and I bought it knowing it was DRM’d. I couldn’t get the thing to work after spending two hours with tech support. I ended up cancelling the order.

                                                I’m impressed that you managed to cancel the order. Most e-book stores I’ve seen have a policy along the lines of, “you’ve downloaded it, you can’t ‘return’ it or prove you haven’t kept a copy, so we won’t refund you, regardless of whether you are actually able to access the content you supposedly ‘bought’”. I guess maybe devoting two hours of your time to the issue convinced them that it really didn’t work.

                                                Like you, I never buy drm-encumbered e-books. I don’t think I have the necessary devices capable of running the stuff required to open them (as I understand it, the calibre workaround referred to in the article requires Windows or OS X). By contrast, I happily buy drm-free e-books from publishers such as Manning.

                                              1. 2

                                                I mostly use the “News Downloader” plugin in KOReader on my e-reader.

                                                1. 2

                                                  How do you deal with padding on websites and other messy stuff? Do you just skip over the pages you do not care about?

                                                  I really wish to use it more.

                                                  1. 2

                                                    It works better with some feeds than others, so it really depends on the individual feed. I have a couple of feeds where it has to download the full article (because the feed only has a summary), and it ends up dragging in a load of extra stuff (comment sections, footers, etc.). However, at least for the feeds I’m reading, that extra stuff tends to appear at the end, so when I get to the end of the article text, I just stop.

                                                    Over time, I’ve probably ended up whittling the feeds I read down to a subset that works well with KOReader (I do use other feed readers on other platforms). For content which works well on an e-reader, (text and pictures) it works really well: I get to read the articles away from all the distractions that you have on a computer or smart phone, on a screen that’s comfortable to look at, with my favourite font (alegreya :) ), and all the style tweaks that KOReader provides (once downloaded, articles are written as epubs, so they are treated just like any other book).

                                                    The best solution is for the publisher to include the full article text (and nothing else) in the rss or atom feed. Many blogs do do this, and it makes it really simple for the client to produce something which is 100% useful content with zero effort (make sure to set download_full_article=false for feeds like this, so it doesn’t try to download the article instead). When the feed only contains a summary, we have to fetch the actual content from the website. As long as the website is fairly simple and focusses on the content, this still works well in most cases, but can add a little bit of noise with maybe a header, a footer and a comment section. As sites get more complex and add more non-content around and in between the actual content, it gets increasingly hard to separate the content from the garbage.

                                                    Taking “modern” real-world html and trying to distill out the actual content is a very difficult problem, but there may be more we can do in future (We have per-feed options, so a solution wouldn’t have to work perfectly for every single feed for it to be included: It can be off by default, or able to be turned off for the odd site where it fails, as long as it was an improvement in enough cases to justify any added complexity. Practical suggestions and/or code are always welcome!).

                                                    1. 2

                                                      I can recommend the approach taken by miniwebproxy, filtering certain CSS elements to limit content.

                                                      Thanks for your reply.

                                                      1. 2

                                                        Thanks for the link! I’ll have a play with it later. From the description, it looks like it might also deal with the other issue, where what appears to be a simple image or code snippet when viewed in a browser is actually some complex dynamic-loading thing, which ends up being omitted from the output.


                                                          I actually implemented such a patch in KOReader, so the issue is now solved and the fix will be in the 2020.06 version :).

                                                1. 1

                                                  We also have if and do:

                                                  if foo
                                                  do this

                                                  Note, if you will, that these special operators interpret their arguments in a non-function normal-order way. They interpret their arguments syntactically!

                                                  I don’t quite understand what this sentence means, and why if is a special operator instead of a regular function which takes 3 arguments.

                                                  1. 4

                                                    Because you have to delay the evaluation of the then or else branch depending on the result of the condition. An if statement can be implemented as a function if the language uses lazy evaluation, but with eager evaluation you have to delay the evaluation of the branches with a macro or something.
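
                                                    A quick shell illustration of the same point (`my_if` is a made-up helper): because a function’s arguments are evaluated before it runs, both “branches” have their side effects regardless of the condition:

                                                    ```shell
                                                    # An ordinary function cannot act like `if` under eager evaluation.
                                                    my_if() { [ "$1" = true ] && echo "$2" || echo "$3"; }

                                                    log=$(mktemp)
                                                    # Both command substitutions run *before* my_if is called:
                                                    result=$(my_if true "$(echo then >>"$log"; echo A)" "$(echo else >>"$log"; echo B)")
                                                    echo "$result"     # A
                                                    wc -l < "$log"     # 2 -- both branch expressions executed
                                                    ```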

                                                    1. 2

                                                      Good point. I was sub-consciously thinking in terms of lazy evaluation, while this language is strict.

                                                  1. 12

                                                    FWIW, while I do like this list, I feel compelled to note that Fog Creek, as late as 2014, did not actually do 2, 3, 5, 6, or 7. Sure, any given project, at any given point, might’ve, but they certainly weren’t part of the culture. I have no idea what Glitch does these days on that front.

                                                    1. 3

                                                      So, did anyone bring this up at the lunch table or anything?

                                                      “Hey about that Joel test, people on the internet actually think we do this stuff!”

                                                      [Hearty laughter all around table]

                                                      “Yeah those yokels will believe anything”

                                                      Maybe a “fire and motion” tactic to bog down would-be competitors?


                                                      1. 17

                                                        Yes and no.

                                                        First, “we didn’t do them” does not mean “we didn’t want to do them.” We, like most teams in real life, valued and tried to do things, but might fail because the reality of shipping got in the way. So, while I was there, at any given point, you probably could’ve found us doing all twelve of these…but not all in one project. And when one of these points slid by for long enough, it gradually dropped off the radar.

                                                        Take FogBugz. FogBugz routinely had one-step builds, but not one-step deploys. Deploys, for a long time, consisted of manually, locally building a COM object, copying it to each web server, and registering it by hand with regsvr32. This continued after FogBugz On Demand was launched, and we did have outages from this. (One I remember specifically was Copilot getting taken down one day because someone had reordered database columns in SQL Server, by hand, for better aesthetics. They were in there in the first place because Copilot’s schema management at the time could only add columns, not delete, and they wanted to delete some extraneous ones.) Does that count as a violation of making a build in one step?

                                                        Copilot never had daily builds, even when Joel was directly overseeing us. I don’t think Kiln did, either. But we had one-click builds and would deploy fairly often. That’s definitely a literal violation of making daily builds, but maybe it doesn’t count? (Especially when I could trivially have cron’d daily builds for both!)

                                                        I could go on. Initial phases of projects often had “specs,” but they were rarely followed, and the finished project was often wildly different. Specs were rarely updated as the product was, so the result is that they were basically frozen-in-time musings about what we thought maybe things should look like. I actually have the Kiln 1.0 Spec in my office, and just looked at it, out of curiosity. A lot of these features did ship, but quite a few worked differently, a few so differently I’m not entirely sure it counts. And I don’t remember this spec being updated once we got going. (Something kind of evidenced by the fact that it was distributed on paper, in a binder, to the team.) Likewise, we had testers, but they couldn’t test the entire project. We kind of dogfooded, which kind of avoided this, but our dogfooding was done on a special server running a special build of the product that was built in a special way, and so its bug collection would frequently be different than what customers saw. And so on and so forth.

                                                        I am not saying I don’t think the Joel Test has value. I actually think it does: specifically, I think it’s a great list of some important things I sure hope most dev teams are trying to do. (Except item 11. That can go die in a fire.) My issue with the Joel Test is that, in real life, I have never seen any single company actually pass. That’s fine if it’s an aspirational target, but too often it’s instead used as a way to judge. (StackOverflow Careers, in fact, at least used to do this explicitly, showing the Joel Test rank for each company. Fog Creek inevitably had a 12 because of course it did, incidentally.)

                                                        I think the only one of these I genuinely found comical, and I do remember making fun of, is “Fix bugs before you make new ones.” If we actually did that, FogBugz 6 for Unix would never have shipped. “Keep your bug count from climbing too high” was definitely A Thing™, but the reality is that if you can ship, I dunno, file transfer in Copilot 2, but you still have ghosting issues, you’ll ship it.

                                                        1. 5

                                                          This is such a good comment that provides a foundation for empathy for teams that try to perpetually improve their own process, even while publishing publicly about their process. Sometimes “the grass is greener” even applies to a software shop you might have idolized in your youth. As I did, for Fog Creek. Thank you for sharing these details!

                                                          I feel like “The Joel Test” was a real accomplishment at the time. These days, its lasting impact is much more “meta” than “concrete” – simply the idea that you should evaluate the “maturity” of a software team by the ubiquity of their (hopefully lightweight) processes, and the way it assists programmers in shipping code. I could even make a “2.0” version right now, modernized for 2020. I left some unchanged.

                                                          1. Do you use git or another distributed VCS and is it integrated with a web-based tool?
                                                          2. Can you run any project locally in one command?
                                                          3. Can you ship any project to production in one command? (Or, do you use continuous integration?)
                                                          4. Do you track bugs using an issue tracker to which everyone has read/write access?
                                                          5. Do you tame your bug count weekly?
                                                          6. Do you have a real roadmap and is there rough team-wide agreement on what it is?
                                                          7. Does the value of a feature get elaborated in writing before the feature is built and shipped?
                                                          8. Do programmers have quiet working conditions?
                                                          9. Do you use the best tools money can buy?
                                                          10. Do you have separate alpha/beta/staging environments for testing?
                                                          11. Do new candidates write code during their interview?
                                                          12. Do you watch users using your software and analyze usage data?
                                                          13. Does your team dogfood early builds of the next version of your software?
                                                          1. 3

                                                            (Except item 11. That can go die in a fire.)

                                                            Are you talking about the specific interviewing practices that Joel recommends (e.g. his Guerrilla Guide), or writing code during interviews at all? I do think whiteboard coding should die in a fire (even for people who, unlike me, can actually do it; see my profile). But writing code on an actual computer seems a lot more reasonable.

                                                            1. 3

                                                              I don’t like “whiteboard” coding interviews, but I do like basic coding interviews with a real development environment and think they should be a requirement for programming teams.

                                                              1. 3

                                                                White-boarding should definitely die. But I’m not sure I like coding in real time, either. Code submissions sure—especially if there’s a good write-up of your approach. But coding on a foreign laptop with someone staring at you is not how most people code, and I’ve seen great devs flail in this situation, and (when testing this technique) rejected candidates pass. So the signal-to-noise ratio just seemed really, really low.

                                                                Nowadays, I do a take-home and then do behavioral and structural interviews. That seems to work far more reliably.

                                                                1. 2

                                                                  interviewing practices that Joel recommends (e.g. his Guerrilla Guide)

                                                                  I clicked through to that when I read the article, and I have to say I disagreed with a lot of what I read. For example:

                                                                  Firing someone you hired by mistake can take months and be nightmarishly difficult, especially if they decide to be litigious about it. In some situations it may be completely impossible to fire anyone.

                                                                  Maybe this is different in the US. Here in the UK, the norm is to start everyone with 3 months probation, with a week’s notice during that period. If during the 3 months you decide they’re not a good hire, you just let them know, pay them their week’s notice (you wouldn’t want them to actually work for the remainder of the week) and you’re done. The risk of litigation is very low, unless you do something stupid. There is a small cost associated with trying someone out and letting them go, but you get to find all the good candidates who haven’t devoted years to studying interviewing.

                                                                  recursion (which involves holding in your head multiple levels of the call stack at the same time)

                                                                  In my mind, that’s exactly the opposite of what recursion is about. Using recursion allows you to take a problem and focus on a tiny bit of it, without too much big picture thinking. For example, if you’re recursing over a tree, you don’t have to worry about the different levels of the tree: you just focus on what to do with the current node, and pass the remaining subtree on to the next level. As long as you end up with a base case (which is usually fairly obvious) eventually, there really isn’t a lot of complexity involved.
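
                                                                  To illustrate the point (this sketch is mine, not from the original comment): summing a binary tree recursively, each call only deals with the current node and delegates the subtrees, so you never need to hold multiple stack levels in your head.

                                                                  ```python
                                                                  # Illustrative sketch: summing the values in a binary tree.
                                                                  # Each call looks only at the current node and hands the
                                                                  # subtrees to the recursion; the base case (an empty subtree)
                                                                  # is obvious.

                                                                  class Node:
                                                                      def __init__(self, value, left=None, right=None):
                                                                          self.value = value
                                                                          self.left = left
                                                                          self.right = right

                                                                  def tree_sum(node):
                                                                      if node is None:   # base case: empty subtree sums to 0
                                                                          return 0
                                                                      # focus on the current node; recursion handles the rest
                                                                      return node.value + tree_sum(node.left) + tree_sum(node.right)
                                                                  ```

                                                                  At no point does the reader need to track the different levels of the tree; `tree_sum(Node(1, Node(2), Node(3)))` simply evaluates to 6.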

                                                                  Look for passion. Smart people are passionate about the projects they work on. They get very excited talking about the subject. They talk quickly, and get animated. Being passionately negative can be just as good a sign.

                                                                  I would say a bit of passion is good, but people who are too passionate have difficulty working as part of a team. They want it to work just so, and don’t appreciate their manager or the customer telling them that it needs to work a different way. You don’t want to work with someone who is sulking or trying to undermine things because they didn’t get their way on a subject they care deeply about. Someone whose main motivation is their passion may also lose interest if assigned tasks which are necessary but not directly related to their area of interest.

                                                                  The article also seems to change its tone halfway through: At the start, he’s determined to only hire the superstars. Later, he wants the far more modest “smart and gets things done”. It depends on your definition of superstars, but often the term is used for someone who can produce amazing work, but isn’t really a team player and can’t deal with the more mundane aspects (like getting stuff done).

                                                                  1. 3

                                                                    Joel’s posts are relatively dated at this point. When he wrote the Guerrilla Guide, probation periods were definitely uncommon in the UK; they are a relatively new phenomenon.

                                                                    At the time, getting rid of bad hires was incredibly difficult in the UK and EU, compared to the US. I’d consider bad hires not to be those doing something egregious like assaulting other staff, but those whose behaviour impacts forward progress. The type that would require quite a long chat with a lawyer to explain. Fog Creek was based in New York, which may also have influenced Joel’s writing. Different states have different provisions in employment law. Without research, I suspect that people hired in NY have much more protection than someone hired in Florida, or even here in Pennsylvania. The canonical example is California, which has significant protections for employees.

                                                                    I’m glad you picked up on the “superstars” part. I don’t know if Joel would consider this a mistake, but his writing has been misinterpreted by many. This spawned many articles later on which fed into the cult of the rockstar programmer. I don’t think he had a desire to do this, but it’s interesting to see which ideas have proliferated and how the modest tones are lost.

                                                                    People also fail to look at Joel’s environment and culture at Fog Creek. This is not the universal environment or situation where programmers will exist. Some will be working in academia; others may be running a rivet company (a friend of mine). The Fog Creek approach can’t just be applied wholesale to these situations. There is now a much broader range of material on managing programmers, but it was relatively limited back in the early 2000s, especially if you could exist mostly in a technical bubble. There were some great books on managing creative people (think: design and advertising) that applied to programmers in a lot of ways, but these were easy to ignore. Programmers had no exposure to interview training. Now there is much more discourse on various options, from hiring through to deployment of software.

                                                          1. 4

                                                            Very interesting that it’s faster than the previous version written in Haskell. I’d love to hear more about what the contributing factors were to the performance increase.

                                                            1. 25

                                                              This question has come up a few times, so I’ve rattled off a thing that I think covers the main factors: https://www.type-driven.org.uk/edwinb/why-is-idris-2-so-much-faster-than-idris-1.html

                                                              tl;dr: it turns out it’s much easier to improve performance spectacularly when it starts so bad :)

                                                              1. 2

                                                                While you are here, is the Type Driven Development book still relevant? It’s from 2017 so I am somewhat skeptical how relevant it is now.

                                                                I’d like to give Idris a go :)

                                                                  1. 3

                                                                    Once Idris 2 is released, do you have any plans to update the book / produce a new edition?

                                                                1. 1

                                                                  Thank you!

                                                              1. 3

                                                                It contains all the artful elements of FreeDesktop code, including the lack of knowledge what actually does and 20 ways of configuring, all of them barely working.

                                                                I think there’s at least one word missing from this sentence. Did you perhaps mean, “including the lack of knowledge of what it actually does”?

                                                                1. 2

                                                                  Yeah, it seems so. It’s rephrased now, I started proofreading quite recently so there are still some mistakes.

                                                                1. 5

                                                                  In the original declaration by the GNOME project that they would fight back (https://www.gnome.org/news/2019/10/gnome-files-defense-against-patent-troll/), they announced that they would do three things:

                                                                  First: a motion to dismiss the case outright. We don’t believe that this is a valid patent, or that software can or should be able to be patented in this way. We want to make sure that this patent isn’t used against anyone else, ever.

                                                                  Second: our answer to the claim. We don’t believe that there is a case GNOME needs to answer to. We want to show that the use of Shotwell, and free software in general, isn’t affected by this patent.

                                                                  Third: our counterclaim. We want to make sure that this isn’t just dropped when Rothschild realizes we’re going to fight this.

                                                                  So I guess this means that point 2 is achieved. And point 1 might be good too because they got granted a release for all OSI-licensed software (though I guess proprietary software could still be attacked by this patent?). But what about point 3? Did the GNOME Foundation drop the counterclaim at the same time?

                                                                  1. 2

                                                                    I’m wondering the same thing. This has also got me wondering whether, given infinite resources, one can proactively hunt down dodgy patents and get them invalidated, even if the holder isn’t currently using them in any way? My understanding is that you can do this in most jurisdictions, but presumably it’s better to focus resources on patents which are actively being used by trolls, simply because so many (potentially) invalid patents are granted.

                                                                    1. 2

                                                                      Point 3 is covered by Rothschild agreeing to not sue any open source project on their patents.

                                                                      Further, both Rothschild Patent Imaging and Leigh Rothschild are granting a release and covenant to any software that is released under an existing Open Source Initiative approved license (and subsequent versions thereof), including for the entire Rothschild portfolio of patents, to the extent such software forms a material part of the infringement allegation.

                                                                      This is a hard blow.

                                                                      1. 2

                                                                        But that crap is still on for proprietary software. The settlement didn’t invalidate the patent as far as I understand.

                                                                    1. -9

                                                                      Author: Let’s rewrite everything in Nix!

                                                                      Also author: I don’t really understand Nix

                                                                      1. 12

                                                                        Also author: I don’t really understand Nix

                                                                        Perhaps you could highlight the parts of the article which you believe are incorrect or show lack of understanding, so that we can all learn something…

                                                                      1. 9

                                                                        Oh, is this a first “mainstream”/high-profile company using Nix and doing a public “coming out” about this? Or did I miss some earlier ones?

                                                                        1. 7

                                                                          There was the retailer Target, but I wouldn’t say they had a public coming out - https://github.com/target/lorri.

                                                                          1. 3

                                                                            This was my thought as well! Now nix will become a docker! Yay =)

                                                                            1. 18

                                                                              I say this as somebody who loves Nix. But many people will struggle with Nix’s learning curve and hate it. For all Docker’s failings, do not underestimate how much it is loved because you can just stash a bunch of Unix commands in your Dockerfile and that’s mostly it.

                                                                              My worry is Nix getting too much exposure before all the rough edges are smoothed out. Though of course, exposure also brings in new contributors, which may help with that.

                                                                              1. 7

                                                                                So far our tooling takes care of (almost) everything and the magic happens under the hood. The developers haven’t had to interact directly with Nix yet. We’ll see how things change once developers need to write their own configurations/derivations in Nix. The developer behind all this effort has also released a series of videos introducing Nix in case you’re interested! https://www.youtube.com/watch?v=NYyImy-lqaA&list=PLRGI9KQ3_HP_OFRG6R-p4iFgMSK1t5BHs

                                                                                1. 2

                                                                                  The documentation situation is a little frustrating. There’s actually too much documentation. It’s easy to accidentally get obsolete or out-of-date information. For example, the nix installer is totally broken on macOS Catalina because root directories are locked down and it can’t manage /nix the way it wants. There’s a work around, but it’s only documented in the ~2000th comment on a github issue…

                                                                                  Yes, I intend to make a contribution to correct this, but:

                                                                                  1. I have to figure out how to do that for this project, which I’m new to
                                                                                  2. I have to confirm the solution I found is the temporary official one
                                                                                  3. This has been broken for a year and a fix is coming “soon,” so is it even worth documenting the current process?
                                                                                  1. 2
                                                                                    1. 1

                                                                                      This also works! https://dev.to/louy2/installing-nix-on-macos-catalina-2acb

                                                                                      curl -L https://raw.githubusercontent.com/NixOS/nix/d42ae78de9f92d60d0cd3db97216274946e8ba8e/scripts/create-darwin-volume.sh | sh
                                                                                      curl -L https://nixos.org/nix/install | sh

                                                                                      Relying on a script that is no longer on master is definitely not a good idea, but… /shrug

                                                                                  2. 1

                                                                                    I strongly dislike docker containers and the whole ecosystem around dockerhub.

                                                                                    However, the syntax of Dockerfiles is extremely beautiful. Probably my favorite file format ever. It is 100% understandable by anybody who has never heard of docker. Even if you ignored docker entirely, you could still run dockerfiles “by hand” to reproduce the systems they describe. Sadly, the same thing cannot at all be said for nix files, in which quite a lot of implicit things happen, and they are punctuation-ridden for unclear reasons.

                                                                                    EDIT: I would love if a “lite”, less powerful, nix file format existed, where the graph is linearized and it has a dockerfile like feel (maybe using the exact same syntax).

                                                                                    1. 9

                                                                                      As I understand it, the Nix expression language is not an essential part of the Nix ecosystem. As long as you generate valid .drv files, you can theoretically use any language, and still manage your system with Nix. I believe (though I’d love it if someone can confirm) that guix, while using guile scheme for expressions, still outputs the same .drv format[1].

                                                                                      Another example I know of is dhall-nix, which allows you to translate Dhall expressions into Nix expressions[2].

                                                                                      So in theory, as long as your config language of choice can be mapped to the Nix expression language, or a subset of it which you wish to use, you may be able to write a translator which converts it to a Nix expression. Alternatively, you can write a compiler which outputs .drv files directly.

                                                                                      [1] https://guix.gnu.org/manual/en/html_node/Derivations.html

                                                                                      [2] http://www.haskellforall.com/2017/01/typed-nix-programming-using-dhall.html
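
                                                                                      As a toy illustration of that translation idea (this sketch and its names are invented here, not from any real tool): a simple config structure in another language can be serialized into Nix expression syntax mechanically, much as dhall-nix does for Dhall.

                                                                                      ```python
                                                                                      # Toy sketch: serializing a Python value into Nix expression
                                                                                      # syntax. Real translators like dhall-nix handle far more
                                                                                      # (functions, types, escaping edge cases); this only covers
                                                                                      # scalars, lists, and attribute sets.

                                                                                      def to_nix(value):
                                                                                          # bool must be checked before int (bool is a subclass of int)
                                                                                          if isinstance(value, bool):
                                                                                              return "true" if value else "false"
                                                                                          if isinstance(value, (int, float)):
                                                                                              return str(value)
                                                                                          if isinstance(value, str):
                                                                                              return '"%s"' % value.replace('"', '\\"')
                                                                                          if isinstance(value, list):
                                                                                              return "[ " + " ".join(to_nix(v) for v in value) + " ]"
                                                                                          if isinstance(value, dict):
                                                                                              body = " ".join(f"{k} = {to_nix(v)};" for k, v in value.items())
                                                                                              return "{ " + body + " }"
                                                                                          raise TypeError(f"cannot translate {type(value).__name__}")
                                                                                      ```

                                                                                      For example, `to_nix({"name": "hello", "version": 2})` produces the attribute set `{ name = "hello"; version = 2; }`, which is valid Nix.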

                                                                                      1. 1

                                                                                        It really depends what you mean by essential. Without Nix the programming language, Nix is much closer to being yet another hermetic build system.

                                                                                        I try to make that point more elaborately here:


                                                                                      2. 1

                                                                                        Without knowing a lot about nix, having a graph instead of a linear set of instructions makes caching much more powerful. I’ve sat through many painfully long Docker builds because the thing I had to change was coincidentally located early in the Dockerfile. If every single thing in your image is immutable and created in a static order, I would imagine that build times would be completely unmanageable.
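
                                                                                        The difference can be sketched in a few lines (a hypothetical model I’m adding for illustration, with invented names): in a linear build, changing one step dirties everything after it, whereas with a dependency graph only the transitive dependents of the changed step must be rebuilt.

                                                                                        ```python
                                                                                        # Hypothetical model of linear vs. graph-based cache
                                                                                        # invalidation. Dockerfile-style builds behave like the
                                                                                        # linear case; graph-based tools only rebuild dependents.

                                                                                        def linear_invalidated(steps, changed):
                                                                                            """Everything at or after the changed step is rebuilt."""
                                                                                            i = steps.index(changed)
                                                                                            return set(steps[i:])

                                                                                        def graph_invalidated(deps, changed):
                                                                                            """Only the changed node and its transitive dependents
                                                                                            rebuild. `deps` maps each node to the nodes it depends on."""
                                                                                            dirty = {changed}
                                                                                            grew = True
                                                                                            while grew:
                                                                                                grew = False
                                                                                                for node, requires in deps.items():
                                                                                                    if node not in dirty and requires & dirty:
                                                                                                        dirty.add(node)
                                                                                                        grew = True
                                                                                            return dirty
                                                                                        ```

                                                                                        With steps `["base", "libs", "app", "docs"]` where `docs` depends only on `libs`, changing `app` invalidates both `app` and `docs` in the linear model, but only `app` in the graph model.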

                                                                                1. 4

                                                                  Is there some way to make it actually search for what you write? It seems to handle a search for operator< about as well as Google (i.e. treating it as a search for operator).

                                                                                  1. 2

                                                                                    I was wondering this. I suspect that since it depends on bing for the results, it can only deal with what bing considers to be a valid word/token.

                                                                    Sometimes it’s really frustrating trying to find out what some obscure operator does in a language or framework, because you can’t search for something like ?&, and if you try to describe it in words, there are multiple different ways to describe it, many of which won’t appear on pages which document it (e.g. pages will write “The floob operator, ?&, floobs things.”, not “The floob operator, question mark ampersand, floobs things.”). Some dedicated language documentation search engines can handle operator searches, e.g. Haskell’s Hoogle: https://hoogle.haskell.org/?hoogle=<?>