1. 10
  1.  

  2. 10

    Totes in favor of getting scientists to write higher quality, better tested code! In part because I don’t think this is the worst case:

    However, I can tell from personal experience that badly written code tends to break. Break a lot and unexpectedly. […] The code runs without producing an error and the result displayed on your screen is utter nonsense.

    The worst case is that you get a result that is sensible but wrong. Then you’ll use it and get wrong results in your paper. Results like, say, “austerity is good policy.”

    1. 2

      This concerns me quite a bit too. Researchers are pretty good at peer reviewing each other’s experimental designs, methods, and reasoning, but I’ve never heard of peer reviewers scrutinizing code. And as the article points out, a lot of researchers are novice programmers. (I know several who are self-taught; Python and R are favorites.) I’m unsure how much exposure they get to the norms of other development communities.

      What concerns me most is that in order to validate a piece of software (e.g. write tests for it) you need to know what the expected outcome is. If you’re doing scientific modeling, you don’t necessarily know what to expect. That’s why you’re writing the model in the first place! Ideally you validate the model against known data first and squint at it to make sure it’s within bounds, but automated tests that deal with randomized scenarios and still aren’t flaky take skill to write (rough sketch at the end of this comment).

      Maybe we can get some kind of partnership going where researchers teach more experienced programmers about the needs of research computing, and the experienced programmers help with code reviews (primarily looking at correctness issues). And of course you’d want people who are in the intersection of those two groups leading the effort. :-)
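
      Here’s a minimal sketch of that kind of test in Python; the model, seed, and bounds are all made up for illustration. The idea is to fix the random seed so the “randomized” scenario is reproducible, and to assert invariants and statistical bounds instead of exact outputs you can’t know in advance.

      ```python
      import random
      import statistics

      def simulate_decay(n_samples, rate, rng):
          """Toy stochastic model: exponential inter-event times."""
          return [rng.expovariate(rate) for _ in range(n_samples)]

      def test_decay_within_bounds():
          # Fixed seed: the "randomized" scenario is the same every run, so the
          # test can never be flaky.
          rng = random.Random(42)
          samples = simulate_decay(10_000, rate=2.0, rng=rng)
          # Invariants and bounds we know must hold, even without knowing the
          # exact numbers the model should produce:
          assert all(x >= 0 for x in samples)                    # no negative times
          assert abs(statistics.mean(samples) - 1 / 2.0) < 0.05  # mean ~ 1/rate

      test_decay_within_bounds()
      ```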

      1. 4

        I kind of sort of do it for a living, and as Hillel puts it, yes, code that crashes worries me far less than code that gives out the wrong results.

        What I validate is the relationship between the modeler’s intent and his code. The methodological soundness of his intent is not my department. So to test it I use frozen inputs and check the outputs, roughly like the sketch below. Out-of-sample testing of the model itself? That’s done by the lady two cubicles away from mine.
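
        Something like this, say (the file names and the stand-in model are invented, and the real harness is more involved than a toy):

        ```python
        import json
        import math

        def run_model(inputs):
            """Stand-in for the modeler's code under test."""
            return [1.5 * x + 0.1 for x in inputs]

        def test_against_frozen_outputs():
            # Frozen inputs and frozen expected outputs, committed alongside the
            # code. This only catches the code drifting from what it used to
            # produce; whether the model is methodologically sound is not
            # tested here.
            with open("frozen_inputs.json") as f:
                inputs = json.load(f)
            with open("frozen_expected_outputs.json") as f:
                expected = json.load(f)
            outputs = run_model(inputs)
            assert len(outputs) == len(expected)
            for got, want in zip(outputs, expected):
                assert math.isclose(got, want, rel_tol=1e-9)
        ```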

        1. 2

          Can you tell me more about what your position entails? I assume you’re not getting paid to review models in academic publications… maybe working in some research department somewhere?

          1. 2

            My company produces wind power forecasts for prospective and operating wind farms. Meteorologists design the models, and evaluate the models. I just make sure their code matches their intentions.

    2. 3

      Scientists IME tend to be smart and don’t mind grinding away at a problem. This can lead to terrible code.

      When I was working on a scientific codebase, the parts written by scientists were immediately obvious because they were so incredibly complex. Tons of global state, a lot of repetition, functions running 100s (or 1000s) of lines.

      I tend to have to write relatively clean code (IMO) because I’m not that smart and I’m easily annoyed. I can’t keep track of 53 global variables whose state is constantly mutated, and if I have to change something in more than 2 places, it bugs the hell out of me. (Toy contrast of the two styles below.)
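
      The snippets here are made up, not from that codebase, but they show the difference between mutating module-level globals and passing the state around explicitly:

      ```python
      # Style 1: module-level state mutated from anywhere -- hard to follow.
      temperature = 290.0
      pressure = 101.3

      def step_globals():
          global temperature, pressure
          temperature += 0.1
          pressure -= 0.05

      # Style 2: the same update with the state passed in and returned, so
      # every change is visible at the call site and there is one place to edit.
      def step(state):
          return {"temperature": state["temperature"] + 0.1,
                  "pressure": state["pressure"] - 0.05}

      state = {"temperature": 290.0, "pressure": 101.3}
      state = step(state)
      ```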

      1. 3

        It’s not that they’re smarter; it’s that the complexity is a representation of the domain they’re working in, which is at the top of their minds at all times. You, meanwhile, have to dive in, figure things out, fix things, then hop to someone else’s code base representing some other domain.

      2. 2

        I’ve witnessed this a few times as well, but I don’t want to call anyone out on it. Many of the scientists I’ve known have had great ideas, hack out a solution, get the publication and turn it over to the public. Relevant thread on something I’ve been working on to clean up the problem in my space.