1. Point 3: “It’s a chance to advocate for the kind of work you want to see more of and influence the direction of the important publishing venues in your field. I don’t want to say anything specific, or I’ll break my anonymity, but I believe I really have been able to use my influence as a reviewer to make an impact on what the program looks like at publishing venues I care about, and that’s exciting!”

  AKA: politics.

  1. Using your influence as a reviewer to help recognize good work, and to improve the quality of the papers you review, is not politics; it’s the responsibility of everyone in academia. When choosing a program committee, you hope the members will be people who work to make the program high-quality; otherwise they wouldn’t be doing their jobs.

    1. This is a problem in academia that academics have become blind to. A reviewer’s job is to catch mistakes. Period.

      Not to judge whether the work is “sexy”, “worthwhile”, “path-breaking”, or any number of other buzzwords that serve the same role as “culture fit” in an interview: code words for bias and politics of all sorts. It takes many years to figure out the worth of a piece of work, and it takes many people. Not individual, self-appointed gatekeepers guarding territory, scaring off competitors, or pushing down work they just don’t like the look of.

      Unfortunately, such activities are considered normal and have been given fancy-sounding names.

      This is a terrible state of affairs.

      1. I don’t really agree with that. Good research is not just the research with the fewest errors; otherwise, trivial research restating obvious and correct conclusions would be the best. There is a tendency to drift in that direction, because it’s safe: a lot of pretty boring conferences and journals publish miscellaneous incremental work and restatements of already existing work, take basically no chances on anything else, and use “minimize errors and make sure everything seems as rigorous as possible” as the sole criterion.

        But for me, a reviewer’s primary job is to determine whether the research advances the field. Is the research on-topic for the conference or journal? A conference that is “all research, in every field”, with some people presenting physics papers and other people analyzing Renaissance art, isn’t a useful gathering, so some amount of topicality matters. It’s not always obvious how to draw the boundaries between disciplines, conferences, and journals, so some discussion is inevitable there (and the boundaries probably shift over time as different areas grow and wane). Are the results interesting, and at least potentially useful to anyone? Are the results actually an advance that adds to our knowledge of something? Is the question the research is answering coherent, and ideally one that someone other than the author cares about? This is mainly how science works, not pedantic error-finding in the style of medieval scholastic philosophy.

        Also, and this is where a pretty big part of the arguments comes in: what even is a “research method”, and which ones are valid for your subject matter? Is using an unsound research method an error of the kind a reviewer should reject papers for? How do you determine whether a research method is sound? Different research areas think they understand what constitutes a valid research method… except those understandings are entirely different, because there isn’t actually any consensus on “the scientific method” once you get beyond really high-level textbook presentations of it. So one useful thing you can do as a reviewer is make yourself available to review papers that use methods you have some expertise in, so they get a look from a reviewer who is at least sympathetic to the broad approach. Short of coming up with a Grand Unified Theory of Science, and a set of methods everyone accepts in every field, I don’t see a way around social processes here, since science is ultimately a process of improvising methods, testing them out on various problems, and provisionally getting them accepted within parts of scientific culture.