  1.

    I read this a while back and liked it, but I’m not sure I’d use it as a teacher. I was trying to think of an alternative model that doesn’t also test the students’ understanding of the grading system. Although the model presented here is nice for students in a decision-making class, is it the best model for an art history class? Should I really fail somebody for getting a 100%-confidence answer about Picasso wrong? It seems more likely that the student failed to understand the test criteria than the subject matter.

    As an alternative, I thought of weighting answers by how many other students get them right or wrong. Each correct answer is worth the percentage of the class that got it wrong, and each incorrect answer is worth minus the percentage that got it right.

    So, if the class answers question X 90% correctly and 10% incorrectly, we determine it’s easy. Get it right, earn 10 points. Get it wrong, lose 90. Getting an easy question right doesn’t prove much, but getting it wrong does.

    Or if only 20% of the class gets question Y right, that’s a hard question. You deserve 80 points for getting it right.

    If a question is left blank, we’ll count it as wrong for the purpose of determining its value, but we’ll neither add points to nor subtract points from the student’s score.
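    A quick sketch of that scoring rule in Python (the names and data layout are my own, purely illustrative):

    ```python
    from typing import Mapping, Optional, Sequence

    Answers = Mapping[str, Optional[str]]  # question id -> answer, None = blank

    def fraction_right(responses: Sequence[Answers],
                       key: Mapping[str, str]) -> dict:
        """Fraction of the class that got each question right.
        A blank counts as wrong here, so it still feeds into the weight."""
        n = len(responses)
        return {q: sum(1 for r in responses if r.get(q) == correct) / n
                for q, correct in key.items()}

    def score(student: Answers, key: Mapping[str, str],
              frac: Mapping[str, float]) -> float:
        """Correct answers earn the % of the class that missed the question;
        wrong answers lose the % that got it right; blanks earn and lose nothing."""
        total = 0.0
        for q, correct in key.items():
            answer = student.get(q)
            if answer is None:
                continue                            # blank: no points either way
            elif answer == correct:
                total += 100 * (1 - frac[q])        # hard questions pay more
            else:
                total -= 100 * frac[q]              # easy misses cost more
        return total

    key = {"X": "a", "Y": "c"}
    responses = [{"X": "a", "Y": "b"}] * 9 + [{"X": "b", "Y": "c"}]
    frac = fraction_right(responses, key)           # {"X": 0.9, "Y": 0.1}
    print(score({"X": "a", "Y": None}, key, frac))  # 10.0: +10 for easy X, blank Y ignored
    ```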

    1.

      > Or if only 20% of the class gets question Y right, that’s a hard question. You deserve 80 points for getting it right.

      Not sure this is necessarily correct. Sometimes it just means the question was poorly worded, unclear, or covered an obscure part of the course material. While that does make the question difficult, I’m not sure it’s the sort of difficulty that should be rewarded with lots of points.

      1.

        Isn’t that what we’re looking for? Students who know the obscure material (and possibly those who can decode poorly phrased questions) are likely the ones who best understand the material overall. In isolation, any one question can be guessed, but in aggregate I think this is a good measure of who knows the material. It doesn’t assign partial credit on a per-question basis, but by dynamically weighting questions it expands the scoring range. In particular, it differentiates the students who score 95/100 from one another.
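        To illustrate with made-up numbers: suppose two students each miss 5 of 100 questions. If Student A’s misses are all easy questions that 90% of the class got right, A loses 5 × 90 = 450 points, while if Student B’s misses are all hard questions that only 20% got right, B loses just 5 × 20 = 100. A flat scale calls both students 95/100; the weighting ranks B well above A.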

        The problem with the originally proposed method is that it doesn’t really measure knowledge; it measures one’s ability to accurately estimate one’s own confidence in that knowledge. That is certainly a useful skill, but it’s beyond the scope of many classes. I’m not looking forward to explaining the rules to a class of second graders.

        There’s a lot of philosophy here, but I don’t like testing techniques where devising an answering strategy can be more rewarding than studying the material. As the optimal strategy deviates further from “answer every question to the best of your ability,” I think the test becomes increasingly unfair. Having contestants wager before answering a question adds excitement and strategy to Jeopardy, but I don’t think it’s a good teaching tool.

    2.

      I’m in favor of alternative education systems that aren’t focused on ranking students against each other.

      However well-intentioned, this idea seems to demand that students spend even more time and energy being ranked than they do under the status quo. I therefore dislike it. :)