As it is the season for grading, it is hardly a surprise that there have been a lot of posts about grading lately (check out our previous post, Faculty Focus, College Ready Writing, and Hack [Higher] Education). One of the big issues that comes up time and time again is efficiency. How do we make grading as efficient as possible? Grading takes time, especially if you are leaving feedback for your students. Being efficient is also, sometimes, about being consistent. It would be most efficient if we could be consistent and provide the same (appropriate) feedback for the same error at the same level of paper. If two students make the same error, they should receive the same feedback. While a large stamp is tempting sometimes, most of us still find ourselves writing out comments (or typing them, for those who have moved to electronic grading). Is that really efficient?
There have been many solutions proposed. One is to use robot graders, even for essays. As an instructor, I find I’m a bit torn, like Lee Bessette of College Ready Writing. Would it help us be more consistent and efficient? Yes, some answers are right or wrong. But what about the subjective answers? Or the times when I flex my grading a bit: one answer is almost right but not quite, so it does not get full marks, while the next is also almost there and I give it full marks to balance out the previous one. Would it be better to never give students any benefit of the doubt and mark strictly according to what is there, the way a robot would? Essay grading is a whole other issue. Yes, it is subjective, but personally I feel communication as a whole is subjective, and there is no way to fully quantify it.
Another suggestion comes from Faculty Focus, which refers to an article by A. J. Czaplewski proposing a pre-prepared electronic list of comments that can then be applied to any paper in whatever combination is appropriate. This reminds me vaguely of an unpleasant experience I had as a teaching assistant (one of many for an online course): being told we were spending too much time grading weekly assignments and should just copy and paste responses from a pre-prepared list. In this case, the proposed solution is significantly more advanced: a growing, comprehensive list of comments for feedback given to students repeatedly, which can then be personalized. I could see it being very worthwhile once the initial time was put in to develop the process.
There are similar alternatives available in electronic grading software. One example with which I am somewhat familiar is the GradeMark feature of Turnitin.com. It provides pre-prepared error comments that are editable and allows you to save new comments for repeated use. This would definitely save some time and help standardize grading to some degree.
And yet. If the overall goal is to help students learn, are any of these strategies really supporting student learning, or are they just based on the expectation that many students do not bother to read comments as carefully as they should (or at all)? How can we be more efficient without losing the entire point of providing appropriate feedback and grading? The point is still to help students learn. (And when it isn’t, it should be!)
So what do you think? How do you come down on making grading more efficient while still benefitting the student?