Questioning the assumption of a linear relationship between (stuff) and evaluation results.
A nonlinear story about this topic. It started with my family getting a dog. Then, while watching people interact with the dog, I started thinking about dogs as a Human Litmus Test. I mean, except for those who have had some dog-related trauma in their lives, only a non-human can resist the happiness power of a dog. I didn’t mean that you could judge how good a person was by how they interacted with a dog, but I thought you could see whether they at least had some hope of being ‘good’. Then, as things often go for me, I thought about teaching and teacher evaluation, and wondered if there was a Teacher Litmus Test, i.e., a simple question or scenario that would tell you whether a teacher was on track to being good (whatever that means) versus needing lots of intervention. Finally, I turned it around to thinking about how we evaluate students and how, whether it is appropriate or not, we treat our assessments as if they measure some sort of linear growth or difference in a student’s abilities. And thus we reach the point of thinking about how our assessments do and can relate to the underlying factor we want to measure.
If you graph the scores you give against some sense of what has been truly learned, then you’d probably get a graph like one of these:
Someone who considers themselves an easy or generous grader might choose the blue line; someone who considers themselves a tough grader might choose the green line. But since we all think we are fair graders, I suspect most would pick the red line. I’ll consider all of these to represent linear grading.
But, there are many things in our classes that really don’t fit this model. How about attendance? How about prerequisite skills? How about extra-credit or remediation options? These probably look more like these graphs:
The pink graph questions whether all measures must have positive derivatives. Purple asks whether something is earned just from signing up for the class. Orange and blue let us go from caring to not caring, or vice versa, per change in learning. Black reminds us that not everything can be measured continuously (or pretended to be).
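To make the shapes concrete, here is a minimal sketch of grading curves as functions mapping "true learning" (scaled 0 to 1) to a score out of 100. The specific function names and curve choices are my own illustrative stand-ins for the graphs described above, not the actual graphs:

```python
# Hypothetical grading curves: each maps learning x in [0, 1] to a score.
import math

def fair(x):        # red line: score tracks learning one-to-one
    return 100 * x

def generous(x):    # blue line: higher scores for the same learning
    return 100 * math.sqrt(x)

def tough(x):       # green line: lower scores for the same learning
    return 100 * x ** 2

def with_floor(x):  # purple: some credit just for signing up
    return 20 + 80 * x

def stepped(x):     # black: only a few discrete grade levels exist
    return 25 * math.floor(4 * x)   # 0, 25, 50, 75, or 100

# A student who has truly learned "half" the material gets very
# different scores depending on which curve we (often tacitly) assume.
for f in (fair, generous, tough, with_floor, stepped):
    print(f"{f.__name__}: {f(0.5):.1f}")
```

The point of the sketch is just that the choice of curve is itself a grading decision, usually made implicitly.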
So, what’s the point? We should think about the underlying assumptions that go into our grading practices and make sure they match the reality of what is going on. We don’t have to stick to linear grading if we think (and know?) that some elements of learning are not linear.