Grades
I stopped assigning papers longer than a page or two about four semesters ago because I wasn't sure how to engage in the arms race with generative AI. I think writing longer papers is valuable, but I had to figure out what I thought was valuable about it. It certainly wasn't that I needed to read the papers, though I did learn from them, and some were genuinely interesting. I'd already mostly switched from assigning topics that had answers I already knew to assigning topics where I wasn't sure what the answers were, so I didn't even dread reading them the way I did when I had to read multiple versions of the same extended thought.
My goals in a class are to get students to understand the material and to evaluate whether they've understood it, or at least tried. Writing papers was a way of doing both: the writing was itself a way to come to understand the material, and then my reading the paper at the end was a way to evaluate how well they understood.
It was imperfect on both ends. Students who could write clearly and formulaically probably didn't need to develop their understanding much when writing, so writing wasn't helping them understand all that much more than they did already, whereas students who couldn't write well might have been struggling with their writing as much as with their understanding. (The two go together, but maybe not as well as those of us who write regularly think.)
Conversely, students who wrote poorly would get worse grades regardless of how well they understood the material, and students who wrote well would get better grades. I've read many formulaic papers that were easy A's to give even though I didn't think the person had much understanding; there just wasn't a way to justify a worse grade when the assignment was to write a clear paper, which they had done, and not to understand the material well, which they seemed not to have done.
One oddity about philosophy that has become more important to me the longer I teach is that philosophical conclusions and arguments are often extremely simple to present but can take years to understand. I've been explaining Kant's Groundwork of the Metaphysics of Morals every year for a decade, and I understand it better each semester after rereading it, but my presentation of it barely changes, and only at the margins, or maybe in the way I answer an especially perceptive question. Yet the simple paper I would write in response to a prompt on Kant's Groundwork wouldn't be much different now from what it would have been a decade ago, and in both cases it would have earned a solid A-/B+. (I was not one of the students who could write clear papers easily; now I can write either easily or clearly, but still not both, and I'm envious of my students who can, even if I suspect it might keep them from realizing what they don't understand.)
Alongside the clear, easy-A papers, there were always tortured papers that showed, sometimes only on the closest inspection and after disentangling mashed-up sentences, that the person was really trying to understand, even if the paper itself wasn't clear. I was sympathetic to those--I suspect I wrote them for years--but I could never be sure whether I was reading into a paper an understanding that wasn't there. And, sympathetic as I was, I could only give so good a grade to a paper that clearly wasn't good. The assignment, after all, was to write a paper, not to understand. Understanding was my goal, but it wasn't the assignment.
What generative AI has done is make it too easy to write a clear, empty paper, one the person didn't need any understanding to produce. Previously, I might have given that kind of paper a B- because I would have assumed that, while it wasn't specific enough and might even have made some careless mistakes here and there, at least the person had increased their understanding enough to write a readable, if vague, paper on the assigned topic. Now, a readable but vague paper on an assigned topic might deserve a B-, or it might deserve an F and a referral to the honor council for cheating. I don't know, and I don't want to have to adjudicate it, so I can't assign a paper.
Or, rather, I can't assign longish papers for grades, at least not the way I used to. I can still assign papers, but short ones with enough personal engagement that I assume anyone who would rather cheat than write is so uninterested in higher education that I can ignore them as an outlier. If my paper topic is a reflection on Aristotle that asks you to consider whether, on his understanding of the purpose of life, you can live a good life, and the paper isn't so long that you'll have to devote many hours to writing it, then I use my old defaults when reading it. Maybe students will use generative AI, but maybe they'll use other resources too, and it seems most likely that they'll end up understanding the material better however they get there.
I've been thinking about this because I've recently discussed the book Discredited: The UNC Scandal and College Athletics' Amateur Ideal by Andy Thomason a couple of times with student book groups. Even though we're at UNC's rival and the scandal was only a few years ago, my students haven't heard of it. I could write another post on the book, but the core of the scandal was that students were taking courses that weren't "real" courses, courses in which they only had to write a paper; and then they weren't even writing those papers, or, if they were, the papers were being rewritten, edited, or graded so leniently that the writing didn't matter. And the question for my students, at least as it applies to non-athletes, is how things are different here and now, for students generally.