Making the Case for AI-Assisted Grading

Carolyn Stoll, Director of Online Instruction, University of Cincinnati

Among the many polarizing debates surrounding the use of AI in higher education, one issue seems to generate a great deal of controversy: whether to use AI in grading student work. Some educators, me included, have used tools like ChatGPT to augment the grading of student work. However, the response to this practice is often critical, suggesting that using AI to grade is a dereliction of one’s duty as a teacher, a shortcut that speeds up grading at the expense of quality and connection. The reasoning typically follows that students pay tuition to learn from human experts in their field, not from autonomous chatbots.

If the debate were just about replacing human graders with GPT agents, then I would agree that using AI to grade is problematic: lazy at best and academically dishonest at worst. But framing the issue as a binary choice between a human grader and an AI agent ignores the educational benefits AI can offer when thoughtfully integrated into the grading process. The real issue isn’t replacement but augmentation. Can an instructor aided by AI provide better assessment feedback to students? Based on my experience, the answer is yes.

When AI supports rather than replaces human graders, students can benefit in several ways. First, while I still read every word of my students’ work, AI can quickly generate a substantial volume of detailed feedback. High-quality feedback acknowledges strengths and areas for improvement and offers specific, actionable recommendations for how to improve.

For instance, I teach writing, and ChatGPT can swiftly suggest clearer, more concise alternatives to wordy, awkward sentences. I adapt the AI feedback to my own voice and to the instructional focus each student needs, and the result is richer, more targeted feedback than I could provide alone in the same timeframe.

Second, students benefit from the consistency of AI-augmented feedback. Particularly in writing courses, consistency in tone, structure, and thoroughness turns assessment into a learning opportunity for the student, not just a report of how they did. I ask the AI agent to follow specific formats and standards consistently across all assignments, so grading becomes more predictable and equitable for all students.
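As an illustration only, and not my actual workflow, one way to hold the format and standards steady is to reuse a fixed instruction block with every anonymized submission. In the hypothetical Python sketch below, the rubric categories and point values are invented for the example.

```python
# Hypothetical reusable instruction block shared with the AI tool alongside each
# anonymized submission, so every student receives feedback in the same format.
GRADING_PROMPT = """\
You are assisting a writing instructor. Evaluate the essay below strictly
against this rubric and nothing else:
  1. Thesis clarity (0-5)
  2. Organization and paragraph structure (0-5)
  3. Sentence-level clarity and concision (0-5)

For each criterion, note one strength, one area for improvement, and one
specific, actionable revision suggestion. Keep the tone encouraging.

ESSAY:
{essay_text}
"""

def build_prompt(essay_text: str) -> str:
    """Insert a single anonymized essay into the fixed template."""
    return GRADING_PROMPT.format(essay_text=essay_text)
```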

Finally, partnering a human grader with an AI agent can increase objectivity and fairness. The AI agent evaluates student work strictly according to predefined criteria, unaffected by human biases. It doesn’t get cranky or tired, nor does it get irritated by that student who has been chronically late on assignments or snarky in their emails to me. It just applies the criteria to the work I feed into it and responds as directed in the prompt.

This is not to say that AI-assisted grading comes with no concerns. Students have a right to privacy and to control over their own intellectual property, for instance. However, universities routinely use third-party tools as part of assessment, from plagiarism checkers to publisher-provided quizzes integrated into learning management systems. The privacy and intellectual property concerns posed by AI grading are therefore not entirely new or unique, and they can be mitigated just as we mitigate concerns with other tools. The key is to take measures to protect students, including:

• Anonymizing student work before it is fed into the AI agent (a simple scripted approach is sketched after this list)

• Employing internally hosted or privacy-focused LLMs

• Transparently communicating with students about using AI agents to grade

• Allowing students to opt out of AI-generated feedback
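On the first point, anonymization can be as simple as stripping identifiers before any text leaves the instructor’s machine. The following is a minimal sketch, assuming submissions arrive with the student’s name and ID on file; the name and ID shown are invented for the example.

```python
import re

def anonymize_submission(text: str, student_name: str, student_id: str) -> str:
    """Replace a student's name and ID number with neutral placeholders
    before the text is shared with an external AI tool."""
    # Remove the student's name wherever it appears, ignoring case.
    text = re.sub(re.escape(student_name), "[STUDENT]", text, flags=re.IGNORECASE)
    # Remove the student ID wherever it appears verbatim.
    text = text.replace(student_id, "[ID]")
    return text

# The anonymized text, not the original, is what goes to the AI grading assistant.
# "Jane Doe" and "M12345678" are placeholder values for illustration.
sample = "Jane Doe (M12345678) argues in her opening paragraph that..."
print(anonymize_submission(sample, "Jane Doe", "M12345678"))
# -> "[STUDENT] ([ID]) argues in her opening paragraph that..."
```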

The debate about grading with AI should not revolve around choosing between only human graders or only AI graders. Rather, the focus should be on enhancing feedback quality to maximize student learning outcomes.

Properly implemented, AI-assisted grading does not necessarily speed up grading; it improves it. Thoughtful integration, careful risk management, and transparency represent a balanced middle path toward the goal of better student learning.
