Enhancing Faculty Feedback with Generative AI for Practical, Intentional Use

Jean Mandernach, Executive Director, Center for Innovation in Research and Teaching, Grand Canyon University

Through this interview, Mandernach highlights how generative AI can enhance the efficiency and quality of faculty feedback without compromising educational integrity. She emphasizes intentional, ethical use of AI that supports rather than replaces the educator's role.

Providing meaningful feedback on student writing is one of the most time-intensive and cognitively demanding aspects of teaching, often cited by faculty as a leading contributor to instructional workload (Carless & Boud, 2018; Hattie & Timperley, 2007). In online education, this burden is even more pronounced—faculty spend approximately 41 percent of their instructional hours on grading and feedback, exceeding the time allocated to content creation or student interaction (Mandernach, Hudson, & Wise, 2013). For a typical online course with just over 20 students, this translates to roughly five hours per week devoted solely to evaluating assignments and composing written feedback (Mandernach & Holbeck, 2016).

As instructional demands intensify, faculty are increasingly exploring tools that can improve feedback efficiency without compromising educational integrity. Generative artificial intelligence (AI) presents a compelling option. When used intentionally, AI can support faculty by streamlining the feedback process while maintaining—and in some cases enhancing—the quality, clarity and consistency of their responses. Ultimately, the goal is to amplify instructional impact, not replace it.

Key Considerations for Using Generative AI in Feedback

While generative AI holds significant promise for improving the efficiency and quality of feedback, its use must be guided by intentionality and a clear commitment to pedagogical integrity. Faculty should view AI not as a substitute for their expertise, but as a tool that enhances their ability to deliver timely, consistent and meaningful feedback. I find that AI-assisted grading systems are most effective when they reinforce the educator’s role rather than diminish it.

“Mandernach finds that AI-assisted grading systems are most effective when they reinforce the educator’s role rather than diminish it.”

This demands that faculty exercise professional judgment—carefully reviewing and refining AI-generated suggestions to ensure they align with course objectives and disciplinary expectations. Transparency is equally important: students should be informed when AI has been used to assist with feedback and reassured that instructors retain full responsibility for evaluation. AI-generated comments must also support specific learning goals rather than rely on generic or vague phrasing that lacks instructional value. Faculty must prioritize data privacy, particularly when using general-purpose tools that may not comply with the Family Educational Rights and Privacy Act (FERPA) or institutional policies, and should take steps such as anonymizing student work when appropriate. Given the potential for bias in AI systems, instructors should remain attentive to how feedback may be perceived by students from diverse backgrounds and revise accordingly to promote fairness and inclusion. In this context, responsible AI use not only supports student learning but also models digital literacy and ethical engagement with emerging technologies.

Practical Strategies for Implementing AI in Feedback Workflows

Building on these principles, faculty can explore a variety of practical strategies for integrating AI into their feedback workflows in ways that uphold both instructional quality and ethical standards. Rather than treating AI as a one-size-fits-all solution, instructors can adopt targeted techniques that complement their existing grading processes—ranging from capturing real-time impressions while reading a paper to composing final comments and refining tone. These strategies can reduce repetitive tasks, increase consistency and promote feedback that is aligned with learning outcomes, fosters student growth and supports inclusive communication.

During the review of student work, AI can streamline the process of capturing initial feedback. One useful approach is to use a voice recording feature to narrate comments while reading. These spoken reactions—whether affirming, critical or inquisitive—can then be transcribed, organized and polished by AI into a cohesive paragraph, allowing faculty to stay focused on content rather than switching between thinking and typing. Alternatively, faculty can highlight specific text and prompt the AI to refine inline comments, such as clarifying why a thesis lacks clarity or softening a critique. A “tag and summarize” technique is another helpful option: instructors add short margin notes like “[unclear logic]” or “[strong evidence]” during review, then instruct the AI to generate a summary comment based on those tags.

When composing formal feedback, AI can assist in aligning comments with rubrics. Faculty can input rubric criteria and brief notes to generate structured feedback that ensures comprehensive coverage across grading categories. Similarly, instructors can provide shorthand observations (e.g., “strong intro, weak evidence, lacks citations”) and prompt the AI to expand these into a three-part structure that outlines strengths, weaknesses and next steps. For batch grading, AI can also generate reusable feedback blocks based on common trends observed across multiple submissions, which instructors can then tailor to individual students.

AI also enhances the quality and tone of feedback. It can serve as a revision tool to adjust phrasing, making critical feedback sound more supportive and student-friendly. Faculty can prompt the AI to identify where comments may seem overly harsh, ambiguous or unintentionally biased and then refine the language accordingly. For multilingual students, AI can translate key feedback into their preferred language while preserving the original for consistency and clarity.

By integrating these methods, faculty can reduce time spent on repetitive tasks, improve the consistency of feedback and increase its clarity and instructional value. These practices not only alleviate workload but also contribute to a more responsive and student-centered learning environment.

Moving Forward with Intention

To move from possibility to practice, faculty must intentionally carve out space to experiment with generative AI in low-stakes contexts—testing tools, refining prompts and evaluating outcomes with a critical eye on both instructional effectiveness and ethical responsibility. The goal is not to adopt AI wholesale, but to integrate it into existing workflows in a way that preserves academic rigor and fosters human connection. Faculty might start by applying AI to a single assignment or feedback task, then adjust their approach based on what proves most useful for their students and discipline. Institutions, in turn, should support this process by offering clear policies, relevant training and collaborative spaces for faculty to share insights and strategies. The value of generative AI is not merely in saving time, but in creating the capacity to reinvest that time into deeper teaching, stronger student relationships and more equitable, impactful learning experiences.
