Where is the Human in AI in the College Classroom?

Sandra Guzman Foster, Inaugural Dean of the Center for Teaching and Learning Innovation, Moravian University

In this article, Guzman Foster emphasizes the importance of integrating AI into higher education in ways that preserve and enhance human-centered learning, fostering critical thinking and student agency to prepare learners for ethical and meaningful engagement in a tech-driven world.

Most days bring new headlines about Artificial Intelligence (AI), from debates over ChatGPT in the classroom to predictions about how AI will change the future of work and learning. Let's face it: AI is everywhere, including in higher education. On any given day, you can scroll through a newsfeed and come across articles not only about the impact of AI in higher education classrooms but also about how it is fundamentally changing the higher education landscape. A common question seems to be on everyone's mind: "Where is the human element of learning, the behavior, reflection and metacognition that are fundamental to the learning process?" It is a question many educators continue to grapple with as they worry about losing the human aspects that make education meaningful. The behavioral, reflective and metacognitive dimensions of learning are at the core of how students learn, think and grow. Isn't this what we want for our students at our institutions of higher education?

AI continues to change and grow at a rapid pace. That does not mean we surrender the learning process and rely on AI for everything. Instead, we adapt and reframe how we teach while continuing to uphold academic integrity and student agency. A good starting point is to design effective frameworks that emphasize transparency. This means not only modeling ethical use of AI but also setting clear expectations that students document their use of AI in their assignments and creating guidelines that clarify when it is acceptable to use AI in your classroom. It is also critical to update academic integrity policies as the technology advances.

While transparency is the starting point, educators should also design assignments that go beyond prompting an AI bot for an output. Assignments should push students to think critically, monitor their learning process and reflect on their decisions and outcomes. The goal is to ensure that students do not simply rely on AI outputs but actively engage in problem-solving and critical thinking, preserving student agency and the centrality of human cognition in learning.

For example, educators can design assignments where students critically evaluate multiple sources, identify biases and explain how they arrived at their conclusions. Take it further by asking students to reflect on how their thinking changed from point A to point B to point C and what role AI played in shaping it. These are just two examples of how to keep the human in the learning process.

In an online classroom, educators must rethink discussion boards. Instead of surface-level prompts, ask students to share their thought processes: for example, "How did you come to this conclusion? What challenges did you face?" Additionally, incorporate metacognitive prompts such as "What did you learn about your own thinking while working through your initial post?" These questions create opportunities for deeper reflection and keep the human aspects of the learning process in view. If you require students to complete writing assignments, scaffold the process. Break the assignment into parts (topic, outline, drafts, peer reviews, etc.) so that learning can be tracked over time. Doing so encourages students to stay engaged with their own ideas rather than relying on AI outputs. Furthermore, annotation tools like Hypothes.is provide a platform for collaboration and support self-regulated learning, giving students an opportunity to engage critically and creatively with content.

Having conversations about AI in your class and including statements in your syllabi are also essential. Ask students what they think about AI and how they are using it. Bringing it to the forefront and normalizing talk about AI lets students know that you value their learning practices. As a result, they are more likely to approach AI thoughtfully and ethically because you have brought it into your classroom, a space you both share.

Banning AI is not the answer. In fact, a ban is a disservice to students who will take what they learn in higher education into the real world, including a workforce where AI is already being implemented. It is our responsibility to prepare our students for that world. Rather than producing passive recipients of machine-generated answers, let's make sure our students leave higher education equipped to engage thoughtfully with AI: to think critically, problem-solve, self-regulate and reflect. These skills not only define deep learning but also embody the human thinking that matters so much, especially in today's world. The time is now. We have the opportunity and the responsibility as educators to center humanity in everything we do in our college classrooms. By doing so, we do our best to ensure our graduates leave our institutions ready to use AI ethically and with integrity.
