Why Higher Education Should Fully Embrace ChatGPT

Carolyn Stoll, Director of Online Instruction, University of Cincinnati


This morning, I read yet another article about ChatGPT. This one, appearing in The Chronicle of Higher Education, begins with the headline, "GPT-4 Can Already Pass Freshman Year at Harvard." The writer of the piece admits that she was at first a proponent of embracing ChatGPT but now feels that university educators must do all they can to prevent its use. To that, I say, "Good luck."

In case you’ve been on an island for the past eight or nine months and aren’t familiar with ChatGPT, GPT-4, or Bard (Google’s version of ChatGPT), these are what are called large language models (LLMs): AI systems capable of generating original text based on patterns learned from vast amounts of content on the Internet. Plagiarism detectors cannot reliably identify text created by LLMs. You can see the problem for Higher Ed immediately, I’m sure.

Since ChatGPT burst on the scene last fall, the list of similar tools has proliferated for students and teachers alike. In addition to generating entire essays, students can use AI to summarize research and help paraphrase text. Instructors can ask AI tools to craft lesson plans and develop presentations. And while there are AI tools out there that can provide answers to students on virtually any subject, the one that’s gotten all the press and the one Higher Ed seems most worried about is ChatGPT.

The panicked reaction is understandable. As the writer of the Chronicle article points out, just about any take-home writing assignment can now be completed entirely by ChatGPT with little to no chance of detection. The implications for online learning are even greater, since students can’t easily be watched while they’re writing to make sure they don’t misuse an AI assistant. There’s no digital equivalent to the “Blue Book.” It doesn’t help that the wider culture is also losing its mind over ChatGPT. People in nearly every industry are expressing anxiety that it will take over their jobs.

But higher education institutions must show intellectual leadership and run counter to the culture by embracing new technology like ChatGPT. Forget for the moment that the AI genie is not going back into the bottle. Higher Ed also has the responsibility to teach students to use new technology ethically and responsibly. AI will be their future. It is our job to prepare them for that future.


For instance, as a writing teacher, I see enormous potential in using ChatGPT to teach composition. I’ve seen writers at all levels, from undergrad to grad, struggle with form, organization, syntax, and smooth integration of outside ideas. Incorporating ChatGPT as a teaching tool might help students overcome these problems. Students can use ChatGPT as a starting point and build their own voice into their text with each revision, documenting the versions of the evolving text as they go. Intimidated beginning writers can gain a lot of confidence this way, allowing their own voices to emerge and be heard while at the same time mastering the fundamental techniques of clear prose.

As a real-life example, the words you are reading now are mine, but the outline of this article began by asking ChatGPT and Google’s Bard to create outlines for a piece arguing for Higher Ed to embrace LLMs. After massaging the prompts for both tools a few times, I settled on ChatGPT as the better option, took the resulting outline, and began to work my voice and ideas into it. Once I was done, my outline bore little resemblance to the AI-generated version, but ChatGPT had given me a place to start and ideas to grapple with and consider, a necessary part of any writing task. What wasn’t necessary was the time to brainstorm all those starting ideas myself.

I see no reason not to teach students to do the same. In fact, I think we owe it to them. We need to be honest with students and admit that ChatGPT has practical uses that save them time and get them from Point A to Point B faster. But they still need to make the trip. We should have real conversations with students and acknowledge that they could let the AI do the work for them, call it a day, and get away with it. But they are doing themselves no favors. In fact, they may find that the same ChatGPT that was their best friend in college will eat their lunch when they graduate. They need to be prepared for a post-college world where AI will radically transform whatever profession they choose and may even eliminate the job they thought they’d have.

We should also be willing to discuss with students the ethical concerns surrounding LLMs. These go beyond the obvious academic integrity concerns to include copyright issues and problems with bias in AI responses. But far from being reasons to avoid these tools, these are reasons to engage with them more. ChatGPT is not inherently biased. It’s just scooping up our content on the Internet and feeding it back to us in boxy, formulaic, but grammatically correct prose. Blaming ChatGPT for bias in its responses is like blaming the mirror for that ugly spot on your dress. The mirror can’t fix the spot; only we can. And we’ll need human voices and the better angels of our nature to fix the ugly spots in AI.

This is something our students will have to tackle, but they can’t if we in Higher Ed treat ChatGPT as an existential enemy. We have a moral imperative to teach our students to meet the challenge and opportunities that AI will present. Like the calculator, computer, and smartphone, ChatGPT, while unique and disruptive now, will eventually settle into being a tool that people use to be more human, to solve problems, to make our lives better. Higher Ed must rise above the frenzied moral panic of now and look to the future promise of then, and help our students find their way there.
