
To AI Or Not To AI, That Is The Question

A lot can change in no time at all.

By Mekita Rivas (’12)

Matt Waite (right) helping a student with a drone.

That’s what Matt Waite (’97) realized late last year. As the fall 2022 semester neared its end, a little something called ChatGPT burst onto the scene. Seemingly overnight, the artificial intelligence (AI) chatbot became the topic du jour.

Waite, a professor of practice at the College of Journalism and Mass Communications, began experimenting with the tool. As someone who teaches reporting and digital product development, he saw ChatGPT sitting exactly at the nexus of his interests. He was curious about its capabilities: could something like this expand the footprint of a single reporter in an era when fewer and fewer of them exist?

“So, I’m immediately thinking that way,” Waite said. “I went and tried it myself and was blown away.” 

For Waite, it became “pretty obvious right away” that this technology had plenty of possible applications. Some are more glaring than others. “One of them was students could cheat,” he said. “And cheat exactly on the types of tests and prompts I was giving them.” 

After some tinkering, he decided to plug a question from a test he had recently given into ChatGPT. The result? A response written solidly enough that it could very well pass for a student’s genuine work.

“Poof, in a second and a half, a flawless answer to the question was sitting right there,” Waite said. “I realized the whole ballgame had just changed.” 

“There were a lot of jaws that hit the floor and a few professors (during the experiment) that just didn’t believe the tests they give could possibly be beaten by this AI.” —Matt Waite

OpenAI is the Silicon Valley developer behind ChatGPT, which went public on Nov. 30, 2022. In less than a year, it has become synonymous with the dawn of a new age in AI. Though AI tools have been around for quite some time — think of the popularization of voice-powered assistants like Apple’s Siri and Amazon’s Alexa — AI-driven natural language processing tools such as ChatGPT and Google’s Bard have the potential to upend just about everything.

In a January 2023 faculty meeting, Waite demonstrated for his journalism college colleagues exactly what he had done in his test experiment. This time, he instructed ChatGPT to respond to the same question, but within a specific word count of 200 to 250 words.

“There were a lot of jaws that hit the floor,” he said. “And there were a few people in there that just didn’t believe the tests they give could possibly be beaten by this AI.” 

That mixed-bag reaction among faculty speaks to the general uncertainty associated with all things AI, especially in the world of higher education. On the one hand, it could simplify menial tasks. If a journalism student is struggling to write headlines, they could use a tool like ChatGPT or Bard for thought-starters. On the other hand, it could compromise how classes are taught and administered altogether. 

“Some of us with a little more gray in our hair have joked about going back to blue books,” Waite said. “I sort of half-joke with students to knock it off if you’re doing this. Because there’s gonna be a day where you’re going to handwrite a test in a blue book in class with no technology allowed.”

The rate at which AI technology is evolving further complicates the policymaking process. To date, the university has taken a “neutral stance” on the issue, according to Andie Barefield, director of student conduct and community standards. It’s up to each individual faculty member to determine what constitutes academic misconduct and whether or not that includes the use of AI tools like ChatGPT.

And what happens when students must navigate differing views from multiple instructors at a time? “They don’t have a choice to say, ‘Well, I like this faculty’s approach so I’m just going to do whatever they said to do and all my other faculty will just have to adjust,’ ” Barefield said. “A lot of students don’t understand that. That’s not how academia works.” 

Some students know they shouldn’t be using ChatGPT for certain applications, while others are surprised to learn that artificial intelligence powers some of the tools they’re already using. Take, for instance, the cloud-based typing assistant Grammarly, which reviews spelling, grammar, punctuation and common mistakes. 

“A couple of years ago, Grammarly would help you with your tone,” Barefield said. “There wasn’t an AI there. It was really just using its understanding of English to help you reword or rework long sentences, run-on sentences and sentences that weren’t sentences.”

But now it includes artificial intelligence.

“There’s the other camp that thinks this is academic integrity and they shouldn’t use this at all,” Barefield said. “And so, if a student is using Grammarly without the permission or knowledge of their faculty, then are they cheating, per (that) camp’s approach or thought process?”

At present, campus policy encourages faculty to meet with the student before referring an allegation of misconduct to the office of Student Conduct & Community Standards. In that conversation, an instructor might recognize a miscommunication about what kind of AI use is and isn’t allowed.

“It’s, ‘Let’s work on how to fix this. I’m not going to consider that to be academic misconduct. How do I help you educationally?’ ” Barefield said. “And then sometimes it’s, ‘No, it’s very clear what you did was an attempt to cheat.’ ”

Having these conversations is crucial for establishing where educators stand on the issue. If AI isn’t addressed outright, it can breed confusion in the classroom.

“What are the expectations of you using it by your faculty member?” Barefield said. “We see this in computer science classes where faculty has an expectation that you learn to write code, not that you Google how to write code. I think a lot of students think, ‘Well, if I found the answer, does it matter if I copy and pasted it?’ ” 

AI adds a new layer to the age-old conundrum of how to confront and counteract plagiarism in education. And since the tech is evolving at lightning speed, establishing set rules and regulations is easier said than done.

“The advancements we’ve seen in artificial intelligence since December 2022 are almost unprecedented when you think about it in comparison to previous advancements,” Barefield said. “How many programs and applications are looking for how to enhance and utilize artificial intelligence in their work? It’s almost every day that we have to think, ‘Are we sure that’s not AI?’ ” 

In the future, she’d like to plan more proactive interactions with students about AI and academic integrity through in-person visits and information sessions.

“Our hope is we can more visibly go into classrooms and share information directly with students before they are referred to our office,” Barefield said. “There’s this gap where we’re trying to catch up to where that technology is, but also trying to figure out how we incorporate it or not incorporate it into the work they’re doing in college. Our students aren’t utilizing artificial intelligence more than other campuses, but it is definitely advancing at an astronomical rate.”

From the vantage point of Allie Reynolds, a junior sociology major, AI is helpful for certain tasks like finding sources online and kickstarting brainstorming sessions. But as with any platform, it has the potential to be misused.

“AI can be a tool that students use to get through school without retaining or learning any information on their own,” Reynolds said. “It is very easy to get too much information from AI. It ends up doing a majority of the work for some students.” 

Reynolds initially heard about ChatGPT and AI for schoolwork long after her peers. It was around finals last May. At that point, it had become an unavoidable topic on campus. She “completely understood” the sudden popularity of AI among her peers.

“It greatly benefits students in terms of learning in areas they didn’t feel confident with,” Reynolds said. “I knew many people that used AI to help understand materials and obtain further knowledge about the subjects they were learning in class.”

Judging by AI’s growing use in student circles, it’s clear there’s no going back to the way things were. Educators should learn how to navigate its existence instead of hoping it will vanish.

“Honestly, that’s one of the conversations that we’re having: You can’t bury your head in the sand,” Professor Waite said. “That’s not real life. The reality is this tool does exist, and employers are going to have them using large language models to do things.” 

Teaching students how to use AI to make themselves better and improve their work is possible; it will just take the right mix of patience and open-mindedness.

“Learning the technology and learning what they do well and what they don’t do well is part and parcel of that,” Waite said. “I’m trying to be a realist about it. I know students are going to use it.” 

His main focus, then, is to be upfront and talk about AI in the open.

“This is a good use, this is a bad use, this is making yourself a smarter, better student, this is cheating,” Waite said. “And if it makes you a smarter, better student, great. I’m happy — go wild.”

This fall, Waite is teaching a special topics class about AI in journalism. At the time of this interview, he wasn’t sure what the course material would specifically entail. With subject matter that’s shifting by the second, it’s challenging to finalize syllabi.

The overall objective is to experiment with large language models to see if students can produce journalistically relevant material. “One thing I want to do is train an AI voice model to sound exactly like one of our broadcasting professors, Rick Alloway,” Waite said.

What most parties seem to agree on is that ignoring the elephant in the room is not the approach to take. Educators and students alike know AI is there, so pretending to look the other way isn’t fooling anybody.

“This is something that will not go away and professors will not be able to ban AI use,” Reynolds said. “Being open to technological advances and allowing students to use AI in a helpful way is the best approach.”

As with any new technology, there’s the question of who gets to use what. While the basic version of ChatGPT is currently free, that may not always be the case. Similar AI-powered platforms could go the subscription route or increase prices outright. 

“Access to this kind of technology is unequal,” said Nick Monk, director of the Center for Transformative Teaching and courtesy professor of English. “This tends to be available to people who are in positions of privilege. There are always those structural imbalances when it comes to new technology.”

The Center for Transformative Teaching produced a collection of resources to assist faculty as they work to better understand AI tools in the classroom. These resources, available online at the center’s website, include written examples of AI policies for course syllabi, sample AI interactions, classroom implications and more.

“What we need to do is find creative, intelligent and helpful ways to use it to help our students learn better,” Monk said. “Instructors need to change the way they assess student learning. That’s where we are and that’s the advice we’re giving.”

The center encourages faculty to — at a minimum — familiarize themselves with AI in the classroom. Taking that first step is essential for demystifying the tech and dialing back broader concerns.

“For some people, they’re really worried about this and they think that it could literally destroy us,” Monk said. “It could certainly overwhelm higher education. I’ve heard mostly that people are just kind of strapped in and they’re waiting to see what happens next.”