April 26, 2024
Interrogating AI
With price tags reaching as high as $100 million, forged paintings can be tough to spot. It often takes forensic analysis to expose a clever fake.
Richard Taylor, a physics professor in the College of Arts and Sciences, has spent the past three years investigating whether artificial intelligence can do it more reliably. His research team fed hundreds of abstract art images into a neural network and discovered AI can uncover fakes with shocking accuracy.
“Our computer can spot a fake far more accurately than a human. Is that a form of artistic appreciation?” Taylor says. “In a way, AI does appreciate an abstract painting. I think it’s fantastic.”
Since the launch of ChatGPT in 2022, public awareness of artificial intelligence has exploded, accelerating the technology’s inevitable creep into everyday life. In the University of Oregon College of Arts and Sciences, where AI has made its way into both classrooms and research labs, faculty members are grappling with its impact on student learning even as they explore its vast potential in their research.
"A lot of people here feel it’s a double-edged sword,” says Ramón Alvarado, assistant professor of philosophy who specializes in data ethics, a key component of the UO's data science major. “AI has the opportunity to be as giving a technology as it is a worrisome technology. We’re a little bit past the hype and the panic, and now we’re thinking about how we can help our students with this.”
Alvarado leads a group of faculty members who meet regularly to explore the implications of AI for teaching and learning in higher education as part of the Communities Accelerating the Impact of Teaching (CAIT) program. He was surprised to discover that despite their initial hesitancy toward AI, 40% of the group’s members were already implementing the technology in their classrooms, from teaching students to code with Microsoft Copilot AI to incorporating generative AI writing assignments into humanities courses.
Yet even for the early adopters, AI continues to raise nearly as many questions as answers, causing many educators to rethink how they approach their assignments, what they’re teaching students, how they assess learning and, in some cases, what the role of their entire academic discipline should be in this new AI-driven world.
"I’m more worried about the apprenticeship part. The steps you take to learn something are what lead to the learning,” Alvarado says.
If a machine can do those steps for you, students may not be getting what they pay for and what we as a society need from them. I’m more worried about how AI might be getting in the way of that.
Learning to cheat or cheating to learn?
In her English classroom, where the plagiarizing potential of generative AI tools like ChatGPT arguably poses one of the greatest threats to learning, Lara Bovilsky decided to confront the issue head-on and explore what students might be missing when they rely on AI instead of doing the work themselves.
After analyzing a six-line speech from Shakespeare’s Macbeth, her students had two different chatbots perform a similar analysis. When Bovilsky asked them to evaluate the AI-generated results, however, most students failed to spot the chatbots’ glaring factual errors. Worse, only one student out of 37 noticed that the chatbots made a series of claims unsupported by any evidence or analysis.
“It was shocking,” says the associate professor of English. “It was very clear that there was nothing ChatGPT could do to help them understand how to make arguments.”
In fact, she found, AI seemed to be turning off the very critical thinking skills students need to use the tool wisely—skills that were already underdeveloped due to learning loss during the COVID pandemic. It also failed to achieve her initial goal of preemptively curbing the use of chatbots like ChatGPT to plagiarize assignments. Although students initially avoided cheating, when assignments got harder, many eventually turned to AI despite knowing she would find them out.
“In STEM fields, these machines are enabling calculations that would once have required massive budgets, and for now they’re free,” Bovilsky says. “But in these more basic areas of education, where we’re training people to be able to think critically and express themselves accurately, it’s harmful. Students are relying on it and not learning the skills they need.”
Part of the challenge with generative AI is that it arrived just as students were beginning to face the true extent of their learning loss during the pandemic—a civic crisis in its own right, Bovilsky says, as some educators struggle to teach a growing number of students who need remedial help.
Under mounting pressure to regain their lost ground, more students may feel driven to the point of desperation, Bovilsky explains. And by placing the means to plagiarize freely at students’ fingertips, generative AI has in many ways “democratized” the way students offshore some of their academic work, Alvarado adds.
The resulting flurry of accusations of AI-related misconduct has raised even more questions about how to effectively address these incidents—even prompting some faculty members to try their hand at designing AI-proof assignments, or to ask students for video reports rather than written papers.
In the Department of Philosophy, for example, at least some professors no longer give out written assignments. Instead, students take a brief oral exam at the end of each class.
“Some of us are rethinking and reassessing our relationship to our discipline and how we teach it,” Alvarado adds.
Preparing students for an AI world
Artificial intelligence is raising similar existential questions among professionals in fields beyond academia, where generative AI is becoming increasingly adept at doing what many people previously thought only humans could do.
In the audiovisual media landscape, AI has generated a new ethical thicket with its ability to create convincing fabricated videos known as “deepfakes,” which were recently used to resurrect a dead dictator in an effort to sway Indonesia’s presidential election.
“With all the talk about the threats of AI, I think we should focus more on AI literacy and helping students use AI in a responsible, ethical and creative way,” says Ari Purnama, assistant professor of cinema studies. “In educating a new generation of future filmmakers and creative professionals, I think we ought to leverage AI in a responsible way to help them be competitive. If you don’t have the ability to use AI responsibly, I think you’re going to be at a disadvantage.”
While the UO strongly encourages instructors to set explicit policies about generative AI in their course syllabi, there’s no one-size-fits-all institutional policy that can cover every classroom—largely because AI affects each discipline differently. Its impact in a humanities classroom, where learning to write is a primary focus, differs greatly from its impact in a science lab, where writing plays a more secondary role, Alvarado says.
“In our CAIT group, we have people who feel students should be encouraged to use AI for guidance, and others who say students absolutely cannot use it,” says Phil Colbert, senior instructor of computer science and director of the computer information technology minor. “It depends on the department.”
According to a February article in the Emerald, the disparity between classroom AI policies across campus can sometimes cause confusion for students, who may go from one class in which AI is banned to another in which its use is encouraged.
English major Elizabeth Burket-Thoene has noticed more professors integrating AI into their assignments this year. While the third-year student is interested in exploring its uses as a tool, she’s also wary of asking chatbots for help on assignments.
“Right now, I feel like I’m a little more careful when asking for ideas and things like that. For me personally, it’s super important to be able to work through things on my own,” she says. “I think it’s definitely something that has to be gone over, class by class, how it can be integrated. I think there won’t be space for it to be completely taken out of instruction.”
Despite concerns about cheating, educators are discovering that the technology can be a powerful equalizer for students who have anxiety, who struggle to articulate their ideas, or who come from different linguistic and cultural backgrounds. AI also has the potential to provide personalized tutoring and other tools to support academic success.
Reaping these benefits requires students to develop AI literacy, however—something they also will need in their future careers as proficiency with AI tools becomes a must-have skill across nearly every industry.
“From a commercial perspective, when students get out in the field, their bosses or clients will expect them to know how to use these tools,” Colbert says. “As an educational institution, we are figuring out how to train students to use these tools—and if they use them, they’re going to use them to their advantage.”
—Nicole Krueger, BA ’99 (journalism), is a communications coordinator for the College of Arts and Sciences.
The Language of AI
Need help understanding AI terminology? Here are some essential definitions from IBM:
Artificial intelligence: Technology that enables computers and machines to simulate human intelligence and problem-solving capabilities.
Machine learning: A branch of artificial intelligence that uses data and algorithms to enable AI to imitate the way humans learn, gradually improving its accuracy.
Natural language processing: A branch of artificial intelligence that enables computers and digital devices to recognize, understand and generate text and speech.
Generative AI: Deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on.
Neural network: A machine learning program that makes decisions in a manner similar to the human brain, mimicking the way biological neurons work together.
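To make that last definition concrete, here is a minimal sketch, in Python with made-up example numbers, of a neural network's basic building block: a single artificial neuron that weighs its inputs, adds a bias and squashes the result into a value between 0 and 1. Real networks chain thousands or millions of these together.

import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias term...
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...squashed to a value between 0 and 1 by a sigmoid activation.
    return 1 / (1 + math.exp(-total))

# Hypothetical inputs and weights; in practice, the weights are learned
# from training data, gradually improving the network's accuracy.
print(neuron([0.5, 0.9], [0.8, -0.3], bias=0.1))  # prints ~0.557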