No, ChatGPT is not a disaster for higher education.

March 20, 2023

Three UM-Dearborn experts talk about the much-hyped new AI tool, the potential for cheating and why ChatGPT could be a valuable resource for educators and students.

In a dimly lit room, a person types on a laptop while using the ChatGPT website.
Credit: Rokas via Adobe Stock

Predictions about the impact of ChatGPT, OpenAI’s new chatbot that can respond to text-based prompts with humanlike responses on just about any topic, range from apocalyptic to optimistic. Among the dark and gloomy forecasts, you’ll find interesting reads about how the AI-based technology could kill white collar jobs, replace journalists and even try to break up your relationship. On the other hand, many are excited about the potential for this technology to revolutionize (or replace) search, make programmers more productive and help folks who are English language learners. 

Inside the universe of higher ed, folks are also reckoning with the new technology’s pros and cons. Because ChatGPT is so good at generating coherent text, some fear cheating could go into overdrive. Indeed, there are already reports of students using it to generate text for entire essays, though proving it can be difficult. Meanwhile, curious faculty are already wondering whether embracing ChatGPT for educational purposes might be more realistic — and useful — than fighting it. 

ChatGPT's potential benefits for students and faculty

To get some perspectives on ChatGPT, we talked with three UM-Dearborn experts who’ve been closely following and using the technology. Electrical and Computer Engineering Professor Hafiz Malik, an expert in artificial intelligence, says ChatGPT actually represents a major upgrade of a not-so-new idea. Chatbots have been around for years, of course, especially in customer service — as have people’s frustrations with them. “Chatbots usually get pretty annoying because you quickly run up against the limited amount of knowledge they’re trained to have conversations about,” Malik says. “The difference with ChatGPT is that it’s been trained on a huge amount of text-based data. Basically a lot of what’s digital, it’s read, so it has knowledge of a vast array of topics and can give you answers that are maybe not always perfect, but are damn good.” For example, when Malik asked ChatGPT to explain the concept of flat-magnitude response, a topic he covers in one of his courses, the explanation was on par with what you’d find in a textbook. For this reason, Malik sees “tremendous” potential for students to use ChatGPT as a quick-reference “conceptual dictionary,” especially for smaller technical topics. That could save them time compared to internet searches or books, making learning more efficient.

Malik’s colleague Professor Paul Watta is already dipping his toe in the water with another touted use case for ChatGPT. For about a year, Watta has been using a related OpenAI product called GitHub Copilot, which suggests complete programming functions based on just the first line of code a user types, similar to how predictive text works on your phone or word processing app. Because ChatGPT has digested GitHub’s massive library of code, it can function like Copilot, though with the advantage of natural-language prompts. In fact, when Watta noticed some ChatGPT-generated code contained an error, he simply asked it to fix its mistake — which it promptly did. Impressed, Watta gave the senior-level students in his mobile devices course an assignment: Design a simple mobile app that interacts with a web-based API and see how much of the work you can get ChatGPT to do for you. “Some aspects of programming are just plain tedious and thankless work,” Watta says. “For example, when working with a web-based API, you’ll have to read the documentation to see how the data is organized, figure out how to formulate the proper HTTP request, and then parse the response to get the desired information — which is all you really cared about in the first place! So if ChatGPT can help with that, it’s going to free up programmers to work on the more interesting and advanced aspects of software development, like creating an effective user interface or building something that interacts with five different web APIs. It could make programmers a lot more productive.”
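The tedious API plumbing Watta describes (formulating the proper HTTP request, then parsing the response for the one value you actually care about) might look like the minimal Python sketch below. The endpoint, query parameters and JSON shape here are hypothetical stand-ins for illustration, not a real service:

```python
import json
from urllib.parse import urlencode

# Hypothetical weather API endpoint -- a stand-in, not a real service
BASE_URL = "https://api.example.com/v1/weather"

def build_request_url(city: str, units: str = "metric") -> str:
    """Formulate the HTTP GET URL the API's documentation says it expects."""
    return f"{BASE_URL}?{urlencode({'q': city, 'units': units})}"

def parse_temperature(response_body: str) -> float:
    """Parse the raw JSON response down to the one value we wanted."""
    data = json.loads(response_body)
    return data["main"]["temp"]

# A canned response string, standing in for a real network call
sample = '{"main": {"temp": 21.5, "humidity": 40}, "name": "Dearborn"}'
print(build_request_url("Dearborn"))  # q and units encoded into the URL
print(parse_temperature(sample))      # 21.5
```

In a real app the canned string would be replaced by an actual network call, but this is exactly the kind of documentation-driven boilerplate a tool like Copilot or ChatGPT can draft, leaving the programmer to focus on what the app does with the result.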

Watta notes that this is an assignment he can give his more advanced students because they’ve already learned to code. For more novice students, he says ChatGPT could be a little more problematic. For example, for the final project in one of his introductory programming courses, students use a fairly obscure coding platform called openFrameworks. “I doubt even most of the faculty in my department know what openFrameworks is,” Watta says. “But I asked ChatGPT and it knows what openFrameworks is. And, when prompted, it started spitting out code! For a beginning programming class, it is entirely possible that students could try to use this as a shortcut to generate code that they do not truly understand.”

Cheating and privacy concerns

Without question, cheating via ChatGPT is a big deal, and not just within the realm of coding. In fact, because ChatGPT is so good at creating coherent long-form or short-form text, disciplines that rely heavily on writing for assessments could face the biggest challenges. So far, UM-Dearborn faculty don’t seem to be panicking, says Autumm Caines, an instructional designer at the Hub for Teaching and Learning Resources who’s been following the generative AI phenomenon for a couple of years and is helping faculty concerned about ChatGPT. However, she says the cheating risks could change how we think about assessments. “For a long time, in higher education as a whole, we’ve allowed writing to be a proxy for learning. I assume I know what’s going on inside your head if you wrote it down in a paper,” Caines says. “But if now we’re entering the realm of generated text, we may have to think about multimodal assessments — so not just writing, but doing a project, or a video or audio interview, or an oral exam.” In fact, Caines notes that UM-Dearborn’s recent investment in practice-based learning puts it in a good position to weather the ChatGPT storm. When the nature of a course is to create and record an original podcast or design and build educational toys for kids, there’s only so much a chatbot can do for you.

Alongside cheating, privacy ranks among Caines’ other big concerns. Like any deep learning-based artificial intelligence, ChatGPT evolves by being exposed to more data, and every time you enter a new prompt, you’re feeding it. This has already caused issues for big technology companies, which have noticed bits of their proprietary code showing up in ChatGPT responses, presumably because employees were using it to help them write code. Caines says students — and professors asking their students to use ChatGPT for assignments — should be aware that “this thing is not your friend.” “Don’t ask it medical questions, don’t share anything too personal,” Caines says. “If you read the terms of service, OpenAI is very clear they’re going to use your queries to train the tool, that they are collecting personal data, and that they can sell your personal data to third parties. We simply don’t know what they’ll do with it, but there is always a huge profit incentive in selling your data.” Caines suggests instructors brief students on the privacy concerns if they’re going to assign work that uses ChatGPT. Setting up “burner” email accounts for use as login credentials can also give students and faculty an added layer of protection.

ChatGPT is still far from perfect

Whether one’s intentions are good or bad, it’s also important to remember ChatGPT is a work in progress. Its text outputs tend to be boilerplate. It’s not always accurate. Sometimes it repeats conspiracy theories or lies outright. Malik says ChatGPT may be trained on a huge pile of data, but it has no sense of what’s true, so you have to take what it writes with a grain of salt. Moreover, for students looking to cheat, he says algorithm-generated content usually carries artifacts that make it relatively straightforward to distinguish text written by a human from text produced by an algorithm mimicking one. Indeed, tools used to detect ChatGPT-generated text have proliferated since the software’s release, and Malik, who’s an expert in detecting deepfakes, seems optimistic about using machines to fight machines. Caines notes, however, that at least some of the available tools can be easily fooled. Since it’s still early days, she cautions faculty against using these tools to render an absolute verdict on whether a student is cheating.

Even with these present shortcomings, Watta sees the arrival of ChatGPT as foreshadowing. In his own department, he could see a day when instructors like him aren’t spending so much time teaching students how to write code, but how to read, interpret and fix code, leaving the more tedious work to the machines. “In fact, this is something we’ve been talking about at the university for quite a while,” Watta says. “Our former CECS dean, Tony England, and our current dean, Ghassan Kridli, have been saying that the future of education will involve students having these AI-assist tutors. So we’ve expected this may be coming. Now, it’s just coming to pass.”


Story by Lou Blouin