When Monica Linden, a senior lecturer in neuroscience at Brown University, enlisted ChatGPT to write a portion of a class syllabus, part of the appeal was using software to make the writing easier, Linden says.
But Linden hoped her students would read the portion of the syllabus generated by artificial intelligence and notice how it differed from the rest of the text, which she wrote herself.
“I knew [an AI program] would write it in a way that didn’t sound like the rest of my syllabus, so it points out one of the weaknesses of ChatGPT,” Linden said. “It writes in a bit of a stereotypical way … [and] might not sound like your voice.”
The ChatGPT text, for instance, comes across as “a little repetitive and sometimes more broad than I would like,” she told her students in a note following the AI-generated statements. “You can also probably tell that it’s not in the same voice as the rest of the syllabus.”
Then, she emphasizes in bold text, “WORK IN THIS COURSE SHOULD BE IN YOUR VOICE!”
That doesn’t mean Linden is barring her students from using the software. As she also tells students in the syllabus – or rather, as ChatGPT tells them based on her prompts – students “can input a topic or a writing prompt into ChatGPT and use the output to generate ideas and to understand different perspectives on the topic,” provided they cite their use of the AI. They may also use it for proofreading.
Whether instructors love it or hate it, there’s no ignoring AI software. Since ChatGPT launched as a prototype in late 2022, conversations surrounding AI have been nearly ubiquitous across industries. The chatbot-style software attempts to emulate human-created text, with controversial results.
While many fields speculate on how the software could evolve and impact the workplace, those in higher education have another concern: how students are using it, and whether it’s ethical and conducive to learning.
Some educators have begun to make use of GPTZero, software intended to detect ChatGPT usage. And Turnitin.com, a giant in plagiarism-checking software, recently launched its own ChatGPT-detecting software, claiming the program can detect AI usage with 98% accuracy.
But as quickly as this detection software comes out, ChatGPT continues to evolve. In March, OpenAI, the research laboratory that created ChatGPT, launched GPT-4, which it claims can better understand subtlety in language and produce more realistic, complex responses.
While some educators recoil from ChatGPT, others are encouraging their colleagues to embrace it.
Among the ChatGPT proponents is Stephen Atlas, an associate professor of marketing at the University of Rhode Island’s College of Business. Atlas sees so much potential in the software that he co-wrote an open-source book on its ethical usage in the classroom.
His co-author? ChatGPT, as credited in the book’s opening acknowledgments.
The digital book, “ChatGPT for Higher Education and Professional Development: A Guide to Conversational AI,” has been downloaded almost 4,000 times since Atlas published it in early February. He used ChatGPT’s GPT-3 software to generate portions of the text, which he then reviewed and edited into its final, 134-page form.
Atlas says ChatGPT has helped him overcome tasks that had slowed him down for years – particularly, keeping up with his email inbox, which he always found daunting.
“For years in my career, staring at the blank page was a source of stress for me,” Atlas said. “And then emails would pile up as I had to respond.”
When he began to use AI to help draft his emails, with disclosure of this usage included, “I was amazed at how much more fruitful emails would become,” Atlas said. “Technology actually helped me to present as more human in communications,” and for the first time, he completely cleared his inbox.
Since his initial success in using the software for emails, Atlas says he’s also found ChatGPT useful in summarizing ideas and fleshing out prompts into complete concepts. Research from the Massachusetts Institute of Technology supports this thinking, Atlas says, noting a study showing that using the software allowed participants to spend less time drafting and more time generating ideas and revising.
Rather than attempting to ban the software completely, Atlas thinks educators should encourage its ethical use with proper citations.
While Linden isn’t averse to the software, her outlook on its potential isn’t quite as glowing. Some educators worry that the software can perform – or will evolve to perform – basic but essential skills on students’ behalf. If AI eventually becomes capable of computer science coding, for example, students could hypothetically use the software for that purpose and miss out on foundational programming skills.
“If we want students to be able to develop the skill to write their own code, they need ChatGPT to not be able to do that for them,” Linden said.
Ultimately, she maintains an optimistic outlook on students’ intentions.
“I hope it becomes something that helps students grow, rather than impedes their growth because they’re using it for nefarious reasons,” Linden said. “I think that supporting them in learning how to use it effectively is really important.”