ChatGPT 101: The risks and rewards of generative AI in the classroom
The rise of generative artificial intelligence tools like ChatGPT is prompting many educators to reimagine the role of technology in the classroom.
At the University of Toronto, Susan McCahan, vice-provost, academic programs and vice-provost, innovations in undergraduate education, has been on the front lines of the response to this fast-evolving technology.
McCahan, a professor of mechanical and industrial engineering in the Faculty of Applied Science & Engineering, says the proliferation of generative AI tools presents both opportunities and challenges for higher education.
Her office is supporting instructors and students as they navigate the technology.
She recently spoke to U of T News about the lessons that have been learned about the academic implications of generative AI and the big questions that still remain.
What are some of the ways generative AI is impacting teaching and learning?
Large language models have significant implications for how we teach coding and writing because they will change the way people code and write, particularly when it comes to routine tasks.
A lot of the writing I do in a day isn't deeply intellectual. It's the kind of writing that LLMs do pretty well. However, they're probably not going to write as well as me when I'm writing an academic paper, because of my knowledge and understanding of the field and my own unique perspective.
Right now, the technology is pretty good at writing at the level of a first-year or second-year student, but it's not up to what would be expected of a student in their third or fourth year.
The biggest challenge is making sure students are still progressing to that third- or fourth-year level if they are taking shortcuts in their first years of university, or even in high school or middle school.
People have compared this to a calculator, but I don't think that's the right analogy because a calculator is a very domain-specific tool and generative AI has much broader applications.
There was an existential crisis in math education in the 1980s when calculators capable of symbolic manipulation came along. Educators questioned whether we should teach students how to do differentials and integrals if these programs can solve those complex equations. Yet, we came through that, and we still teach students how to add and subtract, multiply and divide, do differentials and integrals. We also teach students how to use these symbolic manipulation programs in ways that allow them to go deeper than if they were to do it all by hand.
I think we will come to a point where people recognize when it is useful to use AI and when it is not going to be very helpful. Hopefully, we will arrive in a place where it allows people to advance through the basics faster and move on to more complex writing and coding.
Does U of T consider the use of generative AI tools to be cheating?
We expect students to complete individual assignments on their own. If an instructor decides to explicitly restrict the use of generative AI tools, then their use would be considered an "unauthorized aid" under the University's Code of Behaviour on Academic Matters. This is considered an academic offence and will be treated as such.
Some might ask why we don't classify this as plagiarism. One of the biggest misconceptions that people have is that LLMs take what's on the internet, mash up the text and ideas and repackage it as a compilation. However, that's not how the technology works.
Tools like ChatGPT are trained on large amounts of online materials to identify patterns of speech and make predictions about words most likely to go together. If I say, "one, two, three," it knows that "four" probably comes next. It knows "four" is a noun, but it doesn't associate the concept with a square or the horsemen of the apocalypse.
When you enter a prompt into ChatGPT, it's not combing through information to produce sentences or paragraphs or ideas; it's making word-by-word predictions that imitate patterns of speech around a subject. That's why we don't treat the use of these tools as plagiarism; we treat it as an unauthorized aid.
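The "predict the next word" idea McCahan describes can be sketched with a toy bigram model. This is a deliberate simplification for illustration (real LLMs use neural networks over tokens with long context, not simple word counts), and the tiny corpus here is invented for the example:

```python
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for the web-scale text an LLM is trained on.
corpus = "one two three four one two three four one two three five".split()

# Count which word follows each word. This is a bigram model: the simplest
# possible version of "predicting the word most likely to come next".
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    return following[word].most_common(1)[0][0]

print(predict_next("three"))  # prints "four": the most frequent successor
```

The model "knows" that "four" tends to follow "one two three" only because of frequency in its training text; it has no concept of the number four, which is exactly the point being made above.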
What resources are available to help instructors adapt to this emerging technology? Are there any best practices they should follow?
We've put together an FAQ addressing some of the considerations around generative AI, while providing instructors with resources to help them communicate what technology is, or isn't, allowed in their courses.
I think we're in a moment when it's really important for faculty to be clear on their syllabi about whether they explicitly allow generative AI or explicitly don't. If it is permitted, it should be clear how AI tools can be used, for what assignments and to what degree, and whether students must explain, document or cite what tools they use and how.
This is new, and neither faculty nor students are altogether clear whether this will be the next Wikipedia, where everyone uses it but no one talks about it anymore, or whether it should never be used because it's simply unreliable.
What are some other considerations around the use of generative AI in an academic context?
LLMs often get things wrong, and very confidently wrong. For example, back in January, I asked ChatGPT for my biography. It told me that I had worked at the University of British Columbia and that I was a leading researcher in biomedical engineering: things that seem believable but are factually untrue. The technology has improved since then, but LLMs still get things wrong in ways that are not immediately apparent or obvious. These are called "hallucinations," and they can be so subtle that they're hard to detect unless you really know the subject.
Ultimately, the student is responsible for the material they submit, and if they're submitting material that is factually wrong, they're responsible for it. You can't blame the chatbot, the same way the chatbot can't take credit. It's not like a team project where you're working with another student, and you can say, "It wasn't me, it was my partner." If your partner is AI, you are responsible for all of the work you submit, whether or not there are parts that were co-created with AI.