Discussion
Are you feeling concerned about AI? Are you excited about the possibilities? How do you see AI impacting your teaching and learning practice?
What Is Artificial Intelligence?
“Artificial intelligence” is a catch-all term for a wide range of machine learning technologies that use large data sets – collections of information – to make predictions or draw conclusions.
AI can be classified as Narrow, General, or Super. You may also hear the term “weak AI” used for Narrow AI, which refers to AI designed to complete a specific, limited task. All AI we can currently access is Narrow, or weak. General and Super AI, sometimes grouped together as “strong AI,” are still theoretical: General AI would be comparable to human intelligence as we know it, while Super AI would surpass it. Weak AI is not conscious, self-aware, or sentient, and it does not understand tone or feeling (though we are quite good at making tools like Siri seem as if they do).
“Generative AI” is a class of tools in which the AI doesn’t make decisions or predictions but instead appears to create – or generate! – something: an image, a paragraph, a video, or a sound file.
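If you would like to see that distinction concretely, here is a minimal illustrative sketch in Python using the open-source Hugging Face transformers library (our own example, not part of this resource): the first model predicts a label for an existing input, while the second generates new text from a prompt.

```python
from transformers import pipeline

# Predictive (narrow) AI: assigns an input to a known category.
classifier = pipeline("sentiment-analysis")
print(classifier("I really enjoyed this week's seminar."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]

# Generative AI: produces new text continuing a prompt.
generator = pipeline("text-generation", model="gpt2")
print(generator("Artificial intelligence in education is", max_new_tokens=25))
```

Both are still Narrow AI: each model does one specific task, and neither understands what it classifies or writes.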
Can I use Generative AI tools like ChatGPT in my teaching and research?
Disciplinary conventions are still evolving, so it is important to consult the professional guidelines in your field on the use of AI. AI should not be credited as a co-author or contributor, because AI cannot take responsibility or be held accountable for the work produced. It is best practice to disclose where you have used AI: this might be appropriate in an introduction, methodology, or acknowledgments section, or in discussion with your students in a teaching context. This editorial from Nature offers useful guidance on AI authorship and disclosure.
Is it cheating if my students use Generative AI?
This will depend on the parameters of the assignment and the learning objectives of the course. If outlining an essay is a key skill being evaluated, for example, it would be inappropriate for a student to create that outline using Generative AI. However, if outlining is not something taught or evaluated, but presents a barrier to a learner, Generative AI may be assistive. Likewise, if a student has been asked to draw an image themselves, using Generative AI would be inappropriate; but a student might reasonably choose to use Generative AI to create images to illustrate a slide deck or report.
It is important to discuss the potential uses of Generative AI with your students and to establish where its use is acceptable or unacceptable. A blanket rule that covers every assignment and context is not realistic.
What can I do to encourage students not to use Generative AI in my courses?
Some instructors are concerned that students will use Generative AI to write essays or other written assessments for them. There are several things we can do to design writing assessments that are more resistant to AI:
- Evaluate students on process, not only on the final product. For example, you might collect essay outlines or research proposals for evaluation and place less weight on the final essay. Meeting with students to discuss their writing process is also helpful.
- Include components of self-reflection, including reflection on prior learning or the student’s own life or work contexts, in assessments.
- Consider why you are assigning essays and whether the essay as an evaluative form reflects the learning objectives in your class. Could you explore project-based learning or the “unessay” instead?
It’s also important to keep open lines of communication with students about these tools. Consider exploring the limitations of Generative AI with your students by, for example, asking it to create a bibliography for an assignment and then checking whether the sources it provides are reliable (or even real!).
Is there a technology that can “catch” the use of Generative AI? I’ve seen advertisements or been approached by a vendor.
In short: no. The existing tools for detecting AI have very high false positive rates and have undergone little independent testing. Given the speed at which AI develops and changes, seeking a technological solution means entering an arms race we cannot win. A better approach is to revise our pedagogies with strategies, including some of those above, that make for more meaningful learning anyway.
Are there ethical issues to consider beyond academic integrity?
- Copyright and intellectual property: the materials used to create the data sets are largely taken without permission or informed consent, and it has not yet been legally determined who owns AI-generated outputs. Consider how using this tool might complicate our understanding of academic integrity and what it means to “do your own work.”
- Labour issues: like many technological tools we rely on, ChatGPT is made usable through underpaid and traumatizing labour in the Global South. Consider how using this tool might trouble our collective values relating to equity, diversity, and inclusion (EDI) and decolonial principles.
- Discrimination: because AI data sets come from our real world, with all its inherent racism, ableism, sexism, and so on, AI tools can also generate discriminatory outcomes. Consider how using this tool might trouble our understanding of equitable inclusion.
- Climate change: the race to develop increasingly sophisticated Generative AI is not carbon neutral. Consider how using this tool might trouble our sustainability values.
For more discussion about ethical issues and Generative AI, please see the TRU Digital Detox 2023 on AI and education.
Reflection
We’ll take five minutes together for the following exercise. Think about how you will talk about artificial intelligence with your students and whether or not you feel engaging with AI is appropriate for your discipline. Jot down your thoughts.