That’s why it’s important to integrate these tools into curricula. Student phones, tablets and laptops are flooded with tech hype. Where better for students to gain insights into how these technologies work and their potential impacts on society?
“Just say no” (also known as “do not use this tool in this class”) is not going to work here any better than it works with calls for abstinence from booze, sex or video games. Moreover, that tired admonition paints the writer (or teacher) with a brush from the “clueless” palette.
First, some definitions
In common parlance, “technology” has become a shorthand for “computerized stuff.”
However, technology is the practical application of knowledge. Products of technology range from the basic (such as the wheel and the lead pencil) to the exotic (such as the Webb telescope and personalized cancer treatments).
The term “artificial intelligence” (shorthand, AI) entered the culture in the mid-20th century. The British mathematician Alan Turing investigated the concept of machine intelligence in his 1950 paper, “Computing Machinery and Intelligence.” He “believed that a computer could be described as intelligent if it can mimic human responses under specific conditions.”
Aside: science fiction short stories, novels and movies took the idea of AI and ran wild.
“Machine learning” is a process: computer scientists use large data sets to train algorithms (computer code) to do a specific thing. For example, the next-word suggestion you see while typing a text or a search query on your device (autocomplete) is a product of machine learning.
If you’ve asked Siri or Alexa a question, or accepted a suggested photo tag on Facebook, you have employed an algorithmic tool.
Tools that employ algorithms like these rely on extremely large datasets to determine the probability that word x, y or z might follow word (or words) a.
Technologists call these large language models (LLMs).
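To make the “probability of the next word” idea concrete, here is a toy sketch in Python: a bigram model trained on a ten-word corpus. It is a drastic simplification (real LLMs use neural networks trained on billions of words), but the core move is the same: count what tends to follow what, then suggest the most probable continuation.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the enormous datasets an LLM is trained on.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each preceding word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def suggest(word, k=2):
    """Return the k most probable next words after `word`."""
    counts = following[word]
    total = sum(counts.values())
    return [(nxt, n / total) for nxt, n in counts.most_common(k)]

print(suggest("the"))  # "cat" follows "the" two times out of three
```

Autocomplete on your phone is, at heart, this table lookup scaled up by many orders of magnitude.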
Next, generative AI
Generative algorithmic tools (“AI”) respond to a user prompt by creating realistic artifacts (which is why they are called disruptive technologies). The response might be text (ChatGPT), an image (MidJourney) or a video (Visla).
Here’s an example of a one-minute AI-generated video.
Prompt: Create a video for undergraduate students that explains generative AI (text, still image, video, audio) and offers both cautions and suggested ways to use ethically.
How might we enhance student critical thinking about these technologies? It should go without saying that we experiment with the tools ourselves. Right?
1. Provide students with examples of deepfakes.
I am a product of both the humanities and the sciences. When it comes to this topic, I believe all teachers should start by asking how technology affects society.
Once upon a time, we described manipulated images as being “Photoshopped.” Now we have deepfakes, which are false images, audio or video created with the intent to deceive.
Generative AI is faster and potentially more harmful than Photoshopping. The false information may appear to be official or to come from a trusted news source. Case in point: a 2022 deepfake video appeared to show Ukrainian president Volodymyr Zelensky calling on his troops to surrender.
A spring meme: Pope Francis in a puffer coat.
2. Demonstrate how ChatGPT “hallucinates”.
An LLM like ChatGPT often makes stuff up; these fabrications have been dubbed “hallucinations.”
For example, University of Oxford scholar Stephanie Lin recently researched LLM answers to 817 questions “that some humans would answer falsely due to a false belief or misconception.” She reported the results at the 2022 computational linguistics conference. The best LLM scored 58% (an F); humans, 94% (B+ or A-). Subjects included finance, health and politics.
Notably: “the largest models were generally the least truthful.”
In June, the Internet went a little berserk when a judge in Manhattan admonished a lawyer for citing non-existent cases in a legal brief.
“I did not comprehend that ChatGPT could fabricate cases… I heard about this new site, which I falsely assumed was, like, a super search engine.”
The director of the internet ethics program at Santa Clara University, Irina Raicu, told the New York Times: “the vast majority of people who are playing with [tools like ChatGPT] and using them don’t really understand what they are and how they work, and in particular what their limitations are.”
Students should leave our classes with an understanding of tool limitations.
One way to teach that lesson about hallucinations is to assign students a general research topic. Have them customize it to their personal or professional interests. If you don’t want to force students to sign up for ChatGPT, create a few small groups. Have them collaborate on a 100-200 word query (asking for sources) which they turn in.
Then you (or your grad students) run those queries through ChatGPT. Provide the responses to each group in the next class. Have them vet (or attempt to vet) the sources.
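If the vetting step feels daunting, a small helper script can give each group a head start. This is an illustrative sketch only: the `extract_sources` function and its regexes are my own invention, and they catch only the easiest “sources” to spot (DOIs and URLs), leaving book and article titles for human eyes.

```python
import re

# Illustrative patterns: DOIs and URLs are the easiest citations to spot
# mechanically; titles and author names still need manual checking.
DOI = re.compile(r"\b10\.\d{4,9}/[^\s\",<>]+")
URL = re.compile(r"https?://[^\s\"<>)]+")

def extract_sources(response: str) -> dict:
    """Collect citation-like strings from an LLM response for manual vetting."""
    return {
        "dois": DOI.findall(response),
        "urls": URL.findall(response),
    }

sample = ("See Smith 2021, doi:10.1000/xyz123, and the overview at "
          "https://example.edu/ai-guide for details.")
print(extract_sources(sample))
```

The point is not automation for its own sake; handing each group a checklist of candidate sources keeps class time focused on the interesting question: which of these are real?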
Experiential learning FTW.
3. Integrate generative AI into assignments.
Students in my introduction to tech comms class (undergraduate engineers, not creative writers) create blog posts, presentations and recommendation reports that are research-based. Not the personal essay!
- Brainstorm a topic (ChatGPT only)
- Create blog post titles (ChatGPT only)
- Edit for conciseness
Most students tell me that they want to learn how to write more concisely. I suggest that they paste part of their blog post or report into ChatGPT and direct the tool to suggest ways to be more concise (or sound more professional). Other writing tools make suggestions like these automagically. I ask only that students note that they used a tool when turning in the assignment.
- Revise text for spelling and grammar
Tools like Grammarly automagically highlight suggested improvements. So does Microsoft Word (although I think it misses things).
- Stimulate discussion
Students can ask ChatGPT for pros and cons of an issue or background on a theory. “Students can learn to appreciate other arguments and see both sides objectively before establishing an opinion,” according to James W. Pennebaker, PhD, a professor of psychology at the University of Texas at Austin. [Keep hallucinations in mind, however. It’s best if the students are acquainted with the topic.]
- Use ideas from other teachers!
Some of those creative ideas are already in effect at Peninsula High School in Gig Harbor, about an hour from Seattle. In Erin Rossing’s precalculus class, a student got ChatGPT to generate a rap about vectors and trigonometry in the style of Kanye West, while geometry students used the program to write mathematical proofs in the style of raps, which they performed in a classroom competition. In Kara Beloate’s English-Language Arts class, she allowed students reading Shakespeare’s Othello to use ChatGPT to translate lines into modern English to help them understand the text, so that they could spend class time discussing the plot and themes.
4. Develop and explain boundaries for use in your classes.
See whether your university (or school) has developed a policy on the use of generative AI. Such policies are more common now than they were in January 2023.
Regardless, develop a clear guide that details how students can and can’t use these tools in your class. The guide should include how to credit those tools.
ChatGPT 4.0 made these suggestions when I asked it to draft guidance for using generative AI tools like MidJourney and ChatGPT.
- Accuracy: ChatGPT’s responses may not always be entirely accurate, so students should always verify information obtained through the tool.
- Cost: Depending on the version and licensing, MidJourney can be expensive, which may not be feasible for all students.
- Learning curve: MidJourney can be complex, and students may need time to become proficient in its use.
- Overreliance: Depending too heavily on ChatGPT may hinder students’ ability to develop independent research and critical thinking skills.
- Plagiarism risk: Students must take care not to directly copy content generated by ChatGPT without proper citation, as it may be considered plagiarism.
5. Teach students how to credit the output of these tools.
Crediting the ideas of others is a key component of university ethos. Academic integrity rests on this concept.
There are (at least) two ways students might use algorithmic tools:
- Use AI to create content. Students need to credit the tool when incorporating this content (e.g., code, data, image, text or video) into their own work.
- Use AI for analysis, brainstorming or understanding an idea or topic. This is similar to using Google or Bing (especially since they now incorporate generative AI). Normally using AI as a learning tool does not require citation. However, functional use — e.g., accepting a suggested idea or editing prose — should be credited.
Suggested citation styles:
There is no “finally”
The challenge is not new. I’m guessing you did not know that in 1975, the National Advisory Committee on Mathematical Education (NACOME) suggested that students in grades 8-12 should be allowed to use calculators* in class and on exams. Five years later? All grades.
In 2017, then-acting NASA administrator Robert M. Lightfoot Jr. warned Futurism readers:
Lightfoot began by noting how quickly progress moves in today’s world, and how this may leave some young people (and some educators) at a loss: “By the time you are a junior in college, what you learned as a freshman is already obsolete.” He notes, of course, that some basics endure: “there are some fundamental skills that are required either way. If you are in a science program, you need science. If you are in a technology program, you need engineering and math. That’s just the bottom line” (emphasis added).
Our goal in the classroom should be guiding our students to think like engineers or journalists or doctors … not merely “know” what those professionals “know.”
What we “know to be true” can change pretty darn fast. The future needs critical thinking, not memorization. And more than a pinch of lifelong learning.
- ChatGPT generated a lesson plan for us and we taught it. Here’s what we learned
- Free AI resources for your classroom
- GenAI chatbot library for educators
- The ChatGPT Prompt Book
- ChatGPT: hallucinations about weather data
- How might intelligent machines affect the workforce?
- Microsoft fumbles again due to unbridled technological optimism at MSN
- Trust: the challenge facing “artificial intelligence”
- What might Geoffrey Hinton say about algorithmic developments such as ChatGPT and MidJourney?
- Yes, ChatGPT can answer questions. Can you trust it? Nope.
Substack (and other newsletters or publications)
- Human Librarian Interviews ChatGPT
- On Tech: A.I. Newsletter, NYT
- The Algorithm, MIT
- The Memo by LifeArchitect.ai
- Barnard College, student guide to generative AI
- Elon University, preparing students to be literate and critical AI users
- Kansas University, an instructor guide for easing into AI
- University of Illinois, generative AI guidance for students
- University of North Carolina, guidance for instructors
- University of Washington, Health Sciences Library guide to AI
* In 1975, a (rudimentary) calculator cost about $700 (real dollars, adjusted for inflation). You can buy a new, basic iPhone for $429.
K.E. Gill. Private communication, OpenAI, ChatGPT (version 4.0), a large language model. Sep. 20, 2023. https://chat.openai.com
K.E. Gill. Private communication, MidJourney, a text-to-image model. Sep. 20, 2023. https://www.midjourney.com