1. You need to talk to your students about AI either way.
Generative AI is brand-new technology, so it isn't safe to assume that students will automatically know whether or not they can use it in your course.
There are lots of different ways to use AI (or not) in higher education and plenty of space for faculty to make choices about what approach is best for them. That said, the sheer number of possibilities might create as much confusion for students as it has for faculty. Open a dialogue with students about the approach you’re taking as we learn together.
2. Be specific about what is and isn’t allowed and give a rationale for whatever policy you choose.
Expectations for when and how students may use AI will vary widely from course to course, so do set clear expectations in your syllabus.
Be specific about what is and is not allowed. Can students use AI to edit their own writing? During an open note test? To quiz themselves on course topics? If so, how should they cite or attribute this use?
This example from Gies College lists permitted uses (check out the link for the full statement):
Generative AI, such as OpenAI ChatGPT, Microsoft Copilot/Bing Chat, Google Bard, and others, can answer questions and generate text, images, and media. The appropriate use of generative AI will vary from course to course. Guidelines for using generative AI in this course are as follows:
- Follow only the specific permitted uses set by your instructor.
- Document and attribute all AI contributions to your coursework.
- Take full responsibility for AI contributions, ensuring the accuracy of facts and sources.
Permitted uses of generative AI in this course include:
- Shortening your own text
- Revising your own text for spelling and grammar
- Creating study aids (e.g., flashcards) for quizzes or exams
- Testing and practicing your knowledge of course topics
- Conducting basic research on course and assignment topics
This example from Salem University tells students that AI is not permitted because it interferes with the learning objectives of the course (check out the link for additional examples of syllabus statements):
Since writing, analytical, and critical thinking skills are part of the learning outcomes of this course, all writing assignments should be prepared by the student. Developing strong competencies in this area will prepare you for a competitive workplace. Therefore, AI-generated submissions are not permitted and will be treated as plagiarism.
No matter what policy you choose, share your rationale with students. Our students are adults and can tell when rules seem arbitrary. When you explain your rationale, whatever it is, you demonstrate trust and respect for your students.
3. Consider that we do not have an accurate tool to detect AI-generated text.
Most AI detectors were not designed to catch students using AI on assignments. Instead, they guess (yes, guess) the probability that AI was used in a piece of writing. Using them to pursue disciplinary action against students carries a high risk of causing harm.
This harm has the potential to hit vulnerable groups the hardest. According to Torrey Trust, “In an empirical review of AI text generation detectors, Sadasivan and colleagues (2023) found ‘that several AI-text detectors are not reliable in practical scenarios’ (p. 1). Additionally, the use of AI text detectors can be particularly harmful for English language learners, students with communication disabilities, and others who were taught to write in a way that matches AI-generated text or who use AI chatbots to improve the quality and clarity of their writing.”
Consider using other approaches – like scaffolding assignments to provide students with plenty of iterative feedback before they submit a final draft – to encourage academic integrity.
4. Remind students of terms of use.
Let’s be honest: few of us thoroughly review terms of use before trying a new tool. In the case of ChatGPT, though, they can help instructors communicate with students about how to use the technology appropriately. The ChatGPT terms of use explicitly warn users that user data may be used to improve services (section 3b), that outputs may not accurately reflect “real people, places, or facts” (section 3d), and that users may not “represent that output from the Services was human-generated when it is not” (section 2c). In other words, by using ChatGPT students have already agreed to be transparent about which parts of their work are human generated and which are not.
Consider giving students options to do this in appropriate ways. For example, an outright ban might have the unintended effect of encouraging students to lie or conceal their use. Asking students to cite sections of writing by AI, keep a journal about how they’ve used it in their writing process, or otherwise document or reflect on their use will allow them to move forward with guidance.
5. Don’t encourage or require students to input personal information into an AI.
AI tools use the data we input to improve their models. That means that if students input an essay discussing their personal history or other sensitive information, that data will be stored by OpenAI and used to further train the tool. If your assignment encourages students to be vulnerable, reflect on private aspects of their identity, or use personally identifiable information/data, using AI may be inappropriate.
As with any assignment that asks students to work online outside of UW systems, we recommend having an alternative assignment so that students can opt out of using AI. For example, if your assignment asks students to use ChatGPT to answer a prompt and then edit the answer, consider letting students instead answer the prompt themselves and then write a short reflection on their editing process.
Finally, remember that we are here to help!
Our librarians are information professionals and have experience adapting to a rapidly shifting information and education landscape.
Marisa Petrich, Instructional Design Librarian
Erika Bailey, Data and Digital Scholarship Librarian