Academic Integrity and AI

Instructional Insights | by Kelly Ahuna, Ph.D.

Published September 15, 2025

Accurate assessment of student knowledge and abilities is foundational to the work of education. Generative AI tools present an existential threat to that accuracy because students can outsource many assessments to AI, bypassing real learning and thus violating academic integrity. This raises a perplexing question for instructors: whether, when, and how to allow AI use in their courses.

This Instructional Insight provides guidance for addressing that question through the concept of “cognitive offloading,” or allowing tools to perform thinking tasks. It makes sense to permit AI when the tools help students fulfill course learning objectives, just as it makes sense to prohibit AI when the tools inhibit students from achieving those objectives. Ultimately, clarity of learning outcomes and careful design of the assessments that measure them are more important than ever.

Learning Objectives

  1. Analyze the concept of cognitive offloading and determine when its use—via tools like generative AI—supports or conflicts with course learning objectives.
  2. Assess the impact of student AI use on academic integrity and distinguish between ethical tool use and misuse that undermines authentic learning.
  3. Design transparent AI usage guidelines for your courses that align with learning goals and communicate clear expectations to students.

Guiding Questions for This Insight

  • What skills can be cognitively offloaded in my courses without undermining fulfillment of the learning objectives?
  • How can I help students understand when and how use of AI tools is appropriate in my coursework?
  • How does unauthorized student use of AI threaten academic integrity?
  • In what ways do instructor and student use of AI differ?

Resources From This Insight

Wiggins, G., & McTighe, J. (1998). What is backward design? In Understanding by Design.