
Students, faculty, staff share thoughts, concerns on evolving AI technology


By JAY REY

Published November 14, 2025


Students, staff and faculty gathered recently for a town hall meeting on one of the most pressing issues in higher education today: teaching, learning and academic integrity in the age of artificial intelligence.

Faculty shared classroom strategies and policies. Students asked for more clarity on when and how to use AI responsibly. Both agreed they need to navigate this new challenge together at a time when the technology is evolving faster than policies and pedagogies can keep pace.

“This is not just about faculty setting rules or students asking questions. It’s really about working together,” said Graham Hammill, senior vice provost for faculty affairs and dean of the Graduate School.

“Faculty and students have to be in dialogue as partners to uphold academic excellence and to ensure the responsible use of AI in education,” Hammill said. “We have to listen to each other’s concerns, share our experiences and together work toward guidelines and solutions that reflect our values as a larger community.”

The Oct. 28 event, attended in person or viewed online by more than 200 people, was part of an ongoing series of campus conversations intended to “foster transparency and inclusive conversation” about AI. It was also the first of the series to invite students into the discussion.

Panelists included Sarah MacDougall, president of the Graduate Student Association; Aisha Adam Bechir, president of the Student Association; Kiara Wisniewski, a student ambassador for academic integrity; Lara Hutson, teaching professor in the Department of Biological Sciences; Atri Rudra, chair of the Department of AI and Society; and Jay Barber, associate teaching professor of literacy and English education.

Carol Van Zile-Tamsen, associate vice provost and director of the Office of Curriculum, Assessment and Teaching Transformation, and Kelly Ahuna, director of the Office of Academic Integrity, also presented.

“The biggest thing I worry about, and I talk to students about this a lot, is what people are calling the ‘illusion of competence,’” Ahuna said. “AI tools can help us create things that look really good, but if you’re asked about that, or if you have to recreate it on your own without the AI tool, can you do that? Is the competence legitimate?”

Here are five takeaways from the forum.

Faculty are using a variety of AI strategies in the classroom.

In her upper-division courses, Barber often allows students to use AI on their second written draft, as long as they include a “reflection statement” explaining why they accepted or rejected the changes AI suggested.

“I’m actually finding the majority of students don’t want to take the easy way out and most of them really want to reflect on what’s going on with their writing, and with their voice, when they are using chatbots,” Barber said.

Rather than set a single AI policy for an entire course, Hutson sets one for each assignment.

“I try to make that quite clear for each assignment and I explain it every time I give those assignments,” Hutson said.

And Rudra’s overarching philosophy is to allow AI use in elective classes, but not in the more foundational courses.

“To learn, by definition, you have to struggle and the thing these models take away is struggle,” Rudra said.

Students expressed the need for greater consistency in the use of AI across campus.

With no university-wide policy, expectations can vary from class to class or department to department, MacDougall and Bechir said. Students have emphasized the need for clearer, more unified guidelines to help reduce any confusion or added stress, the two student leaders said.

Students cite growing AI angst.

Some student anxiety over AI policy and usage stems from the fear of being falsely accused of using AI to cheat, Bechir said.

“I can confidently say most students who are willing to go through the effort of using AI effectively are using it to help themselves learn and not to get out of doing work,” Bechir said. “The majority of us are not trying to cheat the system.”

Beyond the pressure to validate their own work, some students worry that their degree programs aren’t preparing them to use AI once they’re in the workplace, Bechir said.

There’s a time and a place for AI.

Even when faculty don’t allow AI in the classroom, students can still use it for a variety of applications, whether it’s making flash cards or creating a study guide, Wisniewski said.

“A lot of people will input their notes into a chatbot and ask, ‘Given this information, what questions could I be asked?’” Wisniewski said. “Or, ‘What short-answer questions might come from this? What essay questions might come from this?’”

Students and faculty need to keep an open line of communication.

“We know you’re anxious about getting caught up in a dishonesty situation when you’re not actually being dishonest,” said Van Zile-Tamsen. “So, it is important to review the syllabus for guidelines on AI usage so that you know where the instructor stands. And if you have any questions about what that means, you should go to your instructor and ask.”

“The end goal,” Wisniewski said, “is to have that clear communication with your professor. What can I use it on and what can I not use it on?”