campus news

Faculty, administrators weigh in on ChatGPT


ChatGPT, the new artificial intelligence chatbot that can articulately produce answers and writing assignments on command, is raising questions — not only about how best to use it in the classroom, but how to avoid widespread cheating.

By JAY REY

Published March 2, 2023


UB faculty can decide for themselves whether they allow their classes to use ChatGPT, the new artificial intelligence chatbot that can articulately produce answers and writing assignments on command.

But professors should first talk with their students about why it’s not allowed or how the chatbot could best be used as a classroom tool.

If faculty suspect students are being dishonest and submitting the chatbot’s work as their own, then that’s a violation of UB’s academic integrity policy and should be reported.

“‘How will you know?’ is the question we’re getting,” says Kelly Ahuna, director of the Office of Academic Integrity.

Those are just a few of the takeaways for a UB community still waiting to see how ChatGPT might be changing the campus landscape.

Launched in November 2022 by the artificial intelligence lab OpenAI, ChatGPT — which stands for Chat Generative Pre-trained Transformer — is widely considered the best artificial intelligence chatbot ever released to the general public. The application answers short prompts to deliver information, concepts and ideas in simple language, producing everything from math solutions to well-written essays.

Within two months of its launch, ChatGPT was already estimated to be the fastest-growing app in the history of the internet with more than 100 million users. Now, it’s raising questions, not only about how best to use it in the classroom but how to avoid widespread cheating.

That was the topic among administrators and faculty during a Feb. 14 “Lunch and Learn” hosted by the Faculty Senate IT Committee and Information Technology, with support from the Office of Academic Integrity and the Office of Curriculum, Assessment and Teaching Transformation. Some key takeaways:

  • UB has no universal policy for or against ChatGPT.

“Just like most things we do here at UB, instructors have complete academic freedom to do what they want to do in their classroom,” Ahuna explained. “It is up to individual instructors to determine how it can be used or not used in your courses.”

For example, she said, professors may not want to allow ChatGPT in introductory courses, as students develop their logic and writing skills.

But, Ahuna added, ChatGPT may be appropriate in more advanced courses to help support student learning and writing.

James Lenker, associate professor in the Department of Rehabilitation Science, School of Public Health and Health Professions, suggested that ChatGPT could be useful to help students brainstorm on a topic or write an outline to follow.

“ChatGPT is one tool,” Ahuna said. “But just as a math professor decides if students can use calculators on a particular exam, every instructor can decide how, when and if they’re going to allow ChatGPT — not only in their class as a whole but even on particular assignments.”

  • Faculty should set very clear guidelines and expectations about ChatGPT.

Faculty are encouraged to talk with their students early in the semester about whether ChatGPT is acceptable to use in the course, said Jeffrey Kohler, associate director of teaching transformation in the Office of Curriculum, Assessment and Teaching Transformation.

“Make sure that you are specifying your policies about AI use in your course, whether you want students to be using AI or not,” Kohler said. “Make sure that they understand not only the parameters that you set, but also the ramifications for use if you are asking them to refrain from doing so.”

“It’s just important that you are crystal clear about that as you are crystal clear about other expectations you have in the classroom,” Ahuna stressed.

  • Consider ways to detect ChatGPT and prevent misuse.

Faculty who suspect students are using ChatGPT dishonestly may want to use detection tools, which are proliferating along with use of the application, Ahuna said. Kohler suggested faculty collect in-class writing samples so they have something to compare. Ahuna recommended closely fact-checking papers.

“We know ChatGPT is making a lot of errors, so as the expert in your field, hopefully that’s something you can catch quickly and that might raise some suspicion,” Ahuna said.

The essays generated by ChatGPT are particularly telling, Kohler noted. They tend to be only five paragraphs long, each with three sentences. Faculty may want to consider assigning longer essays, he said.

Faculty should also try out ChatGPT for themselves, Lenker said. He offered a personal statement for a graduate school application as an example.

“We found that it followed that five-paragraph formula,” he said. “It was generally pretty well written but fairly generic. It didn’t really have much heart.

“It really gave us some sense of relief that you can spot the generality of it, and if we were looking at that essay and evaluating it as we normally might do, we wouldn’t rank it very highly.”

To discourage students from using the chatbot, faculty may want to tie assignments more closely to specific classroom material, slides and discussions, making it harder to rely on ChatGPT, Ahuna said.

“This is always going to be a game of cat and mouse,” said Kevin Cleary, clinical assistant professor of management science and systems in the School of Management.

“Certainly, it can be scary given the context; it can be exciting given the context,” Cleary said. “But I think it’s something that will reach a steady state and we learn to adapt to and ultimately learn to live with.”

  • Misusing ChatGPT violates UB’s academic integrity policy.

Students who submit “a report, paper, materials, computer data or examination (or any considerable part thereof) prepared by any person or technology (e.g., artificial intelligence) other than the student responsible for the assignment” are falsifying academic materials and are in violation of UB’s academic integrity policy.

If faculty suspect students are using the chatbot’s work as their own, they should set up a meeting with the students, share concerns and evidence, then allow them to respond, Ahuna said.

If faculty have a “preponderance of evidence” in their favor, they should choose a sanction, inform students of the decision by email and report the case to the Office of Academic Integrity.

Students do have the right to appeal. If there is reason to grant a hearing, one will be scheduled. If not, the appeal will be denied and the sanction will stand.

While there have been few reported cases involving ChatGPT thus far, the office should have a better handle on its prevalence as the semester goes on, Ahuna said.

The Office of Academic Integrity is happy to consult with faculty who have questions or concerns about how to proceed when they suspect misuse of ChatGPT in the classroom, Ahuna said.

READER COMMENT

I've used ChatGPT as a learning tool; it was helpful to me to understand the concepts and thinking paths in my engineering courses. As you stated, it makes a lot of errors. Therefore, every time I use ChatGPT, I read the answer to see if it is logical to me.

However, copy and paste should not be allowed. At least rewriting the answer could show how a student has read and absorbed the answer.

Ming Shing Poon