Published November 17, 2021
The temperatures are dipping, we have not seen sunlight in three weeks, and the Bills continue to play with our emotions, which can only mean the end of another fall semester has arrived. With it, of course, thoughts begin to turn toward holiday meals, the beginnings of a festive mindset, and the cheery prospect of course evaluations opening.
If you are teaching a standard 15-week course, you should have received a notification that your evaluations are available for your review before they are launched to students. Please review your courses carefully and, should anything be amiss, let us know at firstname.lastname@example.org so we can get it corrected for you. This is also the window for adding custom questions, which can elicit more targeted feedback from your students.
Today, though, we are here to talk about what is new and forthcoming in the world of course evaluations. The merger of the Office of Educational Effectiveness and the Center for Educational Innovation into the new Office of Curriculum, Assessment and Teaching Transformation will allow us to leverage greater analytic and outreach resources to make evaluations more reliable and useful to you, the instructor.
To begin with, many people even passively acquainted with student feedback on course evaluations are aware of the ongoing debates about potential bias. Research from beyond UB conflicts on whether, and to what degree, student bias affects course evaluation feedback. One of our first big initiatives, then, will be to examine UB evaluations specifically for bias against faculty based on gender, race, ethnicity, or area of study.
If you have been the victim of explicit bias in student feedback on course evaluations, there is a policy already in place for remediation. Please use the Information Intake Form from the Office of Equity, Diversity and Inclusion to report it.
Work is also beginning on a number of other initiatives to improve feedback validity and applicability. We are taking deeper dives into the factors that affect survey completion and response rates, including survey length (at what point does survey fatigue actually dissuade students from finishing?) and what effect, if any, mid-semester evaluations have on overall completion.
We are excited to move forward with a project that could offer tremendous insight not only into the courses that students complete but also those they do not. For obvious reasons, when a student drops a class in HUB, the system updates overnight to remove their registration in the evaluations for that course. For the past couple of semesters, however, withdrawing from a course has prompted students to take a “dropped class survey,” a much shorter survey that asks a number of questions about the circumstances of and reasons for withdrawing: Why did you withdraw? Did you consult your advisor? Do you plan on taking this course again? And so on. Now that we have amassed a few semesters’ worth of data, we can begin the more arduous process of running validity studies. If the data are determined to be reliable, these results should offer tremendous insight into why students drop courses and how departments can shape their offerings.
Over the next few semesters, we will also begin promoting the myFocus Instructor Development Tool, a powerful feature of SmartEvals that identifies instructors’ top strengths over time, allows them to share knowledge, and offers tips on which teaching adjustments will improve their scores the most. Keep an eye out over the next few months for information on how to get started!
Finally, and as always, the key to these exciting projects and initiatives remains the same: student responses! For data to be useful, they must be representative, and the best way to make them representative is to increase response rates. The Course Evaluations website has a number of easy-to-implement strategies and tips for increasing response rates, but the most effective may also be the easiest: make sure to remind your students!