research news

UB researcher demonstrates power of AI in social sciences


UB political scientist Rachael Hinkle applies computational text analysis to judicial politics, particularly in the U.S. Courts of Appeals. Photo: Meredith Forrest Kulwicki

By JACKIE HAUSLER

Published March 18, 2026

“Machine learning here is not about replacing human judgment. It is about complementing it.”
Rachael Hinkle, professor
Department of Political Science

Machine learning is often associated with chatbots and image generators, but within the social sciences, it’s a powerful tool for understanding institutions, inequality and the development of law. Few demonstrate this better than Rachael Hinkle, director of undergraduate studies and professor in the Department of Political Science, College of Arts and Sciences.

Her research illustrates how machine learning can help us ask and answer meaningful questions about the world around us. She applies computational text analysis to judicial politics, particularly in the U.S. Courts of Appeals, the tier just below the Supreme Court.

After clerking for federal judges in the U.S. District Court for the District of Arizona and on the U.S. Court of Appeals for the Sixth Circuit, Hinkle saw firsthand how institutional processes shape legal outcomes. “Seemingly small procedural rules can have enormous downstream consequences,” she says. “Beginning in the 1970s, the Courts of Appeals began designating some decisions as ‘published’ (precedential and legally binding) and others as ‘unpublished’ (non-precedential). This distinction matters greatly.”

For years political scientists studied only published cases, ignoring tens of thousands of unpublished ones. That gap set Hinkle on a research path that culminated in her book “Selective Publication in the U.S. Courts of Appeals.” The book shows that “unpublished” decisions are less likely to be cited, less likely to reach the Supreme Court and often perceived as less important overall.

Using a massive dataset of over 200,000 cases and advanced computational techniques, Hinkle examined how decisions in these two categories differ and what factors drive publication decisions across the 12 federal circuits. She found that judicial ideology, strategy, cooperation, race and gender all play roles in shaping outcomes and publication decisions.

“Machine learning here is not about replacing human judgment,” Hinkle notes. “It is about complementing it. Big data and AI enhance the human-centered, expert analysis that social scientists have always done incredibly well. While machine learning can help us extract and organize information, reduce complexity and uncover patterns invisible to the naked eye, it cannot ever replace critical thinking,” she adds.

Launched in January 2025, her “C3PO” dataset website brings transparency to this institutional practice, offering other social scientists and the public a comprehensive resource. Hinkle plans to keep expanding the datasets for public use and drawing continued attention to the practice.

In addition to her research and publications, Hinkle teaches undergraduate and graduate courses in the Department of Political Science, including PSC 302: Protecting Civil Liberties and PSC 543: Text as Data. The latter course has students exercise computational text analysis skills, transforming words into numbers that can be systematically analyzed.
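To make that idea concrete, here is a minimal sketch of “words into numbers”; the sentences are invented, and scikit-learn’s CountVectorizer is one common tool for the job, not necessarily the one used in PSC 543:

```python
# Build a document-term matrix: one row per document, one column per
# vocabulary word, each cell counting how often the word appears.
from sklearn.feature_extraction.text import CountVectorizer

opinions = [
    "The court affirms the judgment of the district court.",
    "The court reverses and remands for further proceedings.",
    "We affirm in part and reverse in part.",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(opinions)

print(vectorizer.get_feature_names_out())  # the learned vocabulary
print(X.toarray())                         # each opinion as a numeric vector
```

Once text lives in a matrix like this, the full statistical toolkit can be applied to it.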

Her students have the opportunity to jump right in and learn to code in languages like Python. They learn from the ground up how to implement tools, apply statistical techniques, reduce dimensionality in large datasets and validate model output against hand-coded data. In her classroom, students write, run, edit and re-run code, seeing both the process and the results in one place. Hinkle teaches her students to ask meaningful questions, measure carefully, validate rigorously and interpret responsibly.
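The dimensionality-reduction step might look something like the sketch below; the documents are invented, and truncated SVD (the technique behind latent semantic analysis) is one standard choice among several:

```python
# Compress many word-count columns into a few latent dimensions
# so that patterns across documents become easier to see.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "The panel affirms the conviction.",
    "The panel reverses the conviction on appeal.",
    "Summary judgment for the defendant is affirmed.",
    "The dismissal is reversed and remanded to the district court.",
]

X = TfidfVectorizer().fit_transform(docs)   # sparse document-term matrix
svd = TruncatedSVD(n_components=2, random_state=0)
X_reduced = svd.fit_transform(X)            # each document as a 2-D point

print(X_reduced)                      # low-dimensional coordinates
print(svd.explained_variance_ratio_)  # how much variation each dimension keeps
```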

“Machine learning, when used responsibly, can help us sort through information overload when analyzing consumer reviews, market research data, banking trends or campaign messaging,” says Hinkle. “The AI algorithms frequently produce averages based on patterns in historical human-generated data; that means bias can be baked into the process. So I am talking with my students about things like: Where was the algorithm trained? On what texts? Generated by whom? Is the information objective?”

One of Hinkle’s graduate students, Jason Arenos, is studying environmental legislation. Arenos is looking at how some bills labeled “environmental” strengthen environmental protections while others scale them back. He is reading and hand-coding hundreds of bills to investigate how that framing influences whether people think environmental legislation is changing for better or worse.

“They will train a model to classify the remaining texts, and suddenly new research questions become measurable,” says Hinkle. Arenos hopes to help answer questions like: “Are we protecting the environment more or less over time? Do partisan divisions affect passage rates? Do protective laws take longer to pass? Do they last longer once enacted?”
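In outline, that hand-code, validate, then classify workflow could look like the following; the bill snippets, labels and logistic-regression classifier are illustrative assumptions, not details from Arenos’s actual project:

```python
# Fit a classifier on hand-coded bills, check it against held-out
# hand-coded examples, then label bills no one has read yet.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hand-coded examples: a researcher read each bill and assigned a label.
texts = [
    "Expands clean air monitoring and tightens emission limits.",
    "Strengthens wetland conservation and habitat protections.",
    "Funds renewable energy and stricter water quality standards.",
    "Exempts refineries from emissions reporting requirements.",
    "Repeals limits on offshore drilling leases.",
    "Rolls back vehicle fuel economy standards.",
]
labels = ["protective"] * 3 + ["rollback"] * 3

# Hold out some hand-coded bills to test the model before trusting it.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=2, stratify=labels, random_state=0)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(X_train, y_train)

# Validation: compare predictions against labels the model never saw.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Once validated, classify the unread remainder of the corpus.
unread = ["Eliminates review requirements for pipeline permits."]
print(model.predict(unread))
```

A real project would validate on far more than two held-out bills, but the shape of the workflow is the same.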

Hinkle notes that machine learning in the social sciences is not about chasing technological trends; she and her colleagues were teaching computational methods long before generative AI became mainstream.

Across her projects and teaching, one theme remains constant: language is data. Hinkle plans to keep making the case that judicial opinions are not just narratives; they are measurable artifacts that reveal institutional behavior.