Published October 17, 2018
Hard decisions in criminal justice are increasingly being turned over to “smart machines” that use computer algorithms to analyze vast amounts of data, guiding choices such as where to deploy more police patrols.
An ambitious new project spearheaded by Jonathan Manes, assistant clinical professor of law and director of the School of Law’s Civil Liberties and Transparency Clinic, along with five UB colleagues, will examine ethical and social concerns raised by the increasing use of artificial intelligence, or AI.
The group has been awarded $25,000 in seed funding for a year-long series of projects as part of UB’s Germination Space program, which promotes interdisciplinary research on major societal challenges.
“There’s a widespread idea that the computer is objective, but we’re increasingly aware that the design and implementation of these tools involve questions of judgment and values,” Manes says. “This grant is meant to bring together people who are building AI tools — the computer scientists and engineers — and people who are thinking about how they affect society.
“We want to work in both directions — to build concerns about ethics, fairness and accountability into the tools as they’re developed, and to think about ways to regulate the tools after they’re built. I’m learning from my colleagues in computer science and other technology disciplines about how these systems work and how the law can respond in a way that preserves fairness and accountability.”
The researchers will look specifically at ways machine learning is applied to the criminal justice system for such purposes as recommending criminal sentences and bail decisions; determining how to allocate police resources geographically, also known as “predictive policing”; and processing video to identify suspects and suspicious behaviors.
Manes’ collaborators include Matthew Bolton, assistant professor of industrial and systems engineering; Kenneth Joseph, assistant professor of computer science and engineering; Atri Rudra, associate professor of computer science and engineering; Mark Shepard, associate professor of architecture and media study; and Varun Chandola, assistant professor of computer science and engineering, and principal investigator on the grant.
The project is expected to create new legal practice opportunities for student attorneys in the Civil Liberties and Transparency Clinic. Funds from the grant also will support legal efforts to obtain data and information about criminal justice algorithms, and a student research assistantship for work on legal and policy projects.
In addition, the researchers plan to organize a speaker series involving visits by six experts in “ethical AI” and a major workshop featuring an invited speaker and presentations by UB researchers.
The project’s long-term goal is to establish an interdisciplinary Center for Ethical AI to continue study of these emerging issues, with support from outside funding.
Input from professors and students in the School of Social Work should be included in projects dealing with social and criminal justice issues like these. Their humanistic perspective complements many disciplines, and AI is no different.
Best of luck on establishing the center!
Sonya Tareke