Hard decisions about criminal justice are increasingly being turned over to “smart machines” that use computer algorithms to analyze vast amounts of data. We are working to understand and help shape how artificial intelligence (AI) tools are used to make decisions that affect the rights and opportunities of citizens.
Given the rapid proliferation of big data and artificial intelligence methods, decisions that have previously been entrusted to human judgment are increasingly being made on the basis of machine learning and other computational tools. This remarkable shift to machine judgment, which is bound only to accelerate, raises profound questions at the intersection of machine learning and ethical decision-making about bias, efficacy, transparency, interpretability, and public accountability.
We are investigating these questions across a range of AI applications, not only by conducting foundational and applied research but also by directly engaging policymakers and private-sector innovators. In the short term, our work focuses on discrete case studies involving the use of AI in the criminal justice system. We focus on predictive policing algorithms, which are used to inform decisions about where and when to deploy police resources, and on recidivism risk scores, which are used in sentencing and bail determinations. Our ambition, however, is for a Center at UB that can address the diverse problems raised by both private- and public-sector uses of AI.
We are currently exploring a number of key projects across the UB campus, including work on: