Teaching Algorithms with Societal Context

Sanchit Batra, Veronica Vitale and Supratik Neupane

Poster image, "The Vicious Circle": poor people take longer to recover from illness; an ML-based algorithm then denies health insurance to poor people.


Undergraduate Student Project


Computers do not possess the ability to think, and so it might seem reasonable to assume that they are free from human biases. While a human may be swayed by their emotions, we can count on computers to deliver consistent, dependable results. Or can we?

My name is Sanchit Batra, and I am a senior in Computer Science at the State University of New York at Buffalo. I worked on this project under the guidance of Dr. Atri Rudra, alongside fellow teaching assistants, to bring it to fruition. The purpose of the project was to debunk the notion that computers are free from human biases and to explore the real-world consequences of the algorithms we write as computer scientists.

It is easy to forget that computers do only as they are told, through algorithms. If they are programmed by a biased human being, or if they operate on biased input data, the effect will almost certainly appear in their output. For example, suppose the computer is told that the Yankees have lost every match they have ever played, while the Red Sox have won every one since their inception. If you now ask the computer to predict the outcome of a Yankees vs. Red Sox match, it will predict a Red Sox win (with 100% certainty, no less). Perhaps that is the likely outcome, but what if the Yankees lineup has changed? What if they have a better coach now? What if the best Red Sox players are all injured?
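The baseball example can be made concrete with a toy sketch (the data and function names below are hypothetical, invented for illustration): a predictor trained only on one-sided win/loss records will be perfectly confident, no matter how stale or skewed that history is.

```python
# Hypothetical sketch: a naive frequency-based predictor trained on
# one-sided data produces an overconfident prediction.
from collections import Counter

def train_win_probability(history):
    """Estimate P(team wins) as the raw fraction of past wins."""
    counts = Counter(history)
    total = sum(counts.values())
    return counts["win"] / total if total else 0.5

# Biased input: the model has only ever seen Red Sox wins and Yankees losses.
red_sox_history = ["win"] * 50
yankees_history = ["loss"] * 50

p_red_sox = train_win_probability(red_sox_history)   # 1.0
p_yankees = train_win_probability(yankees_history)   # 0.0

# The model predicts a Red Sox win with 100% certainty, regardless of any
# roster changes, coaching changes, or injuries since the data was collected.
print(p_red_sox, p_yankees)
```

The bias here lives entirely in the input data: the code is "correct," yet its output is only as trustworthy as the history it was fed.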

Now, with that in mind, we ask ourselves: can we trust computers to deliver a fair judgment in court trials? Can we trust that internet routing algorithms do not discriminate against consumers? Can we trust algorithms, and thus computers, to be unbiased?


In the Fall 2019 undergraduate Algorithm Design course, we explored how unethical algorithm design has real-world consequences. Students played the role of an employee at an internet service provider (ISP) and faced decisions such as: do you favor high-paying clients, or do you ensure fair internet access for all? Students weighed these trade-offs and designed routing algorithms accordingly. Development tasks included building a routing simulator, generating test cases, and writing the grader code.
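The trade-off students faced can be sketched in a few lines. This is not the course's actual simulator; the client names and allocation functions below are hypothetical, chosen only to contrast a payment-weighted policy with an egalitarian one.

```python
# Hypothetical sketch of the ISP trade-off: divide limited bandwidth
# either in proportion to what each client pays, or equally for all.

def allocate_by_payment(clients, total_bandwidth):
    """Give each client bandwidth proportional to its payment."""
    total_paid = sum(pay for _, pay in clients)
    return {name: total_bandwidth * pay / total_paid for name, pay in clients}

def allocate_fairly(clients, total_bandwidth):
    """Give every client an equal share, regardless of payment."""
    share = total_bandwidth / len(clients)
    return {name: share for name, _ in clients}

clients = [("hospital", 10), ("streaming_giant", 90)]
print(allocate_by_payment(clients, 100))  # {'hospital': 10.0, 'streaming_giant': 90.0}
print(allocate_fairly(clients, 100))      # {'hospital': 50.0, 'streaming_giant': 50.0}
```

Neither policy is "the" right answer; the point of the assignment was that the choice of allocation rule is an ethical decision, not just a technical one.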

In a Spring 2020 upper-level course on the societal impact of algorithms, we looked at how bias inevitably creeps into algorithm output, particularly through biased input data. We attempted to debunk the myth that algorithms, because they are based on math, "have" to be unbiased. Development included creating a package that interfaced with the Vizier Data Exploration Tool and allowed a non-expert to explore the machine learning pipeline without writing code, along with analyzing unintended biases in models.

