Published July 6, 2022
Siwei Lyu, Empire Innovation Professor of Computer Science and Engineering, clearly saw the problem. Misinformation and disinformation had so polluted social media platforms that untrained users in many cases couldn’t distinguish fact from fiction, or just didn’t care to do so.
But his background as a computer scientist could help restore truth to the top of newsfeeds.
Lyu’s expertise in deepfakes, digital forensics and machine learning could stop online disinformation at the source before its damaging effects could further erode trust across the social media landscape. This was a technical problem that demanded a technical solution. It was simply a matter of deploying the algorithms he helped develop as authentication instruments capable of countering the opposing algorithms responsible for spreading lies.
But the journalists on the front lines of digital media didn’t embrace Lyu’s technical innovations. They knew that identifying fakes wasn’t enough. Users could unwittingly spread sensationalized fiction and bogus accounts, while curatorial algorithms amplified fakery by targeting those who (turning the aphorism on its head) might find fiction stranger (more entertaining or believable) than truth. And it doesn’t take much to turn a fundamental truth into a spicy online falsehood.
“The truth isn’t always the most interesting thing on social media. Truth is essential, but it can sometimes be boring,” says Lyu, who began to see more broadly the problem’s complexity.
Online disinformation is as much a human problem as a technical one. Its roots reach into social, cultural and psychological realms, Lyu says, echoing a classic article from the journal Science titled “The Tragedy of the Commons,” which stressed that not every problem has a technical solution.
In the case of disinformation, technology plays a critical preemptive role, but it can’t provide a complete solution on its own without input from other diverse fields.
Technology by itself, in this case, is a lever without a fulcrum.
“Up to three or four years ago, I was holding tightly to the belief that technology alone was the solution to combating misinformation and disinformation,” says Lyu. “But hearing from those journalists was the initial motivation for me to work with people outside my domain in ways that combine technical expertise with disciplines that understand the human factors required to solve this problem.”
And today Lyu is conducting that work with a multidisciplinary team of researchers at UB’s Center for Information Integrity (CII). Lyu and David Castillo, professor of Romance languages and literatures, serve as co-directors.
CII is a collaborative platform for research across the university. Similar centers exist around the country that address either the social impacts of disinformation or media reactions to the problem, but UB’s center will take a convergence approach.
Convergence research often focuses on a specific challenge that requires answering scientific questions with an understanding of history in the context of existing societal needs. Its intense integration of disciplines is deeper than a multidisciplinary perspective. Convergence reshapes paradigms and produces new frameworks, or even new disciplines, that can further help address goals.
“It’s the right time for a center like CII, and UB is the right place to bring this expertise together,” says Lyu.
The association of disinformation with social media can create the inaccurate perception that the problem arrived with the digital age. But that’s not so, according to Castillo.
“This problem is as old as humanity,” he says. “There are moments of historical acceleration, including the early modern period which includes the emergence of the printing press culture and mass media. The current age of inflationary media has created a new pattern of acceleration of misinformation and disinformation, which is tied to the emergence of social media.
“We can learn from those historical iterations of the problem.”
While technology works to detect shams, the center can explore and understand why fakery is appealing.
“We need to figure out and explain to people why this is so attractive,” says Castillo. “We need psychologists and media experts, but we also need to understand the economics of the problem. Misinformation and disinformation are profitable commodities for social media companies because they increase audience size, which translates into greater advertising revenue.
“The business model relies on how many people follow a trending topic, not the integrity of the trending topic they’re following. Often, what’s false has more audience potential than what’s true.”
Improving users’ awareness is critical, Lyu says. Greater awareness can inoculate users against misleading information. It’s a preemptive approach, rather than a forensic, after-the-fact technical one.
“I think the key lies in users being aware and mindful of falsified information on social media,” he says. “The solution lies largely in the hands of users, and teaching that, in my opinion, is more important than government regulation or isolated technical solutions.”
The center has taken the lead on a multi-institutional Deception Awareness and Resilience Training program (DART) that’s working to develop research and educational platforms with tools and teaching techniques designed to improve misinformation awareness and increase resilience.
Lyu is principal investigator on a multi-university team that includes Castillo, Srihari and other CII members. The team has received a $750,000 National Science Foundation grant to collectively develop the digital literacy tools and education needed to fight online disinformation.
The center’s long-term goal is to create a set of adaptable literacy and technical tools that can be customized across demographics. The first step will address those age 60 and older, the group most vulnerable to deception and scams, according to Castillo.
Plans are in place to work with the Amherst Center for Senior Services and the Buffalo and Erie County Public Library to test the approach. CII will also work with K-12 students and educators to improve digital literacy education.
Disinformation’s digital-age permutation arose from a particular naivety about how internet platforms might evolve, Lyu explains. He says what exists today is not what the designers intended.
“In the initial stages of development, no one was thinking seriously about the dark corners of this remarkable technical development,” says Lyu. “Now we’re left with a problem to fix and CII can confront existing disinformation and help users navigate the disinformation that’s yet to come.”
It’s an ironic historical moment considering that when the internet was created, and later when social media emerged, both developments carried an underlying presumption of unity. These were tools that would bring people closer together and facilitate cooperation among them.
“But what we see transpiring has resulted in isolation,” says Castillo. “In fact, we have never been as isolated and fragmented as a society as we are today, something that’s driven largely by social media silos.
“If we don’t get a hold of this problem, democracy will collapse, we won’t be able to reverse climate change and we will continue to suffer public health crises, leading to countless preventable deaths.”
If misinformation isn’t addressed, denialism will grow, says Lyu, who reiterates that while ease of communication and an improved flow of information underscored the creation of the internet, its evolved form is flawed.
“All the hope built into that foundation is being undone,” he says. “With CII we have an opportunity to set an example for how crossing the boundaries between STEM and non-STEM fields can have previously unimaginable benefits.
“I’m excited to see how our center develops.”