Legal scholar advocates for stronger protections against cyber scams


Karla Lellis says social media, AI and other digital tech can leave people vulnerable to harm

Release Date: October 17, 2025

“Across the United States, people face growing data breach and AI-driven harms – fraud, deepfakes and sextortion. So we need clear guidance on prevention, evidence and fast redress.”
Karla Lellis, undergraduate lecturer
University at Buffalo School of Law

BUFFALO, N.Y. – With extensive expertise in cybersecurity, data breaches and the malicious use of artificial intelligence, University at Buffalo legal scholar Karla Lellis has developed solutions for her students, unsuspecting agencies and others vulnerable to these risks and their enduring damage.

“Across the United States, people face growing data breach and AI-driven harms – fraud, deepfakes and sextortion,” says Lellis, who joined the UB School of Law this fall as an undergraduate lecturer in law. “So we need clear guidance on prevention, evidence and fast redress.”

Lellis researches how legal systems in the United States, Brazil and the European Union recognize “harm” after data breaches, and how damages are assessed. Courts in Brazil and the EU recognize psychological harm following a data breach as grounds to award compensation. U.S. rules on standing doctrine and damages are narrower and differ significantly.

Personal data protection should be recognized as a fundamental right, according to Lellis. But proving psychological injury remains difficult due to the subjective nature of mental injury and the variation in legal standards across jurisdictions.

“The core question is how do these jurisdictions define and recognize harm in cybersecurity class actions and the downstream legal, economic and policy effects,” says Lellis. That includes determining who has standing, what evidence courts require, how damages are calculated and whether remedies actually deter future breaches.

Put simply: Definitions decide who gets justice.

Lellis defines data breaches as “the unauthorized access, acquisition or disclosure of protected information that compromises its confidentiality, integrity or availability.” These threats have become increasingly common, particularly on social media platforms such as Instagram, Facebook and TikTok. They also pose what Lellis calls “outsized risks” to those with limited resources – small legal or health clinics, schools, nonprofits and other vulnerable agencies.

Lellis says those most at risk are women and children, who are often the targets of harmful AI tools (deepfakes, voice cloning, synthetic imagery) and sextortion – a form of blackmail that threatens the release of sexual photos, videos or other content.

This often triggers dire consequences for victims, including financial damage, reputational injury, and illegal trafficking and reuse of a victim’s images and data. The harmful effects include psychological injury and future harm – the continuing risk that leaked information will resurface for identity theft, stalking, renewed extortion or deepfake abuse months or even years after the initial breach.

Lellis’ publications include “Engaging the Social Media Generation: Modern Approaches to Teaching Cybersecurity Law,” published in CIO Review. This and other work from Lellis maps the chain of how leaked personal information (such as phone numbers or images) can trigger automated targeting, online grooming and ultimately coercion when combined with AI-generated media.

“I analyze how courts might handle proof of causation, which remedies are most suitable – such as injunctive relief, rapid takedown of manipulated content and authenticity or traceability requirements for digital media,” says Lellis, “and why collective actions or class proceedings may be the only scalable way to ensure redress for widespread digital harm.”

Lellis’ expertise has direct implications for UB and agencies in Western New York. She integrates a “law + tech policy lab” model in which students build practical toolkits (impact-assessment checklists, harm measurement matrices, incident response templates) that local organizations can use. Lellis has developed a message for her students to help counter cybersecurity data breaches and harmful AI.

“Measure the harm and its long-tail risks, align liability,” she says. “And build the cost of data breaches into law and market systems to drive deterrence. When prevention fails, we need fast, fair redress.

“Good governance turns technical risk into legal accountability through clear standards, rules of evidence and enforceable remedies,” she says.

Media Contact Information

Charles Anzalone
News Content Manager
Educational Opportunity Center, Law,
Nursing, Honors College, Student Activities

Tel: 716-645-4600
anzalon@buffalo.edu