University at Buffalo - The State University of New York
UBNow

News and views for UB faculty and staff

Q&A

Ken Regan

Published July 2, 2015

By MICHAEL FLATT

Reprinted from AtBuffalo

When the chess world suspects someone of having cheated in a tournament, Ken Regan, UB associate professor of computer science, is the guy who gets the call. Using a database of tens of thousands of top-level games, Regan, himself an international chess master, has devised a program that can help determine whether a player is playing like a human or like a computer.

Why do people come to you when they think someone has broken the rules?

KR: They come to me because I'm so far the only one with a scientifically rigorous and vetted model for determining whether the frequency of agreement with a computer — which is to say the cognitive style of a game — is nonhuman. Humans blunder and don't consistently make the best move available. Players using chess programs will usually play the best available move.
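Regan's actual model is considerably more sophisticated, weighting each move by position difficulty and the player's rating. As a minimal sketch of the underlying statistical idea only, one could test whether a player's engine-agreement rate is implausibly high against an assumed flat human baseline (the function name, baseline value, and numbers below are illustrative, not taken from the article):

```python
from math import sqrt

def match_rate_zscore(engine_matches: int, total_moves: int,
                      baseline: float = 0.55) -> float:
    """Z-score of an observed engine-agreement rate against a
    hypothetical human baseline match rate.

    This is a toy one-proportion z-test; the real model fits a
    per-player, per-position expectation rather than a flat baseline.
    """
    observed = engine_matches / total_moves
    # Standard error of a proportion under the baseline hypothesis.
    se = sqrt(baseline * (1 - baseline) / total_moves)
    return (observed - baseline) / se

# Hypothetical game: 54 of 60 moves match the engine's first choice.
z = match_rate_zscore(54, 60)
```

A z-score this far above zero would flag the games for closer scrutiny; a single suspicious score is evidence for an investigation, not proof of cheating.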

How does someone go about cheating in a game of chess? Does it usually involve hiding a smartphone?

KR: Smartphones are a main culprit, but they're not necessarily the only means. People have hidden computers in their shoes, as in the famous John von Neumann case at the World Open. There have been people caught in bathrooms looking at handheld computers. There was also an alleged case where a code was used to transmit moves from Paris to Russia by text message, and the moves were then conveyed to the player by having people in the audience move between seats corresponding to squares on the board.

Could somebody find a way to use a computer in a manner that you couldn’t detect?

KR: Well, that’s the beauty. I don’t care how the moves were procured. All you do is send me the moves. I have my own analyzer, which is what sets the probabilities. So I actually don’t care how they were obtained.

What implications might your research have for artificial intelligence?

KR: Former world chess champion Garry Kasparov has made the point that my program performs a kind of inverse Turing test. The Turing test is, “Can you program a computer to play like a human so that a person looking at the games cannot tell it’s a computer?” I think you could use my model to generate some fairly convincingly “human” games.