Routledge, a leading publisher that champions the knowledge-maker, has announced the publication of a book by co-authors Jobst Landgrebe and Barry Smith. The book, Why Machines Will Never Rule the World: Artificial Intelligence without Fear (Routledge, 2022), presents the core argument that an artificial intelligence that could equal or exceed human intelligence—sometimes called artificial general intelligence (AGI)—is impossible for mathematical reasons.
The book offers two specific reasons for this claim:
1. Human intelligence is a capability of a complex dynamic system—the human brain and central nervous system.
2. Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer.
In supporting their claim, the authors marshal evidence from mathematics, physics, computer science, philosophy, linguistics, and biology, organizing their book around three central questions: What are the essential marks of human intelligence? What is it that researchers try to do when they attempt to achieve "artificial intelligence" (AI)? And why, after more than 50 years, are our most common interactions with AI, for example with our bank’s computers, still so unsatisfactory?
Landgrebe and Smith show how a widespread fear about AI’s potential to bring about radical changes in the nature of human beings and in the human social order is founded on an error. There is still, as they demonstrate in a final chapter, a great deal that AI can achieve which will benefit humanity. But these benefits will be achieved without the aid of systems that are more powerful than humans, which are as impossible as AI systems that are intrinsically "evil" or able to "will" a takeover of human society.
"It’s a highly impressive piece of work that makes a new and vital contribution to the literature on AI and AGI. The rigor and depth with which the authors make their case is compelling, and the range of disciplinary and scientific knowledge they draw upon is particularly remarkable and truly novel."
Shannon Vallor, Edinburgh Futures Institute, The University of Edinburgh
Release Date: August 22, 2022
BUFFALO, N.Y. – Elon Musk in 2020 said that artificial intelligence (AI) within five years would surpass human intelligence on its way to becoming “an immortal dictator” over humanity. But a new book co-written by a University at Buffalo philosophy professor argues that won’t happen – not by 2025, not ever!
Barry Smith, PhD, SUNY Distinguished Professor in the Department of Philosophy in UB’s College of Arts and Sciences, and Jobst Landgrebe, PhD, founder of Cognotekt, a German AI company, have co-authored “Why Machines Will Never Rule the World: Artificial Intelligence without Fear.”
Their book presents a powerful argument against the possibility of engineering machines that can surpass human intelligence.
Machine learning and all other working software applications—the proud accomplishments of those involved in AI research—are for Smith and Landgrebe far from anything resembling the capacities of humans. Further, they argue that any incremental progress unfolding in the field of AI research will in practical terms bring it no closer to the full functionality of the human brain.
Smith and Landgrebe offer a critical examination of AI’s unjustifiable projections, such as machines detaching themselves from humanity, self-replicating, and becoming “full ethical agents.” There cannot be a machine will, they say. Every single AI application rests on the intentions of human beings – including intentions to produce random outputs.
This means the Singularity, a hypothetical point at which AI becomes uncontrollable and irreversible (like the Skynet moment in the “Terminator” movie franchise), is not going to occur. Wild claims to the contrary serve only to inflate AI’s potential and distort public understanding of the technology’s nature, possibilities and limits.
Reaching across the borders of several scientific disciplines, Smith and Landgrebe argue that the idea of an artificial general intelligence (AGI)—the ability of computers to emulate and go beyond the general intelligence of humans—rests on fundamental mathematical impossibilities that are analogous in physics to the impossibility of building a perpetual motion machine. AI that would match the general intelligence of humans is impossible because of the mathematical limits on what can be modelled and is “computable.” These limits are accepted by practically everyone working in the field; yet researchers have thus far failed to appreciate their consequences for what an AI can achieve.
“To overcome these barriers would require a revolution in mathematics that would be of greater significance than the invention of the calculus by Newton and Leibniz more than 350 years ago,” says Smith, one of the world’s most cited contemporary philosophers. “We are not holding our breath.”
Landgrebe points out: “As can be verified by talking to mathematicians and physicists working at the limits of their respective disciplines, there is nothing even on the horizon which would suggest that a revolution of this sort might one day be achievable. Mathematics cannot fully model the behaviors of complex systems like the human organism.”
AI has many highly impressive success stories, and considerable funding has been dedicated toward advancing its frontier beyond the achievements in narrow, well-defined fields such as text translation and image recognition. Much of the investment to push the technology forward into areas requiring the machine counterpart of general intelligence may, the authors say, be money down the drain.
“The text generator GPT-3 has shown itself capable of producing convincing outputs across many divergent fields,” says Smith. “Unfortunately, its users soon recognize that mixed in with these outputs there are also embarrassing errors, so that the convincing outputs themselves begin to appear as nothing more than clever parlor tricks.”
AI’s role in sequencing the human genome led to suggestions for how it might help find cures for many human diseases; yet, after 20 years of additional research (in which both Smith and Landgrebe have participated), little has been produced to support optimism of this sort.
“In certain completely rule-determined confined settings, machine learning can be used to create algorithms that outperform humans,” says Smith. “But this does not mean that they can ‘discover’ the rules governing just any activity taking place in an open environment, which is what the human brain achieves every day.”
Technology skeptics do not, of course, have a perfect record. They’ve been wrong about breakthroughs ranging from space flight to nanotechnology. But Smith and Landgrebe say their arguments are based on the mathematical implications of the theory of complex systems. For mathematical reasons, AI cannot mimic the way the human brain functions. In fact, the authors say that it’s impossible to engineer a machine that would rival the cognitive performance of a crow.
“An AGI is impossible,” says Smith. “As our book shows, there can be no general artificial intelligence because it is beyond the boundary of what is even in principle achievable by means of a machine.”