The Supposed Looming Specter of Artificial General Intelligence


Published October 18, 2023

Optimism about the promise of artificial intelligence has accompanied research and development in the field for most of the past century. Skepticism, however, has continually served to temper delusions of grandeur that might overlook the daunting challenges of developing machines or computers capable of surpassing human abilities in holistic, comprehensive intelligence (often referred to as artificial general intelligence).

For example, in 1950 Alan Turing’s famous paper, “Computing Machinery and Intelligence,” offered an optimistic appraisal of the promise of machine intelligence. By 1980, however, the American philosopher John Searle had notably and deeply challenged some of the core theoretical presumptions at the heart of evaluating machine intelligence (Searle, 1980). Nevertheless, across the history and potential trajectory of artificial intelligence research and development, the skeptical voice has tended to be far less prevalent, and far less effective, in tempering the allure, optimism, and fears of those who envision a much less encumbered path.

Since the turn of the millennium, we have seen a wave of books presenting speculative visions of the future of artificial intelligence in our lives. The most notable of these books have been geared toward the common, non-expert reader and have caused quite a popular stir; perhaps, like me, you have read one or more of them yourself. One example is Ray Kurzweil’s The Singularity Is Near (2005). In short, Kurzweil’s bestselling book hypothesized that evolutionary progress and technological progress coincide so as to reliably foreshadow an impending singularity: a moment at which machine or technological intelligence will surpass human intelligence. Similarly, Nick Bostrom’s more recent bestseller, Superintelligence: Paths, Dangers, Strategies (2014), examined potential paths to superintelligence beyond present, comprehensive human capabilities. Bostrom’s discussion was not limited to machine intelligence but also extended to brain-computer interfacing, biological enhancement, and eugenics. As the title suggests, Bostrom’s was a particularly fearful vision of the future of artificial intelligence research and development.

Nevertheless, two University at Buffalo philosophy department members, SUNY Distinguished Professor Barry Smith and Senior Research Associate Jobst Landgrebe, have recently published Why Machines Will Never Rule the World: Artificial Intelligence Without Fear (2023), which helps cut through the noise surrounding the trajectory of artificial intelligence research and development. The book is especially insightful on a point that is salient for understanding the challenge of artificially creating intelligence on par with our own: the limitations of mathematical modeling. Drawing on a comprehensive, multidisciplinary body of knowledge, Landgrebe and Smith effectively outline just how complex the brain and nervous system are and, further, the seemingly intractable barriers we face in mathematically modeling the diverse and complex functions that comprise human intelligence (e.g., language and conversation, social behavior, morality).

What their perspective pulls into focus is that, while technological innovation is rapidly accelerating, the same trajectory does not necessarily hold for machine intelligence. In fact, a comprehensive assessment of artificial intelligence research suggests quite the opposite: developing, and gluing together, the countless theoretical and architectural components needed to build machinery capable of human-level intelligence remains a quagmire. As educators, we should let this theoretical point guide us through the foreseeable future, so that we can navigate the coming educational landscape without unnecessary fears and distractions occupying our thoughts. This should free us to focus on the more tangible problems that are sure to disrupt educational spaces.

While scholars may disagree on the promise of the broader quest for artificial general intelligence, the current state of artificial intelligence research and development offers little compelling evidence that we are anywhere near machines rivaling the holistic and dynamic capabilities of human intelligence. Therefore, as new AI technologies emerge, we must remember that, for the foreseeable future, the ghosts in (or behind) the machines will be our students. There has never been a better time to focus on developing the moral compass of each and every student, so that they are capable of ethically operating and utilizing the increasingly potent tools available to them. As the old adage goes: “with great power comes great responsibility.” Today’s students are unique in that they are pioneers in a learning environment that increasingly calls upon them to safeguard their own development amidst this growing power and autonomy. How we will cultivate and guide this most sacred form of human intelligence is the task before us.

References

  1. Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. New York: Oxford University Press, 2014.
  2. Kurzweil, Ray. The Singularity Is Near. New York: Penguin Books, 2005.
  3. Landgrebe, Jobst, and Barry Smith. Why Machines Will Never Rule the World: Artificial Intelligence Without Fear. New York: Routledge, 2023.
  4. Searle, John. “Minds, Brains, and Programs” (1980). In The Philosophy of Artificial Intelligence, edited by Margaret A. Boden, 67-88. New York: Oxford University Press, 1990.
  5. Turing, Alan. “Computing Machinery and Intelligence” (1950). In The Philosophy of Artificial Intelligence, edited by Margaret A. Boden, 40-66. New York: Oxford University Press, 1990.