AI Today and Tomorrow Series #2: Artificial General Intelligence

In response to readers’ questions about AI, BigDATAwire asked me to write a series of columns on AI, a topic that is generating excitement and concern in the worldwide HPC community and beyond. The introductory column appeared in January. As I said in the intro column, the series doesn’t aim to be definitive. The goal is to lay out a range of current information and opinions on AI for the HPC-AI community to consider. No one has the final take on AI, so I also encourage comments and suggestions at steve@intersect360.com.
AI Evolutionary Phases
I thought it would make sense to put AI in perspective early on by devoting this column to the aspirational goal of artificial general intelligence (AGI). AGI is generally seen as the middle phase in a three-phase AI evolution:
- In today’s narrow, or weak, phase, AI is already very useful but largely dependent on humans to define the goal, e.g., “draft a 10-page report on the status of realistic synthetic data in AI,” or “scan this MRI to identify microcalcifications indicative of early breast cancer, and add resolution to the image for human observers.” The mainstream AI market is heavily exploiting early AI for narrow tasks that mimic a single, isolated human ability, especially visual or auditory understanding, for everything from Siri and Alexa to reading diagnostic images with superhuman accuracy. But for all its broad utility, narrow AI generally can’t do things it hasn’t been trained to do. It is largely confined to “path problems,” using LLMs or non-LLM models to reach human-defined goals in a stepwise fashion.
- AGI is the second phase and inaugurates so-called strong AI, where machines are able to “think” and act more independently. AGI can begin tackling insight problems, where humans are unable or unwilling to define the goal and the AI machine ventures beyond the limits of its trained model to pursue innovative, even breakthrough, solutions. In this phase, AI machines are versatile experiential learners and can be trusted to make difficult decisions in real time, including life-and-death decisions in medicine and in driving.
- The third phase, superintelligence, would enable AI devices to surpass the limits of human thinking, some would say in a continuing pattern of self-actuated advances. This is where things could get really scary, say, if the computer or other AI device decides it no longer needs us and severs the HMI, the human-machine interface. But superintelligence, if it happens, presumably lies much farther in the future.
Will AGI Happen?
I think most people would agree AGI isn’t here yet. Despite its broad applicability, today’s generative AI remains in the weak AI phase. A few years ago, I led a study that asked more than 50 AI experts around the world if and when AGI would happen. The sizeable majority who believed it would (some didn’t) said it would take, on average, 87 years. I expect recent generative AI advances would shorten that average estimate, but few AI leaders we talk with today see AGI happening in the next 20 years.
One non-technical challenge is that there is no strong consensus definition of AGI, though the Turing test is a convenient placeholder. In fact, there is no strong consensus definition of intelligence itself (human or other). We don’t know enough yet about how we humans or our fellow creatures think. So, it’s probably useful to point out, as many have, that the AGI goal should be to create thinking machines that can do things usually reserved for humans, without stipulating that the machines must do them in the same way humans do.
Other Challenges En Route to AGI
Things are definitely moving forward, thanks in no small part to researchers advancing AI practices in the worldwide HPC community, but important challenges are still being worked on. They include:
- Making the operations of multilayered neural networks explainable and trustworthy (though there is no firm consensus definition of explainability yet).
- Ramping up the availability of realistic synthetic data to address the sometimes serious shortage of usable, high-quality data in some domains (e.g., deidentified, HIPAA-compliant patient data in healthcare).
- Advancing multimodal AI that can concurrently mimic more than one human sense.
AGI and the Mind-Body Debate
The sometimes arcane schools of thought on what’s needed to achieve AGI reflect the mind-body debate that has occupied philosophers since the time of Plato. Are mind and body separate things, as Descartes argued, or are they one and the same?
At one extreme are so-called computationalists, who believe continual technological progress alone—such as replicating the structure of the human brain and sensory apparatus in detail, from neural networks upward—will be adequate for achieving AGI. That progress might include sophisticated sensors that enable AI devices to directly experience the natural world—think self-driving cars—and heuristics that allow the devices to move beyond logic to address everyday situations the way humans do, with quick solutions that kind of, sort of work most of the time.
Extreme computationalists say that these digital replicas, if sufficiently detailed, will experience the same range of emotions as humans, including happiness, sadness, frustration, and the rest. Form equals function. In any case, these folks think AGI will arise spontaneously once the right components have been assembled in the right way. Mind is not something separate from the world of physical things, they argue.
Not surprisingly, others think differently about the road to AGI. Those in the tradition of Descartes believe that mind exists separately from physical things, and that harnessing mind or consciousness for AI devices will be extremely difficult, maybe impossible. And a subset of so-called panpsychists believes mind is an innate property of the universe, right down to individual elements, and for that reason should extend to AGI devices as well.
Last Word
So much for a brief discussion of AGI. The philosophical debate around AGI is far from the workaday world of today’s AI, but serves to reinforce that AGI isn’t on the near horizon yet—and reaching that aspirational phase may require some unexpected turns. Again, comments are welcome at steve@intersect360.com.
BigDATAwire contributing editor Steve Conway’s day job is senior analyst with Intersect360 Research. Steve has closely tracked AI developments for over a decade, leading HPC and AI studies for government agencies around the world, co-authoring an AI primer for senior U.S. military leaders with the Johns Hopkins University Applied Physics Laboratory (JHUAPL), and speaking frequently on AI and related topics.