What Is Artificial General Intelligence (AGI)?
Artificial general intelligence (AGI) is a branch of theoretical artificial intelligence (AI) research that aims to develop AI with a human level of cognitive function, including the ability to self-teach. However, not all AI researchers believe it is even possible to develop an AGI system, and the field is divided on what constitutes “intelligence” and how it can be accurately measured.
Other terms for AGI include robust AI or general AI. These theoretical forms of AI stand in contrast to weak AI, or narrow AI, which can perform only specific or specialized tasks within a predefined set of parameters. AGI, by contrast, would be able to autonomously tackle a variety of intricate problems across many domains of knowledge.
How Artificial General Intelligence (AGI) Works
Given that AGI remains a theoretical concept, opinions differ as to how it might ultimately be realized. According to AI researchers Ben Goertzel and Cassio Pennachin, “‘general intelligence’ does not mean exactly the same thing to all researchers.” However, “loosely speaking,” AGI refers to “AI systems that possess a reasonable degree of self-understanding and autonomous self-control, and have the ability to solve a variety of complex problems in a variety of contexts, and to learn to solve new problems that they didn’t know about at the time of their creation.”
Because of the nebulous and evolving nature of both AI research and the concept of AGI, there are various theoretical approaches to how it could be created. Some of these include techniques such as neural networks and deep learning, while other methods propose constructing large-scale simulations of the human brain using computational neurobiology.
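To make the neural-network technique mentioned above concrete, here is a minimal, hypothetical sketch: a single artificial neuron (a perceptron) that learns the logical AND function from examples. This is an illustrative toy, not an AGI method — the function names and training data are assumptions for demonstration only — but it shows the basic idea of a system adjusting its own parameters in response to experience.

```python
# Toy perceptron: a single neuron that learns the AND function.
# All names and data here are illustrative assumptions.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for a single-neuron binary classifier."""
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Predict 1 if the weighted sum crosses the threshold.
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            # Nudge weights toward the correct answer.
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND_DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND_DATA)
outputs = [1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
           for (x1, x2), _ in AND_DATA]
print(outputs)  # [0, 0, 0, 1]
```

Deep learning stacks many such units into layers, but the core loop — predict, compare, adjust — is the same; whether scaling this idea up can ever yield general intelligence is exactly what AGI researchers debate.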
Artificial General Intelligence (AGI) vs. Artificial Intelligence (AI)
While artificial intelligence (AI) currently encompasses a vast range of technologies and research avenues that deal with machine and computer cognition, artificial general intelligence (AGI), or AI with a level of intelligence equal to that of a human, remains a theoretical concept and research goal.
AI researcher Peter Voss defines general intelligence as having “the ability to learn anything (in principle).” Under his criteria, AGI’s learning ability would need to be “autonomous, goal-directed, and highly adaptive.” AGI is generally conceptualized as being AI that has the ability to match the cognitive capacity of humans, and is categorized under the label of strong AI. (Artificial super intelligence [ASI] also falls under the strong AI category; however, it alludes to the concept of AI that transcends the function of the human brain.)
In comparison, most of the AI available at this stage would be categorized as limited AI, or narrow AI, as it has been developed to concentrate on specific tasks and applications. It’s worth noting, however, that these AI systems can still be immensely potent and complex, with applications spanning from autonomous vehicle systems to voice-activated virtual assistants; they still rely on some level of human programming for training and accuracy.
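A hypothetical sketch can illustrate what “working within a predefined set of parameters” means in practice. The keyword list and function below are invented for demonstration: the system performs its one narrow task (flagging spam-like messages) competently, but it has no ability to generalize to anything outside that task — the gap AGI research aims to close.

```python
# Illustrative "narrow AI": a rule-based spam flagger.
# The keyword vocabulary is a made-up example, not a real product.

SPAM_KEYWORDS = {"winner", "free", "prize", "urgent"}

def is_spam(message: str) -> bool:
    """Flag a message if it contains any known spam keyword.

    The system is fixed in scope: change the task (say, to
    translation) and it is useless without human reprogramming.
    """
    words = {w.strip(".,!?").lower() for w in message.split()}
    return bool(words & SPAM_KEYWORDS)

print(is_spam("You are a WINNER! Claim your free prize"))  # True
print(is_spam("Meeting moved to 3pm"))                     # False
```

An AGI system, by the definitions quoted earlier, would instead learn new tasks like this one autonomously, without its designers having anticipated them.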
Examples of Artificial General Intelligence (AGI)
Because AGI remains a developing concept and discipline, it is debatable whether any current examples of AGI exist. Researchers from Microsoft, in tandem with OpenAI, claim that GPT-4 “could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.” This is due to its “mastery of language” and its ability to “solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting,” with capabilities that are “strikingly close to human-level performance.” However, Sam Altman, CEO of OpenAI, the company behind ChatGPT, says that ChatGPT is not even close to an AGI model.
In the future, examples of AGI applications might include advanced chatbots and autonomous vehicles, both domains in which a high level of reasoning and autonomous decision making would be required.
Types of Artificial General Intelligence (AGI) Research
Computer scientists and artificial intelligence researchers continue to develop theoretical frameworks and work on the intractable problem of AGI. Goertzel has defined several high-level approaches that have emerged in the field of AGI research and categorizes them as follows:
- Symbolic: A symbolic approach to AGI holds that symbolic thought is “the crux of human general intelligence” and “precisely what lets us generalize most broadly.”
- Emergentist: An emergentist approach to AGI centers on the idea that the human brain is essentially a set of simple elements (neurons) that self-organize in complex ways in response to the body’s experience. A similar form of intelligence might therefore emerge from re-creating a similar structure.
- Hybrid: As the name suggests, a hybrid approach to AGI views the brain as a hybrid system in which many distinct elements and principles work together to create something in which the whole is greater than the sum of its parts. By nature, hybrid AGI research varies considerably in its approaches.
- Universalist: A universalist approach to AGI concentrates on “the mathematical essence of general intelligence” and the notion that once AGI is solved in the theoretical sphere, the principles used to solve it can be scaled down and used to construct it in reality.
The Future of Artificial General Intelligence (AGI)
When we will be able to attain AGI (or whether we will be able to create it at all) is a topic of much debate. Several notable computer scientists and entrepreneurs believe that AGI will be created within the next few decades:
- Louis Rosenberg, CEO and principal scientist of Unanimous AI, predicted in 2020 that AGI would be attained by 2030.
- Ray Kurzweil, Google’s director of engineering and a pioneer of pattern recognition technology, predicts that AI will attain “human levels of intelligence” in 2029 and surpass human intelligence by 2045.
- Jürgen Schmidhuber, co-founder and principal scientist at NNAISENSE and director of the Swiss AI lab IDSIA, estimates that AGI will be achieved around 2050.
However, the future of AGI remains an open-ended question and an ongoing research pursuit, with some scholars even arguing that AGI cannot and will never be realized. AI researcher Goertzel has explained that it’s difficult to objectively measure the progress toward AGI, as “there are many different routes to AGI, involving integration of different sorts of subsystems” and there is no “thorough and systematic theory of AGI.” Rather, it’s a “patchwork of overlapping concepts, frameworks, and hypotheses” that are “often synergistic and sometimes mutually contradictory.”
In an interview on the topic of AGI’s future, Sara Hooker of research facility Cohere for AI said, “It really is a philosophical question. So, in some respects, it’s a very challenging time to be in this profession, because we’re a scientific discipline.”
What Is an Example of Artificial General Intelligence (AGI)?
Researchers from Microsoft and OpenAI assert that GPT-4 could be an early but incomplete example of AGI. Because AGI has not yet been achieved, future examples of its application might include situations that require a high level of cognitive function, such as autonomous vehicle systems and advanced chatbots.
How Far Off Is Artificial General Intelligence (AGI)?
Because artificial general intelligence (AGI) is still a theoretical concept, estimates of when it might be realized vary. Some AI researchers believe it is impossible, while others assert that it is only a matter of decades before AGI becomes a reality.