Whether we realize it or not, most of us use artificial intelligence (AI) every day. Every time you do a Google search or ask Siri a question, you are using AI. The catch, however, is that the intelligence these tools provide is not really intelligent. They don't think and understand the way people do. Rather, they analyze massive datasets for patterns and correlations.
This is not to take anything away from AI. As Google, Siri, and hundreds of other tools demonstrate daily, current AI is incredibly useful. But in the end, it is not very intelligent. Today's AI only gives the appearance of intelligence; it lacks true understanding or consciousness.
For today's AI to overcome its inherent limitations and evolve into its next phase, defined as artificial general intelligence (AGI), it must be able to understand or learn any intellectual task that a human can. That will allow it to continually grow in intelligence and ability, in the same way that a human three-year-old grows to possess the intelligence of a four-year-old, and ultimately a 10-year-old, a 20-year-old, and so on.
The true future of AI
AGI represents the true future of AI technology, a fact that has not escaped numerous companies, including names such as Google, Microsoft, Facebook, Elon Musk's OpenAI, and the Kurzweil-inspired Singularity.net. The research all of these companies are doing relies on intelligence models with varying degrees of specificity, built on today's AI algorithms. Surprisingly, however, none of these companies is focused on developing the basic, underlying AGI technology that replicates a human's contextual understanding.
What will it take to get to AGI? How will we give computers an understanding of time and space?
The main limitation of all the research currently under way is that it cannot understand that words and images represent physical things that exist and interact in a physical universe. Today's AI cannot comprehend the concept of time or the fact that causes have effects. These basic problems have yet to be solved, perhaps because it is difficult to obtain large amounts of funding to solve problems any three-year-old can solve. We humans are great at merging information from multiple senses. A three-year-old will use all of its senses to learn about stacking blocks, and it learns about time by experiencing it, by interacting with toys and the real world in which it lives.
Similarly, AGI will need sensory pods to learn such things, at least at first. The computers need not be located in the pods but could be connected remotely, since electronic signals are vastly faster than those in the human nervous system. The pods, however, would provide the opportunity to learn firsthand about stacking blocks, moving objects, performing sequences of actions over time, and learning from the consequences of those actions. With sight, hearing, touch, manipulators, and so on, AGI could learn to understand in ways that are simply impossible for a purely text-based or purely image-based system. Once AGI has acquired this understanding, the sensory pods may no longer be necessary.
The costs and risks of AGI
At this stage, we cannot quantify the amount of data that might be needed to represent true understanding. We can only look at the human brain and speculate that some reasonable percentage of it must be devoted to understanding. We humans interpret everything in the context of everything else we have already learned, which means that as adults we interpret everything within the context of the true understanding we acquired in the first years of life. Only when the AI community takes the unprofitable steps of recognizing this fact and conquering the fundamental basis of intelligence will AGI be able to emerge.
The AI community must also weigh the potential risks that could accompany achieving AGI. AGIs are necessarily goal-directed systems that will inevitably exceed whatever goals we set for them. At least initially, those goals can be set for the benefit of humanity, and AGI will deliver enormous benefits. If AGIs are weaponized, however, they are likely to be effective in that arena as well. The concern here is not so much Terminator-style individual robots as an AGI mind capable of strategizing even more destructive ways of controlling humankind.
An outright ban on AGI would simply shift development to countries and organizations that refuse to recognize the ban. An AGI free-for-all, on the other hand, would likely lead to nefarious individuals and organizations harnessing AGI for catastrophic purposes.
How soon could all this happen? Although there is no consensus, AGI could be here soon. Consider that only a small percentage of the human genome (which amounts to approximately 750MB of information) defines the brain's entire structure. That means a program containing less than 75MB of information could fully represent the brain of a newborn, with all of its human potential. When we recall that the seemingly far more complex Human Genome Project was completed much sooner than anyone realistically anticipated, emulating the brain in software in the near future should be well within the grasp of a development team.
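The 75MB figure is easy to sanity-check with a back-of-the-envelope calculation. The short Python sketch below assumes a genome of roughly 3.1 billion base pairs at two bits each and, purely for illustration, that about 10 percent of it relates to brain structure; both numbers are illustrative assumptions, not figures from the text above.

```python
# Back-of-the-envelope arithmetic behind the "75MB" claim.
# Assumptions: ~3.1 billion base pairs at 2 bits each, and a guess
# that roughly 10% of the genome relates to brain structure.

BASE_PAIRS = 3.1e9        # approximate size of the human genome
BITS_PER_BASE = 2         # A, C, G, T encode to 2 bits each

genome_mb = BASE_PAIRS * BITS_PER_BASE / 8 / 1e6   # bits -> bytes -> megabytes
brain_fraction = 0.10                               # assumed brain-defining share
brain_spec_mb = genome_mb * brain_fraction

print(f"Whole genome: ~{genome_mb:.0f} MB")              # ~775 MB, close to the 750MB figure
print(f"Brain-defining portion: ~{brain_spec_mb:.0f} MB") # roughly 78 MB, on the order of 75MB
```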
Similarly, a breakthrough in neuroscience at any time could lead to the mapping of the human neurome. A human neurome project, after all, is already in the works. If that project progresses as quickly as the Human Genome Project did, it is fair to conclude that AGI could emerge in the very near future.
Although the timing may be uncertain, it is fairly safe to assume that AGI will emerge gradually. That means Alexa, Siri, or Google Assistant, all of which are already better at answering questions than the average three-year-old, will eventually be better than a 10-year-old, then an average adult, then a genius. With the benefits of each advance outweighing any perceived risks, we may disagree about the point at which the system crosses the threshold of human equivalence, but we will continue to appreciate, and expect, each new level of progress.
The enormous technological effort being devoted to AGI, combined with rapid advances in computing horsepower and continuing breakthroughs in neuroscience and brain mapping, suggests that AGI will emerge within the next decade. This means that systems of unimaginable mental power are inevitable in the decades that follow, whether we are ready or not. Given that, we need a frank discussion about AGI and the goals we would like to achieve, in order to reap its maximum benefits and avoid any possible risks.
Charles Simon, BSEE, MSCS is a nationally recognized entrepreneur and software developer, and the CEO of FutureAI. Simon is the author of Will Computers Revolt? Preparing for the Future of Artificial Intelligence, and the developer of Brain Simulator II, a software platform for AGI research. For more information visit https://futureai.guru/Founder.aspx.
New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to email@example.com.
Copyright © 2022 IDG Communications, Inc.