The ultimate goal of today's artificial intelligence companies is to achieve a level of intelligence equivalent to that of humans, a milestone referred to as artificial general intelligence (AGI). At that stage, AI would be capable of learning and thinking like a person: it would, among other things, possess a form of self-awareness (though not at the level of human consciousness) and an understanding of its environment, and it would be able to perform a wide range of intellectual tasks.
The pursuit of AGI has sparked numerous debates within the scientific community. Some experts believe that achieving such a form of artificial intelligence is nearly impossible, while others think we are already on the path to it. Demis Hassabis, CEO of Google DeepMind, believes in the future of AI but has also expressed concerns about the technology. Recently, Shane Legg, co-founder of the same Google AI unit, discussed AGI in an interview with podcaster Dwarkesh Patel and suggested that the technology could be developed by 2028.
A 50% Chance?
Shane Legg has long been convinced that AGI could be achieved by 2028. He first mentioned this as a possibility in 2011, provided that “nothing crazy happens, like a nuclear war.” He recently confirmed that he stands by this estimate, while putting the odds at only 50%. “I think it’s entirely plausible, but I wouldn’t be surprised if it didn’t happen by then,” he said in an interview with Futurism.
In any case, Legg believes that the computing power available today is already sufficient to achieve AGI. The next step, in his view, is to create AI training models capable of handling far more data than current models can, and indeed more data than a human could experience in a lifetime. During the interview with Dwarkesh Patel, he even said that, in his view, the AI industry is already prepared to take on this challenge.
However, even though Legg believes AGI could be achieved in the coming years, he acknowledges major obstacles. One is the difficulty of defining and testing human intelligence, given its complexity. He also points out that any definition of AGI depends on our understanding of human intelligence, and that it is hard to devise precise tests for it because it encompasses such a wide range of abilities. “You will never have a complete list of everything people can do,” he stated.
Another obstacle he mentioned is the potential difficulty of scaling AI models to a very high level. The amount of energy required alone could be problematic, given the colossal electricity consumption of current AI models.