AGI by 2028? DeepMind co-founder's lengthy article predicts future AI developments
Definition of AGI
AGI, short for Artificial General Intelligence, has no single definition. Broadly, it refers to an AI system that can perform the cognitive tasks humans typically can, and potentially surpass them. Testing whether an AI has crossed this threshold requires a battery of tests covering a wide range of human cognition, which is difficult: the space of things humans can do is enormous, and those abilities keep evolving. To count as AGI, a system would need to perform at a human level across all such cognitive tasks, yet no test suite can cover every human cognitive ability, so judging whether AGI has been reached will require continued refinement and exploration of how we measure it. One approach to assessing intelligence is a framework that weights performance by the complexity of the task and environment (see the sketch below). Using human intelligence as the reference point for judging AI is reasonable in many respects.
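The weighting framework alluded to here is not spelled out in the article; it most likely refers to the universal intelligence measure Legg proposed with Marcus Hutter, so the following is a sketch of that measure rather than a definition taken from the post. It scores an agent \pi by its expected performance across all computable environments, weighting each environment \mu by its Kolmogorov complexity K(\mu) so that simpler environments count for more:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

Here E is the set of computable reward-bearing environments and V_\mu^\pi is the agent's expected cumulative reward in environment \mu. Weighting by complexity is what lets a single number stand in for performance over an open-ended range of tasks, rather than requiring an explicit test for every human ability.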
The Arrival of AGI
In 2011, in one of his blog posts, Shane Legg made a prediction about the timeline for the arrival of Artificial General Intelligence (AGI):
"I previously predicted that the arrival time of AGI would follow a log-normal distribution, with 2028 as the mean and 2025 as the mode. I still hold that view, provided nothing crazy happens, such as a nuclear war."
Legg explained that his prediction is based on two crucial points:
First, machine computational power will grow exponentially over the coming decades, alongside exponential growth in global data. When compute and data both grow exponentially, the value of highly scalable algorithms keeps rising, because those algorithms can make effective use of the extra computation and data.
Second, once such scalable algorithms are found and models are trained with them, the amount of data those models are trained on will far exceed what a human experiences in a lifetime.
Shane Legg believes this will be the first step toward unlocking AGI, and on that basis he puts the chance of achieving AGI before 2028 at about 50%. He acknowledges, however, that unforeseen obstacles may still arise along the way.
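As a rough sanity check on how the 2028 figure and the 50% claim fit together, here is a minimal sketch. It assumes the forecast is anchored to 2011 and reads "mean" and "mode" literally as the mean and mode of a log-normal waiting time; neither assumption comes from Legg's post, so the numbers are illustrative only.

```python
# Back-of-the-envelope check of the log-normal forecast.
# Assumptions (not from Legg's post): the forecast is anchored to 2011,
# and "mean"/"mode" are the mean and mode of a log-normal waiting time in years.
from math import exp, log, sqrt, erf

BASE_YEAR = 2011   # assumed anchor year for the forecast
MEAN_YEAR = 2028   # stated mean of the arrival distribution
MODE_YEAR = 2025   # stated mode of the arrival distribution

mean_wait = MEAN_YEAR - BASE_YEAR   # 17 years
mode_wait = MODE_YEAR - BASE_YEAR   # 14 years

# For X ~ LogNormal(mu, sigma): mean = exp(mu + sigma^2 / 2), mode = exp(mu - sigma^2).
# Solving those two equations for mu and sigma:
sigma2 = 2.0 / 3.0 * (log(mean_wait) - log(mode_wait))
mu = log(mode_wait) + sigma2
sigma = sqrt(sigma2)

def lognormal_cdf(x: float) -> float:
    """P(X <= x) for X ~ LogNormal(mu, sigma)."""
    return 0.5 * (1.0 + erf((log(x) - mu) / (sigma * sqrt(2.0))))

print(f"implied median arrival year: ~{BASE_YEAR + exp(mu):.0f}")             # ~2027
print(f"implied P(AGI by 2028):      ~{lognormal_cdf(2028 - BASE_YEAR):.0%}")  # ~57%
```

Under those assumptions the implied median arrival lands around 2027 and the probability of arrival by 2028 comes out near 57%, in the same ballpark as the "about 50%" figure quoted above.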
In Legg's view, though, the problems we face today are likely to be resolved in the coming years.
Existing models will become more refined, more factual, and more current.
Multimodality will be the future of these models, making them more useful.
However, like two sides of a coin, more capable models may also be more susceptible to misuse.
The Future of Multimodality
Lastly, Shane Legg mentions that the next milestone in the field of AI will be multimodal models.
Multimodal technology will expand the understanding capabilities of language models into broader domains.
When future generations look back on the models we have now, they might think, "Wow, those models were just chatbots that could only handle text."
Multimodal models can comprehend images, videos, and sound, allowing for a deeper understanding of what is happening when we interact with them.
It will feel like the system is truly embedded in the real world.
As models start to process large amounts of video and other content, they will gain a more fundamental understanding of the world and various implicit knowledge.