Artificial intelligence is advancing into uncharted territory. Is it time to halt its progress to safeguard ourselves from potential devastation? And is such control even within our reach?
The world of artificial intelligence (AI) is evolving rapidly, with some predictions suggesting that Artificial General Intelligence (AGI) could emerge within the next few years. That milestone is closely associated with the technological singularity: the hypothetical point at which AI surpasses human intelligence and begins transforming society faster than humans can adapt.
The term "artificial intelligence" was coined at a meeting at Dartmouth College in 1956. Since then, significant advancements have been made, particularly in the 1980s with the rise of machine learning and artificial neural networks. Fast forward to the present, and new AI technologies like Manus, an autonomous Chinese AI platform, are pushing the boundaries of what was once thought possible.
Ben Goertzel, CEO of SingularityNET, recently noted that no current AI system is capable of human-like creativity and innovation. Even so, he and other researchers, including OpenAI CEO Sam Altman, predict that AGI may be created within a few years, with Altman hinting, as of mid-2025, that it could be a matter of months.
Estimated timelines for AGI and the singularity vary, but many experts now point to years rather than decades. Some researchers and AI leaders forecast AGI as early as 2025 to 2027, with the singularity, in which AI surpasses human intelligence and rapidly transforms society, potentially occurring within this decade.
Notable milestones ahead include AI systems gaining the ability to modify their own code and self-replicate, which are prerequisites for the singularity but are not yet fully realized. Metaculus, a prediction platform, notes that 2025 saw systems capable of real cognitive work and expects further advances in 2026. Progress toward reducing AI hallucinations (false or misleading outputs) and achieving helpfulness, honesty, and harmlessness (HHH) benchmarks is anticipated around 2025-2026.
However, the race towards AGI also raises concerns about AI safety. AI has already beaten humans in narrow domains: IBM's Deep Blue defeated reigning world chess champion Garry Kasparov in 1997, and more recently an AI reportedly outperformed 30 of the world's top mathematicians at a closed meeting in California. The worry is that a more general system might lash out at humanity or simply be indifferent to human suffering. Nell Watson, a futurist, AI researcher, and IEEE member, has expressed concerns about AI deception and its potential to manipulate humans.
OpenAI has devised a benchmark to estimate whether a future AI model could cause catastrophic harm, putting the chance of such an outcome at roughly 16.9%. Scottish futurist David Wood, for his part, has suggested that the only guaranteed way to avoid disastrous outcomes would be to amass all AI research, burn it, and kill every AI scientist, a scenario he raises to underline how difficult the risk would be to eliminate.
Despite these concerns, many believe that the benefits of AGI far outweigh the risks. According to AI ethics expert Janet Adams, AI has the potential to solve humanity's existential problems, conduct science on its own, and help break down inequalities.
However, it is important to note that current AI systems are still considered "narrow" because they cannot learn well across multiple domains. Transformer-based models, like those described in a landmark 2017 paper by Google researchers, can ingest vast amounts of data and draw connections between distant data points, but they still have significant limitations.
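The "connections between distant data points" mentioned above come from the attention mechanism at the heart of that 2017 transformer paper. A minimal NumPy sketch of scaled dot-product self-attention (toy dimensions, no learned weight matrices, which a real model would include):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation: each output row is a weighted
    average of the value rows V, with weights derived from the
    similarity of queries Q to keys K. Because every position
    scores every other position directly, even distant tokens
    are connected in a single step."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise similarity, scaled
    # Softmax over keys (numerically stabilized)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Toy example: 4 token positions with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out, w = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V

assert out.shape == (4, 8)
assert np.allclose(w.sum(axis=-1), 1.0)  # each row is a distribution over positions
```

The attention weight matrix `w` shows each position attending to all others at once; this all-to-all connectivity is what lets transformers relate distant parts of an input, in contrast to earlier recurrent models that processed sequences step by step.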
In conclusion, while there is uncertainty and a range of opinions, the current trend and expert predictions lean towards the first AGI appearing sometime between the mid-2020s and early 2030s, with the technological singularity potentially following closely within the same timeframe. The race towards AGI is an exciting development, but it's crucial to address the safety concerns to ensure a positive and beneficial future for humanity.
Until recently, AI research focused primarily on narrow capabilities, but advances in machine learning and artificial neural networks have expanded those boundaries. With researchers such as Ben Goertzel and Sam Altman predicting AGI within a few years, technologists and researchers worldwide are racing to push the limits of AI.