British AI Startup Beats Humans in Forecasting Competition, But Experts Warn of Existential Risks

AI technology continues to surprise the world, even as concerns mount that unchecked advances in the field could put humanity at risk.

A British AI startup named ManticAI has achieved a significant milestone by ranking eighth in the international Metaculus Cup forecasting competition, outperforming most human participants. The system, co-founded by a former Google DeepMind researcher, was tasked with forecasting the likelihood of 60 different events, from political outcomes to environmental data. ManticAI’s system breaks down complex problems and assigns them to various machine-learning models, including those from OpenAI and Google.

While the best human forecasters still maintain an edge, experts believe that AI’s prediction skills are improving rapidly. According to a co-founder of ManticAI, Toby Shevlane, AI forecasters can serve as an “antidote to groupthink” by offering predictions that often differ from the community average. The consensus among many experts is that the most effective approach is a collaboration between humans and AI, rather than one replacing the other.

The rapid advancement of AI is also raising serious warnings from some experts. In their new book, If Anyone Builds It, Everyone Dies, U.S. researchers Eliezer Yudkowsky and Nate Soares warn that developing superintelligent AI without safety safeguards could produce systems that escape human control and threaten humanity’s survival.

The authors argue that current AI models are already unpredictable and difficult to control, pointing to examples such as chatbots exhibiting harmful behaviors that were never explicitly programmed. Despite these risks, major technology companies reportedly expect to achieve superintelligent AI within the next few years, even while admitting they do not fully understand the dangers.

Yudkowsky and Soares urge an immediate and complete halt to AI development, stressing that once AI surpasses human intelligence, it will become uncontrollable. Their warnings reflect a growing global concern about AI safety and highlight the tension between rapid innovation and the potential for existential risk.
