
Open letter warns of risks, calls for global halt to Artificial Superintelligence research

The Future of Life Institute on Wednesday called for a ban on research related to Artificial Superintelligence (ASI) until a scientific consensus is reached confirming that the technology is safe and controllable, according to Anadolu Agency.

The letter by the US-based nonprofit addressed the risks posed by AI systems that could potentially surpass human intelligence in all areas.

While it acknowledged that innovative AI tools could enhance health and well-being, it warned that ASI could one day pose a threat to humanity.

ASI refers to a hypothetical form of artificial intelligence that would surpass human cognitive abilities across all domains, including creativity, problem-solving and decision-making.

The open letter stated: “We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.”

The declaration was signed by 865 individuals, including Nobel laureates, artists, politicians, business leaders and members of the British royal family.

Founded in 2014, the Future of Life Institute focuses on promoting the safe and ethical development of artificial intelligence. The group previously drew global attention in 2023 when it urged a temporary pause in training advanced AI systems such as GPT-4.
