More Than 1,000 Experts Call for a Pause on the Out-of-Control Development of AI

A group of over 1,000 experts, including Elon Musk and Steve Wozniak, has issued an open letter calling for a temporary halt to the development of highly advanced AI technology. The letter specifically requests a six-month pause on the creation of AI more powerful than OpenAI’s GPT-4, citing significant risks to society and humanity.

According to the letter, there has been escalating competition among AI labs to build increasingly powerful AI systems that even their creators struggle to understand, predict, or control. The experts argue that this trend necessitates a collective effort to assess and address the potential risks associated with such advancements.

Elon Musk, who co-founded OpenAI but stepped down from its board in 2018, has expressed concerns about the organization’s shift towards a for-profit approach, particularly due to its close collaboration with Microsoft. This has raised questions about OpenAI’s adherence to its original mission of ensuring AI benefits humanity.

In response to the rapid progress of AI technology and its potential implications, Mozilla has recently announced the establishment of a startup focused on creating an independent and open-source AI ecosystem that prioritizes addressing societal concerns.

The open letter proposes that during the pause, AI labs and independent experts collaborate to develop and implement standardized safety protocols for advanced AI design and development. These protocols should undergo rigorous audits and be overseen by external independent experts to ensure accountability.

In a separate development, the UK Government has released a white paper outlining its approach to AI regulation, which emphasizes fostering innovation while also introducing measures to enhance safety and accountability. Unlike the European Union, however, the UK does not plan to establish a dedicated AI regulator.
