OpenAI Insiders Warn of 'Human extinction' Risk from AI System - Urges Better Whistleblower Protection

A group of current and former OpenAI and Google DeepMind employees say they possess "substantial, nonpublic information about the capabilities and limitations of the systems" that AI companies cannot be relied on to share voluntarily.

The claim was made in a widely reported open letter highlighting what the group described as the "serious risks" posed by AI.

These risks include the further entrenchment of existing inequalities, manipulation and misinformation, and the loss of control over autonomous AI systems, with the potential for "human extinction." The signatories lamented the lack of effective oversight and called for stronger protections for whistleblowers.

The authors of the letter said that AI can bring unprecedented benefits to society, and that the risks they highlighted can be mitigated with the involvement of scientists, policymakers, and the general public. But, they argued, AI companies have financial incentives to avoid effective oversight.

According to the group, AI companies already know a great deal about the risk levels of different kinds of harm and the adequacy of their protective measures, yet have only weak obligations to share this information with governments and none with civil society. They added that extensive confidentiality agreements block current and former employees from publicly voicing their concerns.

"Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated," they wrote.

They called on advanced AI companies to create anonymous channels through which employees can raise risk-related concerns, and to commit to not retaliating against risk-related criticism.

On May 5, Vox reported that former OpenAI employees were barred from criticizing their former employer for the rest of their lives; those who refused to sign the agreement risked losing all the vested equity they had earned during their time at the company. OpenAI CEO Sam Altman later posted on X that the standard exit paperwork would be changed.

In response to the open letter, an OpenAI spokesperson told The New York Times that the company is proud of its track record of delivering the most capable and secure AI systems and believes in a scientific approach to dealing with risks.

"Given the importance of this technology, we agree that rigorous discussions are important and will continue to engage with governments, civil society and other communities around the world," the spokesperson said.

A Google spokesperson declined to comment.

The letter was signed by 13 current and former employees. All of the current OpenAI employees who signed did so anonymously.

The AI world is no stranger to such open letters. Most famously, an open letter issued by the Future of Life Institute, signed by figures including Elon Musk and Steve Wozniak, called for a six-month pause in AI development.
