
AI Workers Call for Enhanced Whistleblower Protections in Open Letter

What To Know

  • A group of current and former employees from leading AI companies like OpenAI, Google DeepMind, and Anthropic has signed an open letter advocating for greater transparency and protection against retaliation for those who voice concerns about AI.
  • Former OpenAI researcher Daniel Kokotajlo resigned from the company, citing a loss of confidence in its ability to responsibly develop artificial general intelligence (AGI), a term for AI systems that are as smart as or smarter than humans.
  • He emphasized the lack of oversight and the danger of silencing researchers, who are currently among the few people positioned to warn the public about these risks.
  • In a statement to Bloomberg, an OpenAI spokesperson said the company is proud of its “track record providing the most capable and safest AI systems” and believes in its “scientific approach to addressing risk.”

A group of current and former employees from leading AI companies like OpenAI, Google DeepMind, and Anthropic has signed an open letter advocating for greater transparency and protection against retaliation for those who voice concerns about AI.

The letter, published on Tuesday, emphasizes accountability and the critical role employees play in providing it. It points out that, in the absence of effective government oversight, employees are among the few who can hold these corporations accountable, yet broad confidentiality agreements often prevent them from speaking out, except within the very companies that may be ignoring these issues.

This letter comes shortly after a Vox investigation revealed that OpenAI had tried to silence departing employees by making them choose between signing a stringent non-disparagement agreement and losing their vested equity in the company.

Following the report, OpenAI CEO Sam Altman said he was embarrassed by the provision and that it had been removed from recent exit documentation, though its current status for some former employees remains unclear.

The 13 signatories of the letter include former OpenAI employees Jacob Hilton, William Saunders, and Daniel Kokotajlo. Kokotajlo resigned from the company, citing a loss of confidence in OpenAI’s ability to responsibly develop artificial general intelligence (AGI), a term for AI systems that are as smart as or smarter than humans.

The letter, endorsed by prominent AI experts Geoffrey Hinton, Yoshua Bengio, and Stuart Russell, expresses deep concerns about the lack of effective government oversight for AI and the financial incentives driving tech giants to rapidly advance this technology.

The authors warn that unchecked development of powerful AI systems could lead to misinformation, increased inequality, and a potential loss of human control over autonomous systems, which might even result in human extinction.

Kokotajlo wrote on X that much remains unknown about how these systems work and whether they will stay aligned with human interests as they become more intelligent, potentially surpassing human-level performance in all areas. He emphasized the lack of oversight and the danger of silencing researchers, who are currently among the few people positioned to warn the public about these risks.

In a statement to Bloomberg, an OpenAI spokesperson said the company is proud of its “track record providing the most capable and safest AI systems” and believes in its “scientific approach to addressing risk.”

The spokesperson added that rigorous debate is crucial given the significance of this technology and that OpenAI will continue to engage with governments, civil society, and other communities worldwide.

The signatories are urging AI companies to commit to four key principles:

  1. Refraining from retaliating against employees who voice safety concerns.
  2. Supporting an anonymous system for whistleblowers to alert the public and regulators about risks.
  3. Fostering a culture of open criticism.
  4. Avoiding non-disparagement or non-disclosure agreements that restrict employees from speaking out.

This letter arrives amid increasing scrutiny of OpenAI’s practices, including the disbanding of its “superalignment” safety team and the departures of key figures such as co-founder Ilya Sutskever and Jan Leike, who criticized the company for prioritizing “shiny products” over safety.