Former OpenAI employees say whistleblower protections for AI safety are not enough
Several former OpenAI employees warned in an open letter that advanced AI companies like OpenAI stifle criticism and oversight, especially as concerns over AI safety have increased in the past few months.
The open letter, signed by 13 former OpenAI employees (six of whom chose to remain anonymous) and endorsed by “Godfather of AI” Geoffrey Hinton, formerly of Google, says that in the absence of effective government oversight, AI companies should commit to principles of open criticism. These principles include avoiding the creation and enforcement of non-disparagement clauses, facilitating a “verifiably anonymous” process for reporting issues, allowing current and former employees to raise concerns publicly, and not retaliating against whistleblowers.
The letter’s signatories argue that current whistleblower protections “are insufficient” because they focus on illegal activity rather than on concerns that, they say, are mostly unregulated. The Department of Labor states that workers who report violations involving wages, discrimination, safety, fraud, or withheld time off are covered by whistleblower protection laws, meaning employers cannot fire, lay off, demote, or reduce the hours of those workers. “Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry. We are not the first to encounter or speak about these issues,” the letter reads.
Recently, several OpenAI researchers resigned after the company disbanded its “Superalignment” team, which focused on addressing AI’s long-term risks, and after the departure of co-founder Ilya Sutskever, who had championed safety at the company. One former researcher, Jan Leike, said that “safety culture and processes have taken a backseat to shiny products” at OpenAI.
This article was originally published by The Verge (www.theverge.com).