OpenAI and Google DeepMind’s workers fear ‘Human Extinction’ amid AI safety concerns

On Tuesday, current and former employees of OpenAI, Alphabet Inc., and Anthropic expressed their concerns about the potential risks of emerging AI technology. Thirteen signatories, among them eleven current and former OpenAI employees and one current and one former Google DeepMind employee, have written an open letter warning that AI companies’ profit motives could, without appropriate oversight, lead to “human extinction.”

The letter stressed that existing corporate governance structures in the AI industry are inadequate to bring about the needed change. It underscored the risks of unregulated AI, including the spread of misinformation, the loss of control over autonomous AI systems, and the further entrenchment of existing social biases. The signatories consider these risks material and capable of significantly affecting society.

Companies such as OpenAI and Microsoft offer image-generation tools, yet those tools have been used to create election-related disinformation images despite the companies’ own terms of service. The letter further noted that AI companies are under little obligation to report to governments on the capabilities or limitations of their systems, and it emphasized that profit-driven AI companies cannot be trusted to disclose this information on their own.

The open letter states that artificial intelligence technology can produce written text, images, and audio that closely resemble human work almost instantaneously and at very low cost, and warns that, left unchecked, the technology could ultimately lead to the “extinction of humanity.”

The group is also demanding that AI companies establish mechanisms enabling current and former employees to voice their concerns about potential harms, something many are currently unable to do because of confidentiality agreements that bar staff from criticizing the organization. “They (AI companies) currently have only weak obligations to share some of this information with governments and none with civil society,” the letter stated.

The New York Times, which first reported the letter, noted that employees of AI companies are often unable to discuss AI-related risks publicly and argue they should be able to share their concerns anonymously, including with board members. The same concerns cost OpenAI two high-profile departures the previous month, when co-founder Ilya Sutskever and key safety researcher Jan Leike quit the company. After leaving, Leike said OpenAI had abandoned its safety culture in favor of chasing “shiny products.”

Sam Altman, CEO of OpenAI, responded to the letter after its publication and pledged to improve the company’s off-boarding process. “We’re proud of our track record of providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,” an OpenAI representative said. With formal oversight of AI still limited, the employees argued, companies themselves must be held accountable: the signatories called on AI firms to rescind nondisclosure agreements and establish safeguards that allow concerns to be raised anonymously.

ToAI Team

Fueled by a shared fascination with artificial intelligence, the Times Of AI team brings together researchers, writers, and analysts. Through in-depth analysis of the latest advancements, investigation of ethical considerations in AI development, coverage of AI governance, machine learning, data science, automation, and cybersecurity, and discussion of AI’s future impact across sectors, we aim to give readers the knowledge they need to navigate this rapidly evolving field.
