
OpenAI forms safety committee as former research scientist joins rival AI firm


Microsoft-backed OpenAI has created a new Safety and Security Committee to address concerns about the safety of its AI systems. The committee, led by CEO Sam Altman, board chair Bret Taylor, and board members Adam D’Angelo and Nicole Seligman, is responsible for making recommendations to the full board on critical safety and security decisions for all OpenAI projects.

OpenAI said the committee will explore how to manage the risks posed by its new AI models and future technologies. The announcement follows the May 14 departure of Ilya Sutskever, OpenAI co-founder, chief scientist, and co-lead of its “superalignment” team. News of Sutskever’s exit, announced in an OpenAI blog post, surprised AI observers worldwide, stunned Silicon Valley, and raised doubts about the company’s ability to manage the technology safely under Altman’s leadership.

The dissolution of OpenAI’s “superalignment” team followed safety concerns and disagreements over the company’s direction. OpenAI also reported that it has recently begun training its next frontier model, which it expects will bring its systems to the next level of capability.

Now, as it restructures safety work after the team’s dissolution, OpenAI has created the committee. Its first task will be to evaluate and further develop OpenAI’s safety processes and safeguards and to present its recommendations to the board within 90 days. The company announced on X (formerly Twitter) that it will later share the adopted recommendations publicly, in a manner consistent with safety and security.

Furthermore, OpenAI is preparing to train a new model expected to succeed GPT-4. The model is intended to power products such as image generators, virtual assistants, search engines, and OpenAI’s popular chatbot, ChatGPT. In the meantime, OpenAI stated in an X post that free users can already use its latest model, GPT-4o, with certain limitations: “All ChatGPT Free users can now use browse, vision, data analysis, file uploads, and GPTs.”

However, the full version is still far from release. Once training is complete, the company will spend additional months testing and fine-tuning the technology to ensure it is ready for public use.

The AI War

Forming the new committee could help ease concerns about AI safety by refocusing the company on aligning its AI systems. The committee also arrives amid the controversy over the resignation of Jan Leike, lead safety researcher on OpenAI’s Superalignment team, who criticized the company for letting safety “take a back seat to shiny things.”

Leike has since joined Anthropic, an Amazon-backed artificial intelligence start-up and OpenAI rival. These developments in a crowded AI market invite speculation about tensions not only between the two firms but also among their financial backers. Still, establishing the new OpenAI Safety and Security Committee serves to reinforce the organization’s stated commitment to responsible and safe AI development.

After recent high-profile departures and internal struggles, the committee plays a vital role in restoring stakeholder trust and ensuring that safety concerns are raised and appropriately addressed. Its mandate to review and improve OpenAI’s policies and security measures signals a proactive approach to mitigating the risks associated with advanced AI technologies.

The pledge to publicly share the committee’s adopted recommendations after the 90-day review underscores OpenAI’s commitment to transparency. Such openness matters in an industry often criticized for opaque practices and their potential social impact. By sharing its safety measures with the public, OpenAI is trying to rebuild the trust it lost when Ilya Sutskever and Jan Leike departed over AI safety concerns.

