Why Sam Altman is leaving the OpenAI security committee

In May of this year, the technology company announced the creation of a safety committee whose mission would be to provide recommendations on safety and security for OpenAI projects and operations. But because Altman was the one leading it, the committee was met with distrust from the start.

OpenAI recently announced the members of its Safety and Security Committee. Sam Altman, co-founder and CEO of the company, is no longer on the list.

In an announcement published on its official blog, the artificial intelligence company said the group would become an independent oversight committee of the board of directors. The group will be chaired by Zico Kolter, director of the machine learning department at Carnegie Mellon University.

Other members are Adam D’Angelo, co-founder and CEO of Quora; Paul Nakasone, retired US Army general; and Nicole Seligman, former President and General Counsel of Sony Corporation.

Altman is not the only one to have left the revamped committee. OpenAI Board Chairman Bret Taylor, along with other technical and policy experts, also stepped down.

What is the OpenAI Safety and Security Committee

OpenAI, backed by tech giant Microsoft, has seen notable growth since the launch of its ChatGPT tool in November 2022.

It has since also faced a wave of scrutiny and resignations from senior figures, many of whom have warned that OpenAI is not doing enough to control the dangers of artificial intelligence.

In May this year, the company disbanded the team responsible for mitigating long-term AI risks, just a year after its creation, according to CNBC. Jan Leike, one of the team’s leaders, said after his departure that at OpenAI, “safety culture and processes have taken a backseat to shiny products.”

That same month, OpenAI announced the creation of a safety committee. Its mission, as described on the company’s blog, is to make recommendations “on critical safety and security decisions for OpenAI projects and operations.”

OpenAI launched ChatGPT in 2022. Photo: REUTERS/Dado Ruvic.

Initially, the committee was made up of Altman and three board members: Taylor, D’Angelo and Seligman. OpenAI technical and policy experts were also members: Aleksander Madry, Head of Preparedness; Lilian Weng, Head of Safety Systems; John Schulman, Head of Alignment Science; Matt Knight, Head of Security; and Jakub Pachocki, Chief Scientist.

But given that Altman was responsible for overseeing himself and all of his projects at OpenAI, the announcement didn’t inspire much confidence.

Why Sam Altman left the OpenAI security committee

From its creation, the independent body’s first mandate was to evaluate OpenAI’s processes and safeguards over a 90-day period and then formulate recommendations.

In mid-September, the California-based company revealed its initial recommendations in five key areas. First on the list was the establishment of independent governance for safety and security.

As Time noted, because Altman not only sits on OpenAI’s board of directors but is also responsible for overseeing business operations as CEO, he could no longer remain on the safety committee.

According to OpenAI’s statement, this recommendation also specifies that the group will receive “information from company leaders on security assessments of significant model releases” and will exercise oversight over these processes. This last element includes “the power to delay a launch until security issues are resolved.”

The committee’s other recommendations call for strengthening security measures, being transparent about OpenAI’s work (including communicating the capabilities and risks of its models), collaborating with external organizations, and unifying the company’s safety frameworks for model development and monitoring.

Photo: Sam Altman.

OpenAI controversies

In August, OpenAI expressed opposition to a legislative initiative that would impose stricter safety requirements on California’s largest artificial intelligence companies.

Under the bill, known as SB 1047, companies in the sector would have to ensure that their AI systems can be shut down and that these technologies do not cause disasters. Companies that fail to meet these requirements could face legal action.

In a letter cited by Bloomberg, Jason Kwon, chief strategy officer at OpenAI, noted that “the AI revolution is only just beginning, and California’s unique status as a global leader in AI is driving the state’s economic dynamism.” He said the bill would “threaten that growth” and force the company’s employees to leave California “in search of better opportunities elsewhere.”

Shortly afterward, two former OpenAI employees who had raised concerns about the company’s safety practices sent a letter to California Governor Gavin Newsom criticizing OpenAI’s opposition to the bill.

Another situation that has put the startup in the spotlight is that it is on the verge of one of the most significant changes in its history.

Founded in 2015 as a non-profit organization, it became a capped-profit entity four years later. Now, the company that created ChatGPT is reportedly looking to turn its core business into a for-profit company that will no longer be controlled by its non-profit board, Reuters reported.

Reportedly, the transformation could not only increase investor interest, but could also impact how the company manages risks related to artificial intelligence.

Source: La Tercera
