OpenAI, a leading artificial intelligence development company, has rolled out new measures in response to growing concerns about the potential risks posed by rapidly advancing AI technology.
A notable name in the field, OpenAI was recently shaken by the dispute surrounding the brief ouster of CEO Sam Altman. Ever since, discussions about the potential threats of advanced AI have echoed throughout the industry.
New Measures by OpenAI
The adopted measures strengthen the authority of the Board of Directors, a step aimed at reducing risk. Under these provisions, the board can block the launch of an AI model it considers dangerous, even if management has vouched for its safety.
Risk Evaluation Procedure
Aleksander Madry's team is charged with regularly assessing potential risks, watching in particular for those deemed "catastrophic": risks that could cause economic damage running into hundreds of billions of dollars, or widespread harm or death.
The risk categories under scrutiny include cybersecurity as well as chemical, biological, and nuclear threats. Assessing these critical categories should leave the company better prepared for the potential dangers posed by AI.
Monthly Reports on Findings
The team will submit a comprehensive monthly report outlining its findings, providing insight into each model's capabilities and its potential impact, both positive and negative.
These measures are intended to allay the concerns of regulators, who are growing increasingly worried about the implications of AI development. The goal is to strike a balance in which the technology can flourish without jeopardizing safety or stoking unnecessary fear.