Group co-led by Fei-Fei Li suggests that AI safety laws should anticipate future risks

In a new report, a California-based policy group co-led by Fei-Fei Li, an AI pioneer, suggests that lawmakers should consider AI risks that “have not yet been observed in the world” when crafting AI regulatory policies.

The 41-page interim report released on Tuesday comes from the Joint California Policy Working Group on Frontier AI Models, an effort organized by Governor Gavin Newsom following his veto of California’s controversial AI safety bill, SB 1047. While Newsom found that SB 1047 missed the mark, he acknowledged last year the need for a more extensive assessment of AI risks to inform legislators.

In the report, Li, along with co-authors UC Berkeley College of Computing Dean Jennifer Chayes and Carnegie Endowment for International Peace President Mariano-Florentino Cuéllar, argue in favor of laws that would increase transparency into what frontier AI labs such as OpenAI are building. Industry stakeholders from across the ideological spectrum reviewed the report before its publication, including staunch AI safety advocates like Turing Award winner Yoshua Bengio, as well as those who argued against SB 1047, such as Databricks co-founder Ion Stoica.

According to the report, the novel risks posed by AI systems may necessitate laws that would force AI model developers to publicly report their safety tests, data acquisition practices, and security measures. The report also advocates for increased standards around third-party evaluations of these metrics and corporate policies, in addition to expanded whistleblower protections for AI company employees and contractors.

Li et al. write that there’s an “inconclusive level of evidence” for AI’s potential to help carry out cyberattacks, create biological weapons, or bring about other “extreme” threats. They also argue, however, that AI policy should not only address current risks, but anticipate future consequences that might occur without sufficient safeguards.

“For example, we do not need to observe a nuclear weapon [exploding] to predict reliably that it could and would cause extensive harm,” the report states. “The stakes and costs for inaction on frontier AI at this current moment are extremely high.”

The report recommends a two-pronged strategy to boost AI model development transparency: trust but verify. AI model developers and their employees should be provided avenues to report on areas of public concern, the report says, such as internal safety testing, while also being required to submit testing claims for third-party verification.

While the report, the final version of which is due out in June 2025, endorses no specific legislation, it’s been well received by experts on both sides of the AI policymaking debate.

Dean Ball, an AI-focused research fellow at George Mason University who was critical of SB 1047, said in a post on X that the report was a promising step for California’s AI safety regulation. It’s also a win for AI safety advocates, according to California State Senator Scott Wiener, who introduced SB 1047 last year. Wiener said in a press release that the report builds on conversations around AI governance that began in the legislature [in 2024].

The report appears to align with several components of SB 1047 and Wiener’s follow-up bill, SB 53, such as requiring AI model developers to report the results of safety tests. Taking a broader view, it seems to be a much-needed win for AI safety folks, whose agenda has lost ground in the last year.
