
California’s Legislature Passes Major AI Regulation to Implement Safety, Security Measures

California’s legislature passed a major artificial intelligence (AI) regulation bill, which now awaits Governor Gavin Newsom’s signature to become law.

Bill SB 1047—the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act—would create safeguards to protect critical infrastructure from cyberattacks that leverage AI, as well as create provisions to prevent AI from being used for automated crime or to develop chemical, nuclear, or biological weapons.

California State Senator Scott Wiener sponsored the legislation, which he called a “commonsense measure” to codify commitments that large AI companies—many of which are headquartered in California—have already made.

“Experts at the forefront of AI have expressed concern that failure to take appropriate precautions could have severe consequences, including risks to critical infrastructure, cyberattacks, and the creation of novel biological weapons,” Wiener’s office said in a press release. “A recent survey found 70 percent of AI researchers believe safety should be prioritized in AI research more while 73 percent expressed ‘substantial’ or ‘extreme’ concern AI would fall into the hands of dangerous groups.”

What AI Is Affected?

The legislation would apply only to covered models, which would be defined by a new oversight board—the Frontier Model Division—on 1 January 2028.

The division would be composed of five members: a member of the open-source community, a member of the AI industry, and a member of academia, all appointed by the governor; a member appointed by the Speaker of the Assembly; and a member appointed by the Senate Rules Committee.

Between now and 1 January 2027, however, a covered model means either of the following (a rough worked example follows these definitions):

  • AI models trained using a quantity of computing power greater than 10^26 integer or floating-point operations, the cost of which exceeds $100 million when calculated using average market prices of cloud compute at the start of training.

  • AI models created by fine-tuning a covered model using a quantity of computing power equal to or greater than 3 × 10^25 integer or floating-point operations.

After 1 January 2027, a covered model could mean either of the following:

  • AI models trained using a quantity of computing power determined by the Frontier Model Division, the cost of which exceeds $100 million when calculated using average market prices of cloud compute at the start of training.

  • AI models created by fine-tuning a covered model using a quantity of computing power that exceeds a threshold set by the Frontier Model Division.
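As a rough, hypothetical illustration of how the initial thresholds combine, the sketch below applies the two pre-2027 tests to invented figures. Only the numeric limits (10^26 operations, $100 million, 3 × 10^25 operations) come from the bill text; the function names and example numbers are assumptions made for illustration.

    # Illustrative check of SB 1047's initial covered-model thresholds.
    # Numeric limits come from the bill; everything else is hypothetical.

    COVERED_OPS_THRESHOLD = 1e26             # integer or floating-point operations
    COVERED_COST_THRESHOLD_USD = 100_000_000
    FINE_TUNE_OPS_THRESHOLD = 3e25

    def is_covered_model(training_ops: float, training_cost_usd: float) -> bool:
        # Pre-2027 test for a model trained from scratch: both the compute
        # and the cost thresholds must be exceeded.
        return (training_ops > COVERED_OPS_THRESHOLD
                and training_cost_usd > COVERED_COST_THRESHOLD_USD)

    def is_covered_fine_tune(base_model_is_covered: bool, fine_tune_ops: float) -> bool:
        # Pre-2027 test for a model produced by fine-tuning a covered model.
        return base_model_is_covered and fine_tune_ops >= FINE_TUNE_OPS_THRESHOLD

    # Hypothetical example: a 2e26-operation training run costing $150 million
    # would be covered, as would a 5e25-operation fine-tune of that model.
    print(is_covered_model(2e26, 150_000_000))    # True
    print(is_covered_fine_tune(True, 5e25))       # True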

In an assessment of the legislation, law firm Morgan Lewis said the bill shows California intends to be at the forefront of developing AI nationally.

“The bill is currently focused on only the largest and most powerful AI models, and, based on current computing power and cost thresholds, the requirements are less likely to impact AI startups, at least in the near term,” according to the firm.

What Steps Would Developers Have to Take?

Prior to training a covered AI model, developers would have to create the capability for the model to be shut down. They would also need to implement a written, separate safety and security protocol that does the following:

  • Provides reasonable assurance that the developer will not create a covered model or derivative model that poses an unreasonable risk of causing or enabling critical harm.

  • States compliance requirements objectively and with detail.

  • Identifies tests and test results showing that the covered model or its derivatives do not pose an unreasonable risk of causing or enabling critical harm.

  • Details testing procedures to assess the risks associated with post-training modifications.

  • Provides sufficient detail for third parties to replicate the testing procedures.

  • Describes how the developer intends to implement safeguards and security requirements.

  • Describes how the safety and security protocol could be modified.

The legislation mandates that developers “ensure that the safety and security protocol is implemented as written, including by designating senior personnel to be responsible for ensuring compliance by employees and contractors working on a covered model, monitoring, and reporting on implementation.”

Additionally, the legislation would prevent developers from using covered models—for commercial or public use—if there is an “unreasonable risk that the covered model or covered model derivative can cause or enable a critical harm,” according to the California legislative counsel’s digest.

Critical harms are defined in the legislation as:

  • Creating or using a chemical, biological, radiological, or nuclear weapon in a way that results in mass casualties.

  • Mass casualties or at least $500 million in damage resulting from a cyberattack on critical infrastructure in which a model provides precise instructions for the attack or a series of attacks.

  • Mass casualties or at least $500 million in damage resulting from an AI model acting with limited human oversight, intervention, or supervision in conduct that causes death, great bodily injury, property damage, or property loss and that, if committed by a human, would constitute a crime under the penal code requiring intent, recklessness, or gross negligence, or the solicitation or aiding and abetting of such a crime.

Beginning 1 January 2028, developers would also need to retain a third-party auditor each year to perform independent audits of their compliance with the bill’s requirements.

Other requirements include mandating that developers of covered models report any AI safety incident affecting their models or derivative models to the regulatory body, the Frontier Model Division, within 72 hours. The legislation defines an AI safety incident as one that “demonstrably increases the risk of a critical harm occurring” by the following means:

  • A covered model autonomously engages in behavior other than that requested by users.

  • Theft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of the model weights of a covered model.

  • Critical failure of technical or administrative controls, including those limiting the ability to modify a covered model.

  • Unauthorized use of a covered model to cause or enable critical harm.

As with Know Your Customer (KYC) requirements in finance, the legislation also creates provisions for developers to gather information about potential customers who seek to use resources to train a covered model. These include obtaining the prospective customer’s basic identifying information and business purpose; means and sources of payment (financial institution, credit card number, account number, customer identifier, transaction identifiers, or virtual currency wallet or address identifier); and email address and telephone contact information to verify the customer’s identity.

Developers would also be required to assess whether prospective customers intend to use computing clusters to train a covered model, retain customers’ Internet Protocol (IP) addresses used for access or administration, maintain these customer records for at least seven years, and create the ability to enact a shutdown of resources being used to train or operate models under customers’ control.
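A minimal sketch, assuming Python and entirely hypothetical field names, of the kind of customer record these provisions would have a developer collect and retain; none of the names below are taken from the bill text.

    from dataclasses import dataclass, field

    @dataclass
    class ProspectiveCustomerRecord:
        # Hypothetical record of the information SB 1047 would have developers
        # collect about prospective customers of training resources.
        legal_name: str
        business_purpose: str
        payment_financial_institution: str
        payment_account_or_card_number: str
        payment_transaction_identifiers: list[str]
        virtual_currency_wallet: str
        email_address: str
        telephone_number: str
        intends_to_train_covered_model: bool
        ip_addresses_used: list[str] = field(default_factory=list)  # access/administration

    RETENTION_YEARS = 7  # minimum retention period stated in the bill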

What Are the Penalties for Non-Compliance?

Following feedback from stakeholders, legislators amended the bill to allow the California attorney general or labor commissioner to bring civil actions against violators and create whistleblower protections for reports of wrongdoing.  

In these proceedings, the attorney general or labor commissioner could recover up to 10 percent of the cost of the quantity of computing power used to train the covered model, and up to 30 percent of that value for subsequent violations. They could also seek injunctive or declaratory relief, monetary damages, attorney’s fees, and other relief that the court deems appropriate.
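As a hypothetical worked example of those caps (the dollar figure is invented, not from the bill): if the computing power used to train a covered model cost $100 million, recovery would be capped at $10 million for a first violation and $30 million for subsequent violations.

    # Illustrative penalty-cap arithmetic; the percentages come from the bill,
    # the training-compute cost is hypothetical.
    def max_penalty(training_compute_cost_usd: float, first_violation: bool) -> float:
        return training_compute_cost_usd * (0.10 if first_violation else 0.30)

    print(max_penalty(100_000_000, first_violation=True))   # 10000000.0
    print(max_penalty(100_000_000, first_violation=False))  # 30000000.0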

The legislation also voids any contracts or agreements that attempt to waive or shift liability to a person or entity in exchange for using the developer’s products or services.

Who Opposes the Bill?

While the legislation passed in California with bipartisan support, many prominent figures in politics and AI research oppose the bill, including computer scientist Fei-Fei Li and U.S. Representative Nancy Pelosi (D-CA).

“The view of many of us in Congress is that SB 1047 is well-intentioned but ill informed,” Pelosi said in a press release. “While we want California to lead in AI in a way that protects consumers, data, intellectual property, and more, SB 1047 is more harmful than helpful in that pursuit.”

The AI Alliance—which includes IBM and Meta—issued a letter opposing the legislation. Members said the bill would “slow innovation, thwart advancements in safety and security, and undermine California’s economic growth. The bill’s technically infeasible requirements will chill innovation in the field of AI and lower access to the field’s cutting edge, thereby directly contradicting the bill’s stated support ‘…to ensure that artificial intelligence innovation…is accessible to academic researchers and startups, in addition to large companies.’”

Reuters reports that other major AI developers, including Alphabet’s Google and OpenAI (the creator of ChatGPT), also oppose the legislation.

Governor Newsom’s office did not immediately respond to Security Management’s request for comment on this story. Newsom has until 30 September 2024 to sign or veto the bill.

 
