Silicon Valley companies worry that state lawmakers are jumping the gun on regulating a still-unproven technology.
California lawmakers have amended a bill that would create new restrictions on artificial intelligence, paving the way for first-of-their-kind safety rules that could set new standards for how tech companies develop their systems.
The State Assembly’s Appropriations Committee voted on Thursday to endorse an amended version of the bill, S.B. 1047, which would require companies to test the safety of powerful A.I. technologies before releasing them to the public. California’s attorney general would be able to sue companies if their technologies caused serious harm, such as mass property damage or human casualties.
The bill has sparked fierce debate in the tech industry, with Silicon Valley giants, academics and investors taking sides on whether to regulate a nascent technology that has been hyped for both its benefits and its dangers.
Senator Scott Wiener, the author of the bill, made several concessions in an effort to appease tech industry critics like OpenAI, Meta and Google. The changes also reflect some suggestions made by the prominent A.I. start-up Anthropic.
The bill would no longer create a new agency for A.I. safety, instead shifting regulatory duties to the existing California Government Operations Agency. And companies would be liable for violating the law only if their technologies caused real harm or posed an imminent danger to public safety. Previously, the bill would have allowed companies to be punished for failing to adhere to safety regulations even if no harm had yet occurred.
“The new amendments reflect months of constructive dialogue with industry, start-up and academic stakeholders,” said Dan Hendrycks, a founder of the nonprofit Center for A.I. Safety in San Francisco, which helped write the bill.