- Mar 13, 2025
California is on the verge of establishing pioneering safety measures for artificial intelligence (AI) systems. The initiative, which recently passed a crucial vote in the State Assembly, could set the stage for national regulations as the technology evolves at an unprecedented rate.
The legislation aims to mitigate the potential risks associated with AI by imposing safety obligations on companies developing the most powerful AI models.
This is particularly relevant in light of concerns that advanced AI technologies could be manipulated to carry out catastrophic actions, such as disrupting the state's electric grid or facilitating the creation of chemical weapons—fears that experts believe could materialize as the industry advances.
As lawmakers engage in a flurry of votes during the final week of the session, this bill has garnered mixed reactions. Governor Gavin Newsom now faces a critical decision: he can sign the bill into law, veto it, or allow it to become law without his signature.
The bill, which requires one final vote in the Senate, is one of hundreds of proposals under consideration. Supporters believe it sets necessary foundational safety standards for large-scale AI models in the U.S., targeting systems that cost more than $100 million to train—an investment threshold that no current AI models have yet reached.
Assemblymember Devon Mathis, a Republican, emphasized the need for regulations: “It’s time that Big Tech plays by some kind of a rule, not a lot, but something. The last thing we need is for a power grid to go out, for water systems to go out.”
However, the proposal has met fierce opposition from tech giants and venture capitalists, including OpenAI, Google, and Meta. Critics argue that safety regulations should come from the federal level rather than state legislation, claiming this bill misdirects focus from those who misuse AI technology.
Former House Speaker Nancy Pelosi described the bill as “well-intentioned but ill-informed,” while the industry group Chamber of Progress labeled it “based on science fiction fantasies.” Todd O’Boyle, the group’s senior tech policy director, said the legislation resembles scenarios from movies like *Blade Runner* or *The Terminator* rather than addressing real-world issues.
Interestingly, the legislation has found support from some unexpected quarters, including Anthropic, an AI startup backed by Amazon and Google. Following adjustments made to the bill, which softened some of its more stringent provisions, the company expressed that this legislation is vital for preventing the catastrophic misuse of AI systems.
Senator Scott Wiener, who authored the bill, has been transparent about his approach, stating that he seeks a “light touch” that allows for both innovation and safety to coexist. He criticized detractors for underestimating potential catastrophic risks from powerful AI models, asserting that if they truly believe these risks are unfounded, there should be no objection to the bill.
Wiener’s proposal is one of many aimed at addressing algorithmic discrimination, deepfakes, and public trust in AI. As California, home to numerous leading AI companies, continues to be a forerunner in AI technology, the state is also exploring how generative AI tools can tackle real-world issues like highway congestion and road safety.
As Governor Newsom weighs his options, one thing is clear: the conversation about AI safety is just beginning, and California is determined to lead the charge.