Regulating AI Catastrophic Risk Isn't Easy
AI, Security Experts Discuss Who Defines the Risks, Mitigation Efforts

An attempt by the California statehouse to tame the potential catastrophic risks of artificial intelligence hit a roadblock when Governor Gavin Newsom vetoed the measure late last month.
Supporters said the bill, SB 1047, would have required developers of AI systems to think twice before unleashing runaway algorithms capable of causing large-scale harm (see: California Gov. Newsom Vetoes Hotly Debated AI Safety Bill).
The governor's veto was a "setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet," said bill author Sen. Scott Wiener. Critics said the bill was too blunt a measure and would have stymied the state's AI tech industry.
Though the veto leaves AI safety regulation in the same place it was when state legislators convened earlier this year, proponents and detractors alike are asking the same question: What next?
One obstacle the pro-regulation camp must grapple with is the lack of a widely accepted definition of "catastrophic" AI risk. Little consensus exists on how realistic or immediate the threat is, with some experts warning of AI systems running amok and others dismissing such concerns as hyperbole.
Catastrophic risks are those that cause a failure of the system, said Ram Bala, associate professor of business analytics at Santa Clara University's Leavey School of Business. They could range from endangering all of humanity to more contained impacts, such as disruptions affecting only enterprise customers of AI products, he told Information Security Media Group.
Deming Chen, professor of electrical and computer engineering at the University of Illinois, said that if AI were to develop a form of self-interest or self-awareness, the consequences could be dire. "If an AI system were to start asking, 'What's in it for me?' when given tasks, the results could be severe," he said. Unchecked self-awareness might drive AI systems to manipulate their abilities, leading to disorder and potentially catastrophic outcomes.
Bala said that most experts see these risks as "far-fetched," since AI systems currently lack sentience or intent and likely will continue to lack them for the foreseeable future. But some form of catastrophic risk might already be here. Eric Wengrowski, CEO of Steg.AI, said that AI's "widespread societal or economic harm" is already evident in disinformation campaigns built on deepfakes and manipulated digital content. "Fraud and misinformation aren't new, but AI is dramatically expanding risk potential by decreasing the attack cost," Wengrowski said.
SB 1047 aimed to prevent both accidental failures and malicious misuse of AI. A key feature was a requirement that developers implement safety protocols, including cybersecurity measures and a "kill switch" allowing for the emergency shutdown of rogue AI systems. The bill also introduced strict liability for developers for any harm caused, regardless of whether they followed regulations. It determined which models fell under its purview based on how much money or computing power went into their training.
David Brauchler, technical director at NCC Group, said that computational power, model size and training cost are poor proxies for risk. Smaller, specialized models, he said, might in fact be more dangerous than large language models.
Brauchler also cautioned against alarmism, saying lawmakers should focus on preventing immediate risks, such as incorrect decisions by AI in safety-critical infrastructure, rather than hypothetical superintelligence threats. He advised targeting harm-prevention measures at present, tangible concerns rather than speculative future risks. If new dangers emerge, governments can respond with informed legislation rather than legislating pre-emptively without concrete data, he said.