New York state lawmakers have approved the RAISE Act, a landmark bill designed to prevent advanced artificial intelligence systems from causing large-scale disasters. Passed on Thursday, the legislation targets so-called “frontier AI models” — powerful systems developed by companies like OpenAI, Google, and Anthropic — and seeks to impose mandatory transparency and safety reporting requirements for models deployed in New York and trained with massive computing power.
The bill is a significant victory for the AI safety movement, whose calls for stricter oversight have often been drowned out by the tech industry’s push for innovation at speed. Advocates such as Nobel Prize-winning computer scientist Geoffrey Hinton and AI pioneer Yoshua Bengio have endorsed the RAISE Act as a critical step toward ensuring public safety as artificial intelligence evolves.
The bill’s provisions are narrow but potent. It applies only to developers whose models are trained using more than $100 million in computing resources — a threshold designed to exclude startups and academic labs — and made available to New York residents. Should these companies fail to comply with the law’s safety obligations, they could face civil penalties of up to $30 million, enforced by the state’s attorney general.
If signed into law by Governor Kathy Hochul, the RAISE Act would require qualifying companies to produce detailed safety and security documentation for their AI models. It would also obligate them to disclose any significant safety incidents, including those involving malicious actors stealing AI technology or systems behaving unpredictably. These requirements mark the first state-level legal framework in the U.S. focused specifically on regulating advanced AI systems.
The bill avoids some of the more contentious elements of other proposals, notably California’s SB 1047, which was vetoed last year. Unlike SB 1047, New York’s legislation does not require AI developers to include a “kill switch” or hold them accountable for harms caused after post-training modifications. It also steers clear of regulating small companies and university labs — a key criticism of California’s effort.
According to Senator Andrew Gounardes, a co-sponsor of the bill, the legislation was crafted with precision to protect innovation while mitigating worst-case AI scenarios. “The window to put in place guardrails is rapidly shrinking given how fast this technology is evolving,” Gounardes said. “The people who know AI the best say these risks are incredibly likely. That’s alarming.”
Still, the tech industry has mounted considerable opposition. Venture capitalists, including the firm Andreessen Horowitz, have labeled the bill counterproductive and harmful to U.S. competitiveness in AI. Anjney Midha, a general partner at the firm, described the measure on X as “yet another stupid, stupid state-level AI bill.” Anthropic co-founder Jack Clark, while stopping short of a formal position, expressed concern over the bill’s broad language and its potential impact on smaller companies — despite the $100 million compute threshold, which is intended to exclude them.
New York State Assemblymember Alex Bores, who co-sponsored the legislation, dismissed the industry’s reaction as overblown. He emphasized that the regulatory burden is light and that no economic rationale supports withholding AI products from New York. “I don’t want to underestimate the political pettiness that might happen, but I am very confident that there is no economic reason for [AI companies] to not make their models available in New York,” Bores said.
Critics have pointed to Europe’s strict tech regulations — which have prompted some AI developers to withhold their products from the region — as a cautionary example. However, supporters of the RAISE Act argue that New York’s economic significance, with the third-largest GDP of any U.S. state, makes it highly unlikely that companies would risk withdrawing from such a large market.
The bill now awaits Governor Hochul’s decision. She can sign it into law, send it back for revisions, or issue a veto. If enacted, New York would become the first U.S. state to codify AI safety standards of this magnitude, potentially setting a template for other jurisdictions seeking to balance innovation with public protection.
In an era where AI technologies are rapidly reshaping industries and societies, the RAISE Act may be a pivotal early move in defining how — and how much — governments should regulate artificial intelligence. Whether other states or federal lawmakers will follow remains an open question.