One of Silicon Valley’s heaviest hitters is wading into the fight over California’s AI regulations, Politico reported.
Y Combinator — the venture capital firm and startup accelerator that brought us Airbnb, Dropbox, and DoorDash — today issued its opening salvo against a bill by state Sen. Scott Wiener that would require large AI models to undergo safety testing.
Wiener, a San Francisco Democrat whose district includes YC, says he’s proposing reasonable precautions for a powerful technology. But the tech leaders at Y Combinator disagree, and are joining a chorus of other companies and groups that say the bill will stifle California’s emerging marquee industry.
“This bill, as it stands, could gravely harm California’s ability to retain its AI talent and remain the location of choice for AI companies,” read the letter, which was signed by more than 140 AI startup founders.
It’s the first time the startup incubator, led by prominent SF tech denizen Garry Tan, has publicly weighed in on the bill. Its leaders argue it could hurt the many fledgling companies Y Combinator supports — about half of which are now AI-related.
Adam Thierer posted “Coalition Letter on California SB-1047, ‘The Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act’” on R Street:
Dear Senator Wiener and members of the California State Legislature,
We, the undersigned organizations and individuals, are writing to express our serious concerns about SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act. We believe that the bill, as currently written, would have severe unintended consequences that could stifle innovation, harm California’s economy, and undermine America’s global leadership on AI.
Our main concerns with SB 1047 are as follows:
The application of the precautionary principle, codified as a “limited duty exemption,” would require developers to guarantee that their models cannot be misused for various harmful purposes, even before training begins. Given the general-purpose nature of AI technology, this is an unreasonable and impractical standard that could expose developers to criminal and civil liability for actions beyond their control.
The bill’s compliance requirements, including implementing safety guidance from multiple sources and paying fees to fund the Frontier Model Division, would be expensive and time-consuming for many AI companies. This could drive businesses out of California and discourage new startups from forming. Given California’s current budget deficit and the state’s reliance upon capital gains taxation, even a marginal shift of AI startups to other states could be deleterious to the state government’s fiscal position…
Y Combinator also posted a separate letter addressed to Senator Wiener and two legislators who sit on key committees. Here is an excerpt from that letter:
Liability and regulation that is unusual in its burdens: The responsibility for the misuse of LLMs should rest with those who abuse these tools, not with the developers who create them. Developers cannot predict all possible applications of their models, and holding them liable for unintended misuse could stifle innovation and discourage investment in AI research. Furthermore, creating a penalty of perjury would mean that AI software developers could go to jail simply for failing to anticipate misuse of their software – a standard of product liability no other product in the world suffers from.
In my opinion, Y Combinator clearly has concerns about California’s proposed AI safety rules, though I’m not sure why the company is so upset about the state requiring safety protocols for AI.