California usually sets the tone for national policy, but the recently passed AI Transparency Act (SB 942) is an example of what not to do. Despite its well-intentioned aim of addressing harmful AI-generated content, the law misunderstands both the mechanics of AI and how people respond to deceptive media. Instead of offering practical solutions, it imposes impractical burdens on users and providers, stifles innovation, and falls far short of its promises. The act serves as a cautionary tale for the rest of the country about legislation that fails both to protect society and to promote progress.
The law requires providers of generative AI tools with over one million monthly users to offer three key features: (1) a free detection tool that lets users upload content, such as images, videos, or audio, to check whether it was generated or altered by the provider's AI system; (2) an option that gives users the choice to add a clear, visible, and unremovable label to any AI-generated content, indicating that the content was created or altered by an AI system; and (3) an automatic, hidden, non-removable label embedded in AI-generated content that includes the provider's name, the AI system version, the content's creation or modification date and time, and a unique identifier. Non-compliant companies face a $5,000 daily fine.
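To make the third requirement concrete, here is a minimal sketch of what such a hidden disclosure might contain. The statute names the categories of information (provider, system version, timestamp, unique identifier) but not a format, so the field names, JSON encoding, and `build_disclosure` helper below are illustrative assumptions, not anything SB 942 or any provider actually prescribes.

```python
import json
import uuid
from datetime import datetime, timezone

def build_disclosure(provider: str, system_version: str) -> dict:
    """Assemble the categories of information SB 942 asks for.

    The statute lists the provider's name, the AI system version, the
    creation/modification timestamp, and a unique identifier; the JSON
    layout and key names here are illustrative assumptions only.
    """
    return {
        "provider": provider,
        "system_version": system_version,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "content_id": str(uuid.uuid4()),
    }

# Example: a hypothetical provider tagging a freshly generated image.
disclosure = build_disclosure("ExampleAI", "image-gen-2.1")
print(json.dumps(disclosure, indent=2))
```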
The law's biggest flaw is its reliance on watermarks, a unique signal embedded in AI-generated content. Watermarks, however, are vulnerable to manipulation and circumvention. For instance, cropping an AI-generated image can remove a visible watermark, while more sophisticated editing can erase even the most robust invisible watermarks. Unless watermarking technology advances significantly, mandating its use risks creating a gap between regulatory intentions and technical feasibility.
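A quick experiment shows how little it takes to defeat the most straightforward implementations. The sketch below assumes the visible label is a corner overlay and the hidden disclosure is a PNG metadata chunk, one plausible but hypothetical design; both disappear after an ordinary crop and re-save. Defeating stronger pixel-level watermarks takes more effort, but the same principle applies.

```python
# Toy demonstration (requires Pillow) of how fragile naive disclosures are.
# The corner label and PNG text chunk are hypothetical designs, not SB 942's spec.
import json
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

# Pretend this is AI-generated output with a visible corner label.
img = Image.new("RGB", (800, 600), "gray")
ImageDraw.Draw(img).text((10, 580), "AI-generated", fill="white")

# Embed the hidden disclosure as PNG metadata.
meta = PngInfo()
meta.add_text("ai_disclosure", json.dumps({"provider": "ExampleAI"}))
img.save("labeled.png", pnginfo=meta)

# An ordinary crop removes the visible label, and re-saving without
# copying the metadata silently drops the hidden disclosure too.
cropped = Image.open("labeled.png").crop((0, 0, 800, 560))
cropped.save("cropped.png")

print(Image.open("labeled.png").text)   # {'ai_disclosure': '...'}
print(Image.open("cropped.png").text)   # {} -- disclosure gone
```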
Even if watermarks someday become accurate, reliable, and universally applied to all AI-generated content, they simply won't prevent many high-risk scenarios, because people are unlikely to check for watermarks in moments of crisis. For instance, a common AI-driven threat is the voice cloning scam, in which victims receive hyper-realistic calls or recordings from someone posing as a friend or family member urging them to respond to an urgent situation. These scams succeed because people often act emotionally, not logically, in stressful moments. Even with widely available and user-friendly AI watermark detection tools, panic can override caution, making watermarks ineffective against some of the most serious risks.
Defenders of the law may argue that regulation has to start somewhere, but this legislation risks making the problem worse. Each AI model will have its own watermark detection tool, so checking a piece of content for a watermark doesn't tell users whether a watermark exists, only whether a watermark from that particular model is present. If users upload suspicious content to a detection tool and it doesn't find a watermark (because the content wasn't generated by that specific platform), they may be falsely reassured. Indeed, users could get negative results from dozens of detection tools, not because the tools don't work, but because the content wasn't generated by any of those AI models, thereby creating the perception that fake material is legitimate. Rather than reducing risks, the law could unintentionally amplify the harm it aims to prevent by reinforcing false claims of authenticity.
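The false-reassurance dynamic is easy to see in miniature. In the hypothetical sketch below, the provider names and the `detects_watermark` helper are invented for illustration; each provider-specific detector correctly reports that its own watermark is absent, yet the natural reading of the combined output, "no watermark anywhere, so it must be real," is exactly the wrong conclusion for content made by an unregulated model.

```python
# Hypothetical per-provider detectors: each one can only recognize its own
# provider's watermark, mirroring the per-provider tools SB 942 mandates.
def detects_watermark(content: bytes, provider: str) -> bool:
    # Stand-in for a real detector; this content carries no watermark
    # because it came from an unregulated model outside the law's reach.
    return False

suspicious_clip = b"audio from a small, out-of-state voice-cloning model"
providers = ["ProviderA", "ProviderB", "ProviderC"]  # invented names

results = {p: detects_watermark(suspicious_clip, p) for p in providers}
print(results)  # {'ProviderA': False, 'ProviderB': False, 'ProviderC': False}

# The tempting but wrong inference the law encourages:
if not any(results.values()):
    print("No watermark found by any tool -- looks authentic?")
```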
Moreover, the law applies only to large AI providers, leaving smaller systems unregulated and creating loopholes that bad actors could exploit. Unregulated models could be used to produce content without watermarks, introducing significant gaps that weaken the law's effectiveness. California's jurisdiction also stops at its borders, putting California-based companies at a disadvantage due to compliance costs that out-of-state and foreign competitors can easily avoid. As a result, instead of curbing harmful AI practices, the law risks driving innovation elsewhere while failing to address the borderless nature of AI risks.
Finally, the law creates new challenges by placing an undue burden on AI startups, forcing them to develop and deploy watermark detection tools on a tight timeline, with less than 16 months to comply before the law takes effect in January 2026. Early-stage companies are now forced to divert resources away from innovation that could benefit their customers and improve their competitiveness with foreign rivals. Instead, they must prioritize compliance with ineffective rules that ultimately stifle growth.
Rather than setting a responsible precedent for AI regulation, California is showcasing the risks of poorly designed legislation. Policymakers should find a way to balance safety with technological progress, but SB 942 only serves as a reminder of how easily regulation can misfire. For other states watching, the takeaway is clear: California's approach to regulating AI is not the way forward.