Three reasons Wisconsin's approach to AI is a model for other states
A bipartisan win on generative AI and political ads
Last week, Wisconsin Governor Tony Evers signed Assembly Bill 664 into law, an important step towards preventing the abuse of synthetic content in Wisconsin elections. It isn’t a complicated policy: by requiring the disclosure of AI-generated audio or visual content in political ads, AB 664 takes a balanced approach to safeguarding Wisconsin voters’ informed election choices, starting this November. In doing so, the newly minted law offers policymakers, especially at the state level, a model for a logical first step towards oversight of the intersection of AI and elections in the U.S.
Three reasons why AB 664 is worth celebrating:
AB 664’s introduction and passage was a bipartisan effort. The bill was introduced by nine Democrats and fifteen Republicans in the Wisconsin Assembly, along with two Democrats and three Republicans in the Senate. That shows that even in a politically fraught election year, policymakers can find common ground on this issue.
AB 664 focuses on the formats (audio and video) of synthetic content that are especially likely to confuse voters accustomed to trusting their eyes and ears. While questioning text-based information is nothing new, audiences are only starting to adjust to audio and visual synthetic media that is increasingly indistinguishable from human-created content. In high-stakes political communications, voters can no longer rely on their senses alone. By requiring that political ads with audio or video synthetic media clearly disclose they “contain content generated by AI,” AB 664 will help Wisconsin voters distinguish the synthetic from the authentic in political ads in real time.
Finally, the Wisconsin bill focuses on disclosing, rather than prohibiting, the use of synthetic content, which makes it less likely to run afoul of the First Amendment. Because approaches to regulating synthetic content are nascent, significant questions remain, including what types of generative AI-focused limitations or mitigations could survive a First Amendment challenge, particularly when limits apply to specific categories of content. But while those questions exist for mandatory disclosure requirements like AB 664’s, they are fewer and less concerning than those surrounding bans on the use of synthetic content, no matter how narrow. In other words, policies that focus on disclosure are more likely to avoid the First Amendment issues that accompany approaches that censor or suppress speech.
Big picture, AB 664’s passage is a win. That’s not to say it is a silver bullet that addresses all the ways synthetic content can be used to amplify and accelerate threats to the election information environment. There isn’t one solution – in policymaking or otherwise – that can fully address these AI-enabled challenges on its own.
That said, AB 664 is a hopeful example of a bipartisan, focused, and free speech-respecting step towards safeguarding elections in the era of AI.
Here’s to more state lawmakers following in Wisconsin’s footsteps.