By Thomas R. Bundy III & Andrew D. Herman
Thomas R. Bundy III co-founded the law firm Lawrence & Bundy where he focuses on defense-side, commercial litigation and internal investigations. Andrew D. Herman is chair of Lawrence & Bundy’s political law group where he focuses on elections and other political activity.
Shortly after his inauguration, Maryland Gov. Wes Moore (D) visited a research institute addressing artificial intelligence, machine learning, and virtual and augmented reality. He touted the project as “a perfect example of how Maryland can become more economically competitive by creating opportunities through innovative partnerships.” As the state embraces the promise of AI, however, it must also address the risks presented by the technology.
For example, AI is a major element in the current Hollywood strikes. SAG-AFTRA’s president, Fran Drescher, summarized the concern: “If we don’t stand tall right now, we are all going to be in trouble, we are all going to be in jeopardy of being replaced by machines.”
Other public figures who rely on visual media for promotion will also confront this issue. But unlike Hollywood talent, elected officials can address the threat unilaterally. A recent editorial in The Washington Post summarized the problem: “Get ready for lots of literally unbelievable campaign ads. AI could wreak havoc on elections.” As such, Maryland’s elected officials should move decisively on this issue.
AI’s threat to political discourse is real. Candidates for the Republican presidential nomination have already shared AI-enabled parodies mocking their opponents, and the Republican National Committee recently aired a fake video depicting a future hellscape under President Biden. Some of these ads disclosed the use of AI; others did not.
And things can get worse. As the elections draw closer, the temptation to fabricate more extreme ads may prove irresistible. After all, if an AI-enabled deception is effective, it’s far easier to ask for forgiveness afterward, especially if no specific legal constraints exist. The wide latitude courts currently grant to political speech hampers effective responses to these tactics. Victory in a defamation suit months after an election will provide little recompense for a losing candidate smeared by an AI invention.
Further, the last decade has provided a raft of foreign attempts to interfere with domestic elections through social media and other venues. It’s not hard to envision foreign actors deploying AI in 2024 to wreak havoc and discredit American candidates and officeholders.
The best solution would, of course, be a federal law imposing nationwide standards for the use of AI in political discourse, penalizing violations, and authorizing victims to remove clear violations expeditiously. In May, Sen. Amy Klobuchar (D-Minn.) and Rep. Yvette Clarke (D-N.Y.) introduced bills in their respective chambers. The REAL Political Advertisements Act would require full disclosure of AI-generated content in political ads. Other, more restrictive proposals, including a bill establishing criminal punishment for creation of “fake electronic media that appears realistic,” have fizzled in Congress. Capitol Hill’s current dysfunction makes it unlikely that Congress will impose effective reforms soon.
The chance for regulation in the executive branch is slightly better. In June, the regulator with authority to address this issue, the Federal Election Commission, deadlocked on proposed regulations on political ads using AI. The FEC tried again this August, seeking public comment on a request for a rulemaking specifying that using false AI-generated content, or “deepfakes,” in campaign ads violates the federal prohibition on fraudulent misrepresentation of campaign authority. Although he voted to publish this request, Commissioner Allen Dickerson said that AI remains an issue for Congress, identifying “serious First Amendment concerns lurking in the background of this effort.”
Things are more promising in the states, as California, Minnesota, Texas, and Washington have all enacted restrictions on AI use since 2019. While these laws vary in scope, they present a variety of options for Maryland to emulate.
Existing state laws establish the pillars of a sound AI policy that will survive First Amendment scrutiny from the federal courts, especially a skeptical Supreme Court. An effective law should include the following elements:
- A sound and clear description of what constitutes deceptive use of AI in audio/visual media.
- Time limits on application. In Minnesota, California, and Texas, the limits are 90, 60, and 30 days before an election, respectively.
- A safe harbor for most AI uses if the advertisement discloses the manipulation. Washington’s law, for example, contains specific rules to ensure that viewers can see and understand the disclosure.
- Criminal penalties for willful violation of the law.
- And, vital to protecting election integrity, provisions enabling affected candidates to seek an immediate injunction preventing display of the ad, along with significant damages for willful infringers.
Whether AI technology will improve the world or create a “Skynet” dystopia is unclear. That it will profoundly affect how candidates run for office and how political actors compete in the marketplace of ideas is not. With elections approaching, Maryland’s elected officials should address this issue expeditiously.