Why it matters: Generative AI presents enormous potential for misuse, with scams and cyber attacks on financial systems among the most obvious risks. However, a new study indicates that the leading category of misuse is influencing political opinion with false content. The tactic has already surfaced in past campaigns, and we can expect it to become even more prevalent in this election cycle.

A new study by Google DeepMind shows that AI-generated political content is a far more likely misuse of the technology than a cyber attack. DeepMind based its conclusions on an analysis of reported cases of GenAI misuse between January 2023 and March 2024. One example: a deepfake video of Joe Biden continued making the rounds last year even after it had been publicly identified as fake.

Chances are good that we will see more examples of this form of manipulation as the political campaign season heats up. The study found that shaping public opinion was the most common goal for exploiting GenAI capabilities, making up 27 percent of all reported cases. Malicious actors could deploy several tactics to distort the public's perception of political realities, including impersonating public figures, creating falsified media, and using synthetic digital personas to simulate grassroots support for or against a cause – otherwise known as astroturfing.

Bad actors could easily manipulate legitimate videos to depict electoral candidates appearing visibly aged and unfit for leadership. A more complex approach would be for a skilled AI artist to fabricate a video from scratch that puts an opponent in a compromising position.

The report notes that an emerging, though less prevalent, trend is the undisclosed use of AI-generated media by political candidates and their supporters to construct a positive public image. One example is a Philadelphia sheriff who used generative AI to fabricate positive news stories for her campaign website.

Political players also use generative AI for hyper-targeted political outreach, such as simulating a politician's voice with high fidelity to reach constituents in their native languages or deploying AI-powered campaign robocallers to engage in tailored conversations with voters on critical issues.

These tactics might sound familiar because political campaigns used them long before generative AI existed. The difference is that rapid advances in recent AI models give these age-old tactics new potency and put them within reach of far more actors.