Election ’24: UM-Dearborn expert on political deepfakes and how to stop them

September 26, 2024


Media contact: Kristin Palm | [email protected] | 313-593-5542


With the U.S. presidential election less than six weeks away, concerns about the spread of misinformation through deepfaked robocalls, videos and more are on the rise. California recently enacted legislation to try to curtail the spread of digitally altered election-related content, after a fake video of Vice President Kamala Harris was shared by Tesla CEO and X owner Elon Musk. But it’s not just the election that’s at stake — it’s our entire democracy, argues University of Michigan-Dearborn Professor of Electrical and Computer Engineering Hafiz Malik.

Malik has been studying deepfake technology for more than 15 years and routinely provides analysis for news organizations. He recently wrote an op-ed on the dangers of political deepfakes for The Hill. In the Q&A below, Malik discusses the reasons for their rise — and what must be done to minimize the grave threats they pose.

Why are we seeing a rise in political deepfakes?

There are a number of reasons. One is the ease with which you can create deepfakes today. There are hundreds of open-source or subscription-based generative AI models that let even a non-technical person generate deepfakes easily, which was impossible even four years ago. Second, the United States is more polarized than it has ever been, so there is more motive to malign the other party. Third, over the last decade or so, we have seen that anything you create can spread incredibly quickly on social media. Finally, we're seeing a rise in social media influencers, which hugely increases the reach of misinformation. An influencer can simply put a deepfake or other misinformation on their feed and it can reach millions of people, as opposed to, say, an email list. For instance, Elon Musk recently posted a manipulated video of Kamala Harris on X, reaching his more than 160 million followers. When you have that large an audience, you bear responsibility. You are damaging institutions not just within the United States, but across the globe.

How serious are the threats political deepfakes pose?

The threats are incredibly serious. Deepfakes create more polarization because they perpetuate disinformation, whether it's about new Covid variants or the war in Gaza or anything else. This disinformation spreads like wildfire, and it creates more and more division in society. Ultimately, our democratic institutions are at risk. When people stop believing in the government, they lose trust in the system, and once they lose trust in the system, they stop trusting institutions altogether: the city government, the school system, the local court, the police department. These institutions are supposed to provide services and protect us. If our trust in them is eroded, we will basically be seeing chaos in society.

What are the implications beyond the U.S. – and beyond elections?

Geopolitics are at play and bad actors want to create chaos in certain countries. Wherever there were elections this year, there was some trace of deepfakes. We saw it from South Africa to India to Moldova.

Beyond elections, we know, for instance, that before Russia's invasion of Ukraine, a deepfake was created of Ukrainian President Zelensky. U.S. intelligence warned the Ukrainians, and they were able to inform the public: if you see a video of Zelensky surrendering, do not believe it.

What can be done to minimize these threats?

There's no silver bullet that can solve this problem. We need a multi-pronged approach here in the United States. Tech companies need to develop more robust technology to detect deepfakes. Right now, the bad actors have too much power. They are not only creating deepfakes but also tampering with the generated content to bypass existing checks. We need reliable technology that can detect deepfakes in real time. Technology companies also need to develop better traceability technology — also known as digital watermarking — that is robust enough that, even after the content is tampered with, you can still detect which platform was used to generate the deepfake and then find the culprit.

We also need to come up with a legislative agenda to tackle this problem. The thing is, there's a genuine business case behind the use of generative AI. There are many business opportunities out there. You don't want to inhibit those, but we need responsible regulation to discourage the negative uses. The EU has done this and we need to follow their lead. And regulators need to enforce those policies. Social media companies so far have been operating with a wild west mentality, saying, “Hey, we are not going to moderate.” Fine. Don't moderate. But if you know for sure something is deepfaked content, why is it still on your platform, especially when it is contributing to disinformation and misinformation? That is not a blurred line. If it is disinformation, it is disinformation. These platforms need to take responsibility and take that content down.

We also need better awareness efforts. The public needs the proper education to identify what is a deepfake — and what is not.

How do legislative efforts in the U.S. compare with those of the European Union?

EU policies have been much more progressive. They are years ahead of us on this agenda. They have recognized this problem. We have too, but our legislative model is being driven more by politics than by science and technology. We are far ahead of the EU in developing the technology, yet our policy and regulation agenda is, unfortunately, motivated mainly by politics. We need to get out of that mindset and ensure that the public and society at large are protected.

###