As AI technology advances, its power to produce realistic voice and video deepfakes poses new challenges to election integrity. This year, New Hampshire voters received calls in which a voice that sounded like President Joe Biden urged them to hold off on voting until November. The calls were fake, generated with voice-cloning AI. Political consultant Steven Kramer allegedly orchestrated the scheme, drawing a $6 million fine from the FCC and criminal charges in New Hampshire for voter suppression and impersonating a candidate.
Kramer claimed his intent was to raise awareness about the dangers of AI, but experts warn the threat is real and growing. Joe Sutherland, CEO of J.L. Sutherland & Associates, cautions that the risk from AI misinformation grows as Election Day nears, when false narratives are hardest to debunk. “On Election Day, there’s not much time to correct misinformation,” says Sutherland, who also worked in the Obama White House.
Deepfakes can prey on people’s confirmation biases, reinforcing pre-existing beliefs and reducing skepticism. To combat this, experts recommend a few strategies for identifying deepfakes:
- Use detection tools: Free tools like TrueMedia.org help identify suspicious media.
- Look for errors: Misspellings, odd grammar, and unattributed claims can be red flags.
- Analyze visuals: Check for disproportionate or distorted features in images and awkward movements or lip-syncing in videos.
- Verify with trusted sources: Local and state election officials remain reliable for fact-checking.
Awareness and vigilance are essential as voters face an election landscape increasingly shaped by AI-powered misinformation.