AI Bias in Voting
In the 2024 presidential election between Donald J. Trump and Kamala Harris, we are once again witnessing sophisticated AI-driven disinformation campaigns that target vulnerable communities, echoing the experiences of voters like my friend Eva in 2020. In chapter 9 of Hidden in White Sight: How AI Can Deepen and Empower Systemic Racism, I describe how attackers used AI to manipulate the voting process and discourage voters like Eva from reaching the polls.
Eva, a 63-year-old retired factory worker, was preparing to vote for the first time. However, hours before heading to the polls, she saw a Facebook post claiming her polling place was closed due to “water damage.” Other posts from supposed local voters seemed to confirm this, with pictures of “Closed” signs and accounts that appeared authentic. Attackers created these realistic profiles to appear as neighbors and friends of Eva’s, drawing her in with what seemed like firsthand experiences. To add to the confusion, some of Eva’s actual friends unknowingly shared and corroborated the false information, falling victim to the disinformation themselves.
The campaign extended beyond social media. Local radio and television stations began reporting the closures, noting only that the claims had not yet been verified. By drawing in real people and local media, the attackers made the fabricated story even more convincing, blurring the line between truth and disinformation. This AI-driven manipulation exploited the trust Eva placed in her friends and familiar news sources, leading her to believe the polling place closure was real.
Deepfake technology, meaning AI-generated synthetic images and videos, was central to this disinformation campaign. Attackers used it to create realistic social media profiles and posts that looked like they came from Eva's neighbors and local voters. These AI-driven personas, complete with profile pictures, comments, and interactions, felt personal and familiar. By building up these fake profiles over time, the attackers created a network that appeared trustworthy, fooling even Eva's actual friends, who unwittingly shared and amplified the disinformation.
As we face another pivotal election in 2024, it’s critical to recognize that AI-driven disinformation remains a powerful tool for voter suppression. The tactics have only grown more sophisticated, targeting individuals and communities with even greater precision. Today’s disinformation campaigns exploit existing biases and vulnerabilities, creating new obstacles to democratic participation.
AI holds immense potential, but without ethical oversight, it can also be weaponized to undermine democracy. Let's advocate for policies and platforms that combat disinformation, educate voters, and ensure transparency in AI applications. We cannot allow the voices of communities like Eva's to be silenced by algorithms. To read Eva's full story, visit https://a.co/d/6I9irBs