As the general election campaign begins in earnest, we can expect disinformation attacks to target voters, especially in communities of color. This has happened before: In 2016, for example, Russian disinformation programs focused on black Americans, created Instagram and Twitter accounts disguised as black voices and produced fake news sites like blacktivist.info, blacktolive.org, and blacksoul.us.
Technological advances will make these efforts harder to recognize. Similar fake accounts and websites can now feature hyper-realistic videos and photos intended to sow racial division and mislead people about their voting rights. With the advent of generative AI, such content can be produced at little or no cost, adding to the kind of misinformation that has long targeted communities of color.
This is a problem candidates, election offices and voter education groups will confront in the coming months. But ultimately, voters themselves will have to figure out what is real and what is fake, whether the content is human-made or generated by artificial intelligence.
For immigrants and communities of color, who often face language barriers, distrust of democratic systems and limited access to technology, the challenge is likely to be even greater. Across the country, and especially in states like California with large immigrant communities and many residents with limited English proficiency, government needs to help these groups identify and avoid misinformation.
Asian Americans and Latinos are particularly at risk. About two-thirds of America's Asian and Pacific Islander population are immigrants, and a Pew Research Center report states that “86% of Asian immigrants ages 5 and older say they speak a language other than English at home.” The same dynamics apply to Latinos: only 38% of the foreign-born Latino population in the United States is proficient in English.
Targeting non-English-speaking communities has many advantages for those spreading misinformation. These groups are often isolated from the mainstream news sources that have the greatest resources to debunk deepfakes and other misinformation, and they prefer to engage online in their native languages, where moderation and fact-checking are less prevalent. Some 46% of Latinos in the US use WhatsApp, while many Asian Americans prefer WeChat, which Wired magazine reported is “used by millions of Chinese Americans and people with friends, family or businesses in China, including as a political organizing tool.”
Misinformation targeting immigrant communities is poorly understood and difficult to track and counter, yet it is becoming ever easier to create. In the past, producing fake content in languages other than English required labor-intensive human work and was often of low quality. Now, AI tools can generate hard-to-trace misinformation in any language at lightning speed, free of the constraints and telltale errors of human production. Yet much of the research on misinformation and disinformation focuses on English-language content.
Attempts to target communities of color and non-English speakers with misinformation are fueled by many immigrants' heavy reliance on their cell phones to access the Internet. Mobile interfaces are particularly conducive to misinformation because the design and branding cues that signal a source's credibility on desktop are scaled down in favor of content on small screens. With 13% of Latinos and 12% of African Americans depending on mobile devices for broadband access, compared with 4% of white smartphone owners, these groups are more likely to receive and share false information.
Previous efforts by social media companies to counter voter misinformation have failed. Meta's announcement in February that it would label AI-generated images on Facebook, Instagram and Threads is a positive but small step toward eliminating AI-generated misinformation, especially for racial and immigrant communities that may know little about its effects. It is clear that a stronger government response is needed.
The California Initiative for Technology and Democracy, or CITED, where we serve on the board, will soon unveil a legislative package that would require broader transparency around AI-produced content and ensure social media users know which video, audio and image content was made with artificial intelligence tools. The bills would also require labeling of AI-generated political misinformation on social media, ban the use of the technology in campaign ads close to elections and restrict anonymous trolls and bots.
Additionally, CITED plans to hold a series of community forums throughout California with partner organizations rooted in their regions. The groups will speak directly to leaders in communities of color, labor leaders, local elected officials and other trusted messengers about the dangers of AI-generated false information likely to circulate this election season.
The hope is that this information will be conveyed at the community level, making voters in the state more aware and skeptical of false or misleading content, and building confidence in the election process, election results, and our democracy.
Bill Wong is a campaign strategist and author of Better to Win: Hard-hitting Lessons in Leadership, Influence, and the Craft of Politics. Mindy Romero is a political sociologist and director of the Center for Inclusive Democracy at the University of Southern California's Price School of Public Policy.