No Easy Fix: InfoSci Experts Discuss Bad Election Information and How to Combat It

Image: Abstract image of U.S. flags and stage lighting. Image by Mongkol, courtesy Adobe Stock.

American elections in the digital age bring realities that voters have come to accept, such as round-the-clock robocalls from political campaigns, candidate town halls held over Zoom and text-message reminders about voter registration.

But modern elections in America also come with misinformation and disinformation that spread rapidly with the help of social media and other digital amplifiers.

With the presidential election nearing, University of Arizona experts in the College of Information Science explain the different types of false online information, why there are no easy solutions and what voters can do.

Misinformation vs. Disinformation

Although they often appear together, the terms misinformation and disinformation do not describe the same thing.

Image: Diana Daly, Associate Professor of Practice and Associate Dean, Undergraduate Academic Affairs and Student Success.

The distinction largely comes down to intent, but that distinction can be almost meaningless in complex information ecosystems, says Diana Daly, an associate professor of practice in the College of Information Science who studies digital culture and how people share their belief systems online.

Misinformation is defined as false information spread without any intent to deceive, perhaps by accident and with a genuine belief in its accuracy. Disinformation is information that is known to be false and is spread deliberately to deceive. Today, Daly added, the two are often mixed and spread together, with bad actors quickly seizing on misinformation to spread disinformation.

A Kernel of Truth

Many false election-related narratives are based on some truth, which complicates the fact-checking process, Daly said.

A recent example of this, Daly said, is the false claim that Haitian migrants in Springfield, Ohio, have been eating their neighbors' pets. The falsehood was amplified by former President Donald Trump during the Sept. 10 presidential debate. His running mate, Ohio Sen. JD Vance, later admitted the claim may be false but urged supporters to continue spreading it.

Media outlets have debunked the claim numerous times, citing a statement by the city's police department that "no credible reports or specific claims" have been made matching the rumors.

In this case, Daly said, bad actors relied on truths—including that more Haitian immigrants have come to the city than the city may be prepared for, and that racism perpetuates false beliefs about immigrants in general and Haitians in particular—to conclude that immigrants were eating pets.

"Those conclusions were false, but they were based on real situations and feelings," Daly said. "Politicians then exploit those feelings and that misinformation with disinformation campaigns."

Yes, Your Eyes Might Actually Be Lyin'

Another complication of modern elections: audio, images and video generated by artificial intelligence that appear to show politicians saying or doing things they never did.

For example, before Vice President Kamala Harris announced her candidacy, a robocall using an AI-generated imitation of President Joe Biden's voice urged Democratic voters to skip the New Hampshire primary and save their votes for the general election. The Federal Communications Commission has since declared AI-generated voices in robocalls illegal.

Image: Michael McKisson, Associate Professor of Practice and Director of Undergraduate Studies.

Often called deepfakes or synthetic media, fake audio, images and videos are simply the newest version of an old idea: media-based manipulation, said Michael McKisson, an associate professor of practice in the College of Information Science who teaches visual storytelling.

Not long ago, Photoshop was at the center of many media ethics discussions, said McKisson, who spent 15 years on the U of A School of Journalism faculty and is a contributing photographer to the nonprofit Tucson news outlet AZ Luminaria.

AI makes the kind of manipulation once done with Photoshop vastly faster and easier, he said, and an election year adds politics to the mix.

"When you're talking about politics and trying to manipulate people into voting a certain way or feeling a certain way, that's where things really become a problem," McKisson said.

He said we now live in an age where "everybody has to be a journalist," which often means being skeptical of just about everything you see or hear.

What Voters Can Do

Daly and McKisson both stress that there are no easy fixes to any of these issues. While knowing the source of a piece of information can help voters determine the source's intent, that is no easy task on the internet of 2024, Daly said.

"We're communicating in memes, so the idea that you'll know who the creator is—sometimes that's true, but even if the creator is someone intending to deceive, you might have a newscaster spreading it in their unique way, and then a friend interpreting that interpretation and so on," she said.

There are some tech tools that can help voters detect AI-generated videos and images, McKisson said, such as Google's reverse image search, which lets users search the internet for an image they upload, and Sightengine, an online AI-detection tool.
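
For readers who want to experiment, services like Sightengine can also be queried programmatically. The short Python sketch below is a minimal illustration, assuming Sightengine's publicly documented image-check endpoint and its AI-generation detection model; the account credentials, file name and exact response fields shown here are placeholders, so check the service's current documentation before relying on them.

# Minimal sketch: ask Sightengine's image-check API whether a photo looks AI-generated.
# Assumptions: the check.json endpoint and "genai" model follow Sightengine's public docs;
# the credentials, file name and response field names below are placeholders, not verified values.
import requests

API_USER = "your_api_user"      # placeholder: credential from a Sightengine account
API_SECRET = "your_api_secret"  # placeholder credential

def check_ai_generated(image_path: str) -> float:
    """Upload a local image and return the service's AI-generated likelihood score."""
    with open(image_path, "rb") as f:
        response = requests.post(
            "https://api.sightengine.com/1.0/check.json",
            files={"media": f},
            data={"models": "genai", "api_user": API_USER, "api_secret": API_SECRET},
        )
    result = response.json()
    # The score is expected under result["type"]["ai_generated"]; confirm the exact
    # field names against Sightengine's current documentation.
    return result.get("type", {}).get("ai_generated", 0.0)

if __name__ == "__main__":
    score = check_ai_generated("campaign_photo.jpg")  # hypothetical file name
    print(f"Estimated probability the image is AI-generated: {score:.2f}")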

But those tools, he added, are only useful until AI technology finds new ways around them—which seems to happen almost daily.

"People really want there to be some sort of tech fix for the tech," McKisson added. "All that becomes is a technological arms race."

Instead, McKisson said, voters consuming political information should be skeptical of anything that gives them a strong emotional reaction or the urge to share immediately. Most people would also do well to understand that these issues are likely here to stay.

"We're not going to solve it; we're going to combat it," he said.

Since so many of these issues hinge on personal belief systems, Daly said, combating misinformation and disinformation will likely require a society-wide effort. That can begin with people seeking out beliefs that don't necessarily fit their own.

"We have neighbors and coworkers who believe differently than us," she said. "So, rather than focusing on our paternalistic talk at people who believe the wrong things, we should be focusing our open-hearted talk at everyone."