Can AI Deepfakes Be Legislated? InfoSci Professor Bryan Heidorn Provides Insight for Tucson CBS Affiliate

Aug. 15, 2024
Image: Person holding smartphone with "deepfake" text. Image courtesy Adobe Stock.
Artificial intelligence-generated deepfakes are a growing problem for elected officials and citizens alike. A new Arizona law hopes to tackle AI deepfakes, and College of Information Science Professor Bryan Heidorn, director of the Center for Digital Society and Data Studies, provides insight.

Image: Bryan Heidorn, Professor, Associate Dean for Research and Graduate Academic Affairs and Director, Center for Digital Society and Data Studies.

In a TV news story by Tucson's CBS affiliate KOLD 13 News, Heidorn notes that generative AI is advancing so quickly that it is becoming harder to identify AI-generated deepfakes—highly realistic but manipulated images, videos or audio recordings. While some deepfakes still contain telltale flaws, such as extra fingers or unnatural physical features, these errors are diminishing as the technology improves. In the news story, Heidorn also touches on the potential future of deepfake detection, noting that AI itself is currently being used to identify these AI-manipulated images.

KOLD’s story, and Heidorn’s insight, are in response to new legislation in Arizona aimed at protecting citizens and political candidates from the growing threat of AI-generated deepfakes. These deepfakes can be harmless but, increasingly, they are being weaponized to damage reputations, create fake pornography and even influence elections. The state's new law, House Bill 2394, signed by Governor Katie Hobbs in May, provides legal recourse for those harmed by such malicious content.

The bill, sponsored by Republican State Representative Alexander Kolodin, offers protection under specific circumstances for Arizona citizens and candidates for public office who are:

  1. Depicted nude or engaging in a sexual act
  2. Depicted committing a crime
  3. Expected to suffer personal or financial hardship from the deepfake
  4. Likely to suffer irreparable reputational damage

Private citizens now have the ability to obtain injunctions and seek damages more swiftly. However, if the publisher of the deepfake removes the content within 21 days of a court request, no further legal action is required. Notably, politicians have fewer rights under this law; they cannot seek damages but can obtain a court declaration affirming the inauthenticity of the deepfake.

Kolodin emphasizes that the legislation aims to protect everyday people rather than suppress political expression. The law is crafted to avoid overregulating deepfakes, which could inadvertently stifle political satire and criticism, he says.

Heidorn expresses hope that future legislation might require all deepfakes to be clearly labeled, making them easier for the public to recognize. He stresses the need for caution, particularly among parents, advising them to limit the online availability of their children's images to reduce the risk of exploitation.

As Arizona's legal framework catches up with the impacts of AI, the new law represents a critical step in protecting individuals from the harmful effects of deepfakes while balancing the need for free expression.

Read or watch the full story.

Heidorn, who served as the School of Information (now College of Information Science) director from 2015 to 2019, leads graduate academic affairs and research for the college. His areas of research include natural language processing, text mining for metadata and information retrieval, particularly in biodiversity literature, and museum informatics.