Anti-deepfake declaration faces scrutiny over possible AI involvement

Minnesota's anti-deepfake law faces controversy as an affidavit supporting it shows signs of AI-generated text with non-existent citations.

A legal battle over Minnesota’s “Use of Deep Fake Technology to Influence an Election” law has taken an unexpected turn, raising questions about the role of artificial intelligence (AI) in its proceedings. Lawyers challenging the law have pointed out that an affidavit supporting the legislation appears to include text that AI might have generated. This revelation, reported by the Minnesota Reformer, suggests that AI tools like ChatGPT or other large language models (LLMs) may have played a role in creating parts of the document.

Evidence under scrutiny

The affidavit in question was submitted by Jeff Hancock, founding director of Stanford University’s Social Media Lab, at the request of Minnesota Attorney General Keith Ellison. However, the content of Hancock’s declaration has raised eyebrows, particularly its references to two studies that seem to be entirely fictitious.

One of the cited studies, The Influence of Deepfake Videos on Political Attitudes and Behavior, was allegedly published in 2023 in the Journal of Information Technology & Politics. However, searches of that journal and of broader academic databases have turned up no trace of it. Another cited work, Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance, also appears to be non-existent. These inconsistencies suggest that an AI tool may have fabricated the sources.

The lawyers representing state Representative Mary Franson and conservative YouTuber Christopher Kohls (known online as Mr Reagan) expressed their concerns in a legal filing. They stated, “The citation bears the hallmarks of being an artificial intelligence (AI) ‘hallucination,’ suggesting that at least the citation was generated by a large language model like ChatGPT.”

Implications for the affidavit’s credibility

The suspicious citations cast doubt on the reliability of Hancock’s affidavit. The filing from Franson and Kohls’ legal team argued that the apparently AI-generated sources undermine the credibility of the entire document. “Plaintiffs do not know how this hallucination wound up in Hancock’s declaration, but it calls the entire document into question, especially when much of the commentary contains no methodology or analytic logic whatsoever,” the filing noted.

This revelation has added complexity to an already contentious case, which focuses on regulating the use of deepfake technology in elections. Deepfakes, which use AI to create realistic but fabricated videos, are a growing concern due to their potential to spread misinformation and manipulate public opinion.

This case highlights the challenges posed by the increasing reliance on AI in various fields, including legal and academic work. While AI tools like ChatGPT can assist with drafting documents and generating ideas, they are prone to ‘hallucinations’ and may produce inaccurate or entirely fictional information. Such errors can have serious implications, particularly in legal proceedings where accuracy is paramount.

The Minnesota case demonstrates the importance of verifying AI-generated information before it is used in critical contexts. As the legal challenge progresses, the role of AI in creating Hancock’s affidavit will likely remain a point of contention, potentially influencing the court’s perception of the evidence.
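For readers who want to sanity-check citations of this kind themselves, one minimal approach is to search the public Crossref REST API (api.crossref.org) for the cited title and review the closest matches. The sketch below assumes this approach; the function name and the example title are illustrative, and the article does not say how the lawyers verified the citations.

```python
import json
import urllib.parse
import urllib.request


def crossref_candidates(title: str, rows: int = 5) -> list[dict]:
    """Search Crossref for published works whose bibliographic data matches a cited title."""
    query = urllib.parse.urlencode({"query.bibliographic": title, "rows": rows})
    url = f"https://api.crossref.org/works?{query}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        items = json.load(resp)["message"]["items"]
    # Keep only the fields a human reviewer needs to judge whether the citation exists.
    return [
        {
            "title": (item.get("title") or [""])[0],
            "journal": (item.get("container-title") or [""])[0],
            "year": (item.get("issued", {}).get("date-parts") or [[None]])[0][0],
            "doi": item.get("DOI"),
        }
        for item in items
    ]


if __name__ == "__main__":
    # One of the citations questioned in the court filing.
    cited = "The Influence of Deepfake Videos on Political Attitudes and Behavior"
    for candidate in crossref_candidates(cited):
        print(candidate)
```

A lack of close matches is not proof of fabrication on its own, since books, preprints and some paywalled venues are incompletely indexed, so the results still call for human review.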
