
Anti-deepfake declaration faces scrutiny over possible AI involvement

Minnesota's anti-deepfake law faces controversy after an affidavit supporting it was found to contain apparently AI-generated text citing non-existent studies.

A legal battle over Minnesota’s “Use of Deep Fake Technology to Influence an Election” law has taken an unexpected turn, raising questions about the role of artificial intelligence (AI) in its own proceedings. Lawyers challenging the law have pointed out that an affidavit filed in support of it appears to contain AI-generated text. According to reporting by the Minnesota Reformer, the suspect passages suggest that ChatGPT or another large language model (LLM) may have been used to draft parts of the document.

Evidence under scrutiny

The affidavit in question was submitted by Jeff Hancock, founding director of Stanford University’s Social Media Lab, at the request of Minnesota Attorney General Keith Ellison. However, the content of Hancock’s declaration has raised eyebrows, particularly its references to two studies that seem to be entirely fictitious.

One of the cited studies, “The Influence of Deepfake Videos on Political Attitudes and Behavior”, was allegedly published in 2023 in the Journal of Information Technology & Politics, yet searches of that journal and other databases have turned up no trace of it. Another cited work, “Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance”, also appears not to exist. These inconsistencies suggest that an AI tool may have fabricated the sources.

The lawyers representing state Representative Mary Franson and conservative YouTuber Christopher Khols (known online as Mr Reagan) expressed their concerns in a legal filing. They stated, “The citation bears the hallmarks of being an artificial intelligence (AI) ‘hallucination,’ suggesting that at least the citation was generated by a large language model like ChatGPT.”

Implications for the affidavit’s credibility

The suspicious citations cast doubt on the reliability of Hancock’s affidavit. The filing from Franson and Khols’ legal team argued that the apparent AI-generated sources undermine the credibility of the entire document. “Plaintiffs do not know how this hallucination wound up in Hancock’s declaration, but it calls the entire document into question, especially when much of the commentary contains no methodology or analytic logic whatsoever,” the filing noted.

This revelation has added complexity to an already contentious case, which focuses on regulating the use of deepfake technology in elections. Deepfakes, which use AI to create realistic but fabricated videos, are a growing concern due to their potential to spread misinformation and manipulate public opinion.

This case highlights the challenges posed by the increasing reliance on AI in various fields, including legal and academic work. While AI tools like ChatGPT can assist with drafting documents and generating ideas, they are fallible and may produce inaccurate or entirely fictional information. Such errors can have serious implications, particularly in legal proceedings where accuracy is paramount.

The Minnesota case demonstrates the importance of verifying AI-generated information before it is used in critical contexts. As the legal challenge progresses, the role of AI in creating Hancock’s affidavit will likely remain a point of contention, potentially influencing the court’s perception of the evidence.
