Anti-deepfake declaration faces scrutiny over possible AI involvement

Minnesota's anti-deepfake law faces controversy as an affidavit supporting it shows signs of AI-generated text with non-existent citations.

A legal battle over Minnesota’s “Use of Deep Fake Technology to Influence an Election” law has taken an unexpected turn, raising questions about the role of artificial intelligence (AI) in its proceedings. Lawyers challenging the law have pointed out that an affidavit supporting the legislation appears to include text that AI might have generated. This revelation, reported by the Minnesota Reformer, suggests that AI tools like ChatGPT or other large language models (LLMs) may have played a role in creating parts of the document.

Evidence under scrutiny

The affidavit in question was submitted by Jeff Hancock, founding director of Stanford University’s Social Media Lab, at the request of Minnesota Attorney General Keith Ellison. However, the content of Hancock’s declaration has raised eyebrows, particularly its references to two studies that seem to be entirely fictitious.

One of the cited studies, The Influence of Deepfake Videos on Political Attitudes and Behavior, was allegedly published in 2023 in the Journal of Information Technology & Politics. However, searches of that journal and other databases have turned up no such study. Another cited work, Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance, also appears to be non-existent. These inconsistencies suggest that an AI tool may have fabricated the sources.

The lawyers representing state Representative Mary Franson and conservative YouTuber Christopher Kohls (known online as Mr Reagan) expressed their concerns in a legal filing. They stated, “The citation bears the hallmarks of being an artificial intelligence (AI) ‘hallucination,’ suggesting that at least the citation was generated by a large language model like ChatGPT.”

Implications for the affidavit’s credibility

The suspicious citations cast doubt on the reliability of Hancock’s affidavit. The filing from Franson and Kohls’ legal team argued that the apparent AI-generated sources undermine the credibility of the entire document. “Plaintiffs do not know how this hallucination wound up in Hancock’s declaration, but it calls the entire document into question, especially when much of the commentary contains no methodology or analytic logic whatsoever,” the filing noted.

This revelation has added complexity to an already contentious case, which focuses on regulating the use of deepfake technology in elections. Deepfakes, which use AI to create realistic but fabricated videos, are a growing concern due to their potential to spread misinformation and manipulate public opinion.

This case highlights the challenges posed by the increasing reliance on AI in various fields, including legal and academic work. While AI tools like ChatGPT can assist with drafting documents and generating ideas, they can also produce inaccurate or entirely fictional information. Such errors can have serious implications, particularly in legal proceedings where accuracy is paramount.

The Minnesota case demonstrates the importance of verifying AI-generated information before it is used in critical contexts. As the legal challenge progresses, the role of AI in creating Hancock’s affidavit will likely remain a point of contention, potentially influencing the court’s perception of the evidence.
