Misinformation researcher admits AI errors in court filing

Misinformation expert Jeff Hancock admits to AI errors in a court filing, defends his core arguments, and regrets citation mistakes caused by ChatGPT.

A leading misinformation expert has admitted that he used ChatGPT to assist in preparing a legal document, introducing errors that critics say undermine the filing’s reliability. Jeff Hancock, founder of the Stanford Social Media Lab, acknowledged the mistakes but insisted they do not affect the document’s core arguments.

The case and the controversy

Hancock’s affidavit was submitted in support of Minnesota’s “Use of Deep Fake Technology to Influence an Election” law, which is currently being challenged in federal court. The law is being contested by Christopher Kohls, a conservative YouTuber known as Mr. Reagan, and Minnesota state Representative Mary Franson. Their legal team flagged the filing, alleging that some of its citations didn’t exist and calling the document “unreliable.”

In response, Hancock filed a follow-up declaration admitting that he had used ChatGPT to help organise his sources. While he denies using the AI tool to write the document itself, he conceded that the citation errors stemmed from the AI’s so-called “hallucinations”, instances in which a model generates plausible-sounding but false information.

Hancock’s defence

In his latest statement, Hancock defended the overall integrity of his filing. “I wrote and reviewed the substance of the declaration, and I stand firmly behind each of the claims made in it,” he said. He emphasised that his arguments were based on the most up-to-date academic research and reflected his expert opinion on how artificial intelligence influences misinformation.

Hancock explained that he used Google Scholar and GPT-4 to identify research articles relevant to his declaration. While the process was meant to combine his existing knowledge with new insights, it inadvertently produced two citations to non-existent articles and a third that listed incorrect authors.
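
The failure mode described here, citations that read plausibly but point to nothing, is mechanically checkable against bibliographic databases. As a rough illustration only (not a tool Hancock or the court used), the sketch below queries the public Crossref API for each citation string and flags those with no close match; the sample citation strings and the word-overlap heuristic are assumptions for demonstration.

```python
# Minimal sketch: flag citation strings with no close match in
# Crossref's index. The /works endpoint and its query.bibliographic
# parameter belong to Crossref's public REST API; the matching
# heuristic and example citations below are illustrative assumptions.
import json
import urllib.parse
import urllib.request


def best_crossref_match(citation: str) -> dict | None:
    """Return the top Crossref record for a free-text citation, or None."""
    query = urllib.parse.urlencode({"query.bibliographic": citation, "rows": 1})
    url = f"https://api.crossref.org/works?{query}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        items = json.load(resp)["message"]["items"]
    return items[0] if items else None


def looks_fabricated(citation: str) -> bool:
    """Crude check: treat a citation as suspect when most of its
    significant words are missing from the best-matching title."""
    match = best_crossref_match(citation)
    if match is None:
        return True
    matched_title = (match.get("title") or [""])[0].lower()
    words = [w for w in citation.lower().split() if len(w) > 3]
    hits = sum(w in matched_title for w in words)
    return hits < len(words) // 2


if __name__ == "__main__":
    # Placeholder citation strings, for illustration only.
    for c in [
        "Deepfakes and the epistemic backstop",
        "A completely invented study of deepfake persuasion effects",
    ]:
        print(f"{'suspect' if looks_fabricated(c) else 'indexed'}: {c}")
```

A flagged citation would still need human review, since Crossref does not index every venue; the point is only that fabricated references are cheap to screen for before a document is filed.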

Regret but no intent to mislead

Hancock expressed remorse for the errors, stating, “I did not intend to mislead the Court or counsel. I express my sincere regret for any confusion this may have caused.” However, he firmly stood by the document’s main points, asserting that the errors do not diminish the substance of his expert opinion.

The incident highlights ongoing concerns about the risks of relying on AI tools in sensitive contexts. Although such tools can speed up research and drafting, they can also generate errors that compromise the credibility of the work they support.

As the legal challenge progresses, it remains unclear how the court will view Hancock’s affidavit and whether the acknowledged errors will affect the case.
