
AI biggest challenge in fact-checking elections around the world – experts

Lorenz Pasion



Vanja Cerimagic/International Fact-Checking Network

Tech platforms need to be more involved in developing tools and not just 'pay lip service'

The fast evolution of artificial intelligence is seen as the biggest challenge in fact-checking elections around the world, members of the International Fact-Checking Network (IFCN) said on Wednesday, June 26, at the Global Fact 11 conference in Sarajevo, Bosnia and Herzegovina.

In a panel discussion titled “Defending Truth at the Ballot Box: Fact-Checkers’ Frontline Role,” Global Fact attendees shared their struggles in fighting disinformation in their respective elections, pointing out that content made with AI tools is more difficult to prove fake.

Jency Jacob, managing editor of India-based organization BOOM, said countering the danger of deepfakes is “like fighting tanks with sticks and stones” as the technology gets more complex.

“Some of the videos that we have seen this election, while fact-checkers do really have the ability to look at it with naked eye and figure it out, but that’s not gonna be enough when writing our story and why it’s a deepfake, so we really need good tools to do that,” Jacob said.

Uphill battle vs. AI

Warning other countries yet to hold their elections this year, Jacob told the IFCN community that “over 70%” of AI-generated disinformation during India’s recently concluded national elections came in the form of “voice clones,” which, he said, are a cheaper way to produce false claims. (READ: Record-breaking: At least 50 elections in 2024 – from US to Indonesia to Taiwan)

The India-based fact-checker said he has observed that fact-checking tools have a “hard time” identifying whether a voice is a clone or a deepfake when ambient noise or music is added, making audio claims more difficult to debunk.

Agreeing with Jacob, PolitiFact editor-in-chief Katie Sanders said that AI auto-detection tools are “spotty” and told the IFCN community that voice clones are a “factor to watch” in the US presidential polls set for November 2024.

Sanders said fact-checkers are limited in what they can do about AI-generated content because they do not have enough primary material to debunk it.

“Without the helpful tools, I don’t think we are quite ready for that…So much of our debunking now is just showing the full context,” Sanders said.

Andrew Dudfield, head of AI at United Kingdom-based FullFact and moderator of the Global Fact panel discussion, said that even after his organization exhausts all of its efforts to debunk a piece of synthetic audio, the result can still feel insufficient because detection tools are not completely reliable.

“In FullFact, we are at the point of we can speak to experts, run it through detectors and we cannot say, authoritatively, whether this is synthetic audio or not. When fact-checks have to say, as we often do, ‘it’s complicated, we are not 100% certain,’ ambiguity creeps in and that is a real challenge,” Dudfield said.

Better tools needed

Dudfield also said that fact-checkers “deserve the best technology” and that better detection capabilities are incredibly important for fact-checking.

To remedy the limitations fact-checkers currently face against AI, Jacob said tech platforms need to be more involved in developing tools and not just “pay lip service.”

“All the good software engineers, all the good engineers are in tech platforms so why is it so difficult for them to figure out an antidote for this?” Jacob asked. – Rappler.com

Lorenz Pasion

Lorenz Pasion is a researcher at Rappler and a member of its fact-check team that debunks false claims that spread on social media.