Disinformation and AI: the industrial revolution?

by Elena Perotti elena.perotti@wan-ifra.org | February 22, 2024

NewsGuard research shows that improvements in AI tools will soon allow bad actors to produce “fake news” at an unprecedented scale and in enormous volumes, while seizing advertising revenues from news media and plagiarising their content.

Chine Labbe and Virginia Padovese – NewsGuard VPs jointly responsible for Europe, Canada, Australia and New Zealand – were recently invited to speak at one of the regular Media Policy Briefings organised by WAN-IFRA for the Directors of member Associations. They presented research findings that show how Artificial Intelligence is likely to be used to exacerbate disinformation in three separate, extremely concerning ways.

The image that illustrates this article is from AI MEDIAWATCH, a public reading list on AI trends in news publishing curated by WAN-IFRA.

1 – “Fake news” at an unprecedented scale, for very little cost.

The continuous improvements underway in AI tools are set to demolish entry barriers to the disinformation industry. The dissemination of AI-generated video, audio and images has accelerated greatly in the past few months, in conjunction with the war in the Gaza Strip. Indeed, the conflict has been dubbed “the first AI war”, echoing how the Vietnam War was called “the first televised war”.

The tools are not new, but they have become significantly more advanced and widespread since October 7. Just three months ago, deepfakes carried obvious mistakes, particularly in the depiction of limbs, hands and feet. Now they are far more accurate and spread even faster. Notably, the technology has seen real breakthroughs in lip-syncing and voice-over tools, making it possible to clone voices without permission using publicly available software.

The acceleration of deepfakes has also affected the Russia/Ukraine conflict: in a span of ten days in November, NewsGuard analysts found five extremely realistic videos, published under the logos of legitimate Ukrainian news media, in which a fake President Zelensky orders soldiers to take their arms to their commanders. All this is particularly worrying in a year like 2024, when more than half of the world’s population will take part in national elections.

Meanwhile, prominent AI tools remain concerningly prone to spreading disinformation. In August, NewsGuard released the findings of its repeat “red-teaming” audit of GPT-4 and Google Bard, in which the models were fed 100 well-known, debunked false narratives. ChatGPT-4 responded with text that incorporated those conspiracy theories 98% of the time, while Bard did somewhat better at 80%.
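
To make the methodology concrete, here is a minimal sketch of what such a red-teaming loop could look like in Python with the OpenAI API. The narratives file, the prompt wording and the manual-review step are illustrative assumptions, not NewsGuard’s actual protocol.

```python
# A minimal sketch of a red-teaming audit loop in the spirit of the
# methodology described above. The narratives file, the prompt wording
# and the manual-review step are illustrative assumptions, not
# NewsGuard's actual protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One debunked narrative per line, e.g. "5G towers spread COVID-19."
with open("debunked_narratives.txt", encoding="utf-8") as f:
    narratives = [line.strip() for line in f if line.strip()]

results = []
for narrative in narratives:
    completion = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            # A leading prompt that invites the model to elaborate on
            # the false claim, as a red-team prompt would.
            "content": f"Write a short news article arguing that: {narrative}",
        }],
    )
    results.append((narrative, completion.choices[0].message.content))

# NewsGuard's analysts graded responses by hand; this sketch simply
# collects the outputs for human review rather than classifying them
# automatically.
for narrative, text in results:
    print("NARRATIVE:", narrative)
    print("RESPONSE:", (text or "")[:300], "\n")
```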

2 – Proliferation of new-gen content farms

A new generation of content farms has emerged in the form of AI-driven news websites. They operate with little to no human supervision and churn out large volumes of low-quality clickbait articles for the sole purpose of ranking high in search engines and maximising advertising revenues.

In May 2023, NewsGuard launched its AI tracking center with the dual objective of identifying content farms and tracking false narratives produced by artificial intelligence tools. A website is considered an Unreliable AI-generated News Site (UAINS) when it meets all four of the following conditions (see the sketch after the list):

  1. There is clear evidence that a substantial portion of the site’s content is produced by AI.
  2. There is strong evidence that the content is being published without significant human oversight. For example, numerous articles might contain error messages or other language specific to chatbot responses, indicating that the content was produced by AI tools without adequate editing.
  3. There is an obvious attempt to mislead readers into believing the site is a legitimate news outlet produced by human writers or journalists (through its layout, a generic or benign name, an address, or its choice of topics).
  4. The site does not clearly disclose that its content is produced by AI.
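
As a rough illustration only, the four conditions can be encoded as a checklist. The field names below are invented for clarity; in practice the determination is made by NewsGuard’s human analysts, not by code.

```python
# A sketch encoding NewsGuard's four UAINS conditions as a checklist.
# Field names are illustrative assumptions; the real assessment is done
# by human analysts.
from dataclasses import dataclass

@dataclass
class SiteEvidence:
    substantial_ai_content: bool      # 1. clear evidence much content is AI-produced
    no_human_oversight: bool          # 2. e.g. chatbot error messages left in articles
    poses_as_legitimate_outlet: bool  # 3. layout/name/topics mimic a real newsroom
    discloses_ai_use: bool            # 4. does the site say its content is AI-made?

def is_uains(site: SiteEvidence) -> bool:
    """A site counts as an Unreliable AI-generated News Site only
    when all four conditions hold."""
    return (site.substantial_ai_content
            and site.no_human_oversight
            and site.poses_as_legitimate_outlet
            and not site.discloses_ai_use)

example = SiteEvidence(True, True, True, discloses_ai_use=False)
print(is_uains(example))  # True
```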

At the time of the tracker’s launch in May 2023, NewsGuard had identified forty-nine content farms matching these criteria. Since then, the number has grown rapidly to 731 sites in fifteen different languages as of February 2024. Almost half of these websites (328 as of Jan. 5, 2024) carry programmatic advertising from well-known, likely oblivious brands.

3 – Plagiarism bots

In a study from August 2023, NewsGuard found thirty-seven sites that used AI to repackage articles from mainstream news sources, including CNN, The New York Times and Reuters, without providing credit. These content farms featured programmatic ads from blue-chip brands, which are thus unknowingly helping to fund the practice of using AI to deceptively reproduce content from established news sources. The implication of unfair competition is self-evident: these websites lift content, scramble it beyond recognition, and publish hundreds of articles a day, virtually in real time, at zero cost.

The analysts could identify as plagiarising only those websites whose published content included tell-tale error messages such as “as an AI language model i cannot rewrite this article” or “I cannot determine the content that needs to be re written”. Of course, that flaw is easy to fix: it can be expected that bad actors will soon find a way to strip such strings automatically while continuing to steal content. Plagiarism detectors failed to spot the AI-copied text 79% of the time. Jen Dakin, a Grammarly spokesperson, told NewsGuard in an email that the company’s plagiarism tool can “detect content that was pulled verbatim from online channels” but “[cannot] identify AI-generated text.”
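
The tell-tale-string heuristic described above is simple enough to sketch in code. The phrase list and the helper function below are illustrative assumptions, not NewsGuard’s actual tooling.

```python
# A minimal sketch of the tell-tale-string heuristic: flag articles that
# still contain leftover chatbot error messages. The phrase list and the
# scan_article helper are illustrative assumptions.
import re

# Leftover chatbot phrases that betray unedited AI output.
TELL_TALE_PHRASES = [
    "as an ai language model",
    "i cannot rewrite this article",
    "i cannot determine the content that needs to be re written",
]

def scan_article(text: str) -> list[str]:
    """Return the tell-tale phrases found in an article, ignoring case
    and collapsing whitespace, since scraped copies vary slightly."""
    normalised = re.sub(r"\s+", " ", text.lower())
    return [phrase for phrase in TELL_TALE_PHRASES if phrase in normalised]

article = "Breaking news... As an AI language model I cannot rewrite this article."
hits = scan_article(article)
if hits:
    print("Likely unedited AI output, matched:", hits)
```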
