Google Researchers Warn AI Could Prompt 'Distrust' Of Information In Ads, Content

Google researchers warn in a report that "mass production of low quality, spam-like and nefarious synthetic content" created by AI may promote distrust of all digital information. 

The paper presents a taxonomy of tactics used in generative AI (GAI) misuse. The researchers observed and analyzed approximately 200 incidents of misuse reported between January 2023 and March 2024.

The group identifies patterns in that data, including attackers' potential motivations and strategies, and how they leverage and abuse system capabilities across images, text, audio, and video.

"If unaddressed, this contamination of publicly accessible data with AI-generated content could potentially impede information retrieval and distort collective understanding of socio-political reality or scientific consensus," researchers wrote in the June paper, titled Generative AI Misuse: A Taxonomy of Tactics and Insights From Real-World Data.


Google DeepMind researchers acknowledge they are seeing cases of the liar's dividend, "where high profile individuals are able to explain away unfavorable evidence as AI-generated, shifting the burden of proof in costly and inefficient ways."

The second most common goal behind GAI misuse, found in 21% of the reported cases, was to monetize products and services. Driven by profit, actors used tactics including content scaling, amplification, and falsifying information.

Content farming was also shown to be prevalent in the report. This strategy primarily involved private users, and sometimes small corporations, creating low-quality AI-generated articles, books, and product ads for placement on websites such as Amazon and Etsy to cut costs and capitalize on advertising revenue.

Non-consensual intimate imagery also represented a significant portion of monetization-driven misuse. In nearly all of these misuse cases, image- and video-generation tools were used to create and sell sexually explicit videos of celebrities who did not consent to the production of that content, or to “nudify” them as a paid service.

And while advertising and content play a major role, researchers found the most common goal for exploiting GAI capabilities over the past year was to shape or influence public opinion, which comprised 27% of all reported cases.

In those instances, researchers saw actors deploy a range of tactics to distort the public’s perception of political realities.

These included impersonating public figures, using synthetic digital personas to simulate grassroots support for or against a cause, and creating falsified media.

The majority of cases in the researchers' dataset involved the generation of emotionally charged synthetic images around politically divisive topics, such as war, societal unrest, or economic decline.

The research highlights real-world examples of how GAI can be misused and the potential risks of the technology as it continues to evolve.

