Google announced on Wednesday that it will require political advertisements to disclose when they feature synthetic content, such as images, videos, or audio generated by artificial intelligence (AI).

The new rule, which will take effect in November, is an addition to Google’s political content policy that covers Google and YouTube. It aims to prevent the use of AI-generated content that could mislead or manipulate voters in the upcoming 2024 US presidential election and other major elections around the world.


According to a blog post by Google, political ads that feature synthetic content that “inauthentically represents real or realistic-looking people or events” must include a “clear and conspicuous” disclosure placed where viewers are likely to notice it. The disclosure must be visible and audible and must state that the content is synthetic or altered.

Google said the policy will not apply to synthetic or altered content that is “inconsequential to the claims made in the ad”, such as image resizing, color corrections, or background edits that do not create realistic depictions of actual events.

The policy update comes as AI technology has advanced rapidly, allowing anyone to cheaply and easily create convincing AI-generated text and, increasingly, audio and video. Digital information integrity experts have warned that these new AI tools could lead to a wave of election misinformation that social media platforms and regulators may be ill-prepared to handle.

AI-generated images have already begun to crop up in political advertisements. In June, a video posted by Florida Gov. Ron DeSantis’ presidential campaign used images that appeared to be generated by AI, showing former President Donald Trump hugging Dr. Anthony Fauci. The images, which appeared designed to criticize Trump for not firing the nation’s then-top infectious disease specialist, were difficult to identify as fabricated. They were shown alongside real images of the pair and with a text overlay saying, “real-life Trump”.

In April, the Republican National Committee released a 30-second advertisement responding to President Joe Biden’s official campaign announcement that used AI images to imagine a dystopian United States after the reelection of the 46th president. The RNC ad included the small on-screen disclaimer, “Built entirely with AI imagery”, but some potential voters in Washington, DC, to whom CNN showed the video, did not notice it on their first watch.

In its policy update, Google said it will require disclosures on ads using synthetic content in a way that could mislead users. The company said, for example, that an “ad with synthetic content that makes it appear as if a person is saying or doing something they didn’t say or do” would need a label.

Google said it hopes that its policy will help users make informed decisions about the political content they see online. The company also said it is committed to working with other industry partners and regulators to improve the safety and transparency of AI technologies.

“With this update, we’re building on our existing policies and practices to ensure users have access to helpful information about the political ads they see on our platforms,” said Scott Spencer, Vice President of Product Management at Google Ads.
