Google plans to introduce a new rule for advertising during election periods: any political ad run on the company's platforms must disclose when it uses AI-generated images or audio, the BBC reports.
The new rules are set to take effect in November, one year before the US presidential election. A Google spokesperson told the BBC that the decision responds to "the prevalent growth of tools that generate artificial content".
Under the policy, any ad containing synthetic depictions of a real person or event must carry a prominent disclosure that synthetic content was used. Google suggests wording such as "This image does not represent real events" or "This video content was artificially generated".
Google's advertising policies already restrict misinformation and fake news, but until now they had no provisions specific to generative AI. The new rule covers both AI-generated images – as with the viral fake images of Donald Trump's arrest – and the use of deepfakes in videos.
Google is also developing a tool to identify AI-generated images
Alongside the new rules, Google is working to make AI-generated content easier to identify. Its subsidiary DeepMind has announced SynthID, a technology that embeds an invisible watermark in images to signal that they were generated by artificial intelligence.
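DeepMind has not published how SynthID actually embeds its watermark; it describes only an imperceptible change to pixel values that survives common edits. Purely to illustrate the general idea of an invisible watermark, the Python sketch below hides a hypothetical "AI-GENERATED" marker in the least significant bits of an image's pixel values. This is a toy scheme, not SynthID's method.

```python
# Toy illustration of an invisible image watermark. This is NOT SynthID:
# Google has not published SynthID's implementation, which reportedly uses
# a learned, edit-robust embedding rather than the fragile
# least-significant-bit (LSB) scheme sketched here.
import numpy as np

TAG = np.frombuffer(b"AI-GENERATED", dtype=np.uint8)  # hypothetical marker
BITS = np.unpackbits(TAG)  # 12 bytes -> 96 individual bits

def embed_watermark(pixels: np.ndarray) -> np.ndarray:
    """Hide the marker in the least significant bits of the first pixels."""
    flat = pixels.ravel().copy()
    flat[: BITS.size] = (flat[: BITS.size] & 0xFE) | BITS  # overwrite LSBs
    return flat.reshape(pixels.shape)

def detect_watermark(pixels: np.ndarray) -> bool:
    """Read the LSBs back and compare against the known marker."""
    flat = pixels.ravel()
    return bool(np.array_equal(flat[: BITS.size] & 1, BITS))

if __name__ == "__main__":
    image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
    marked = embed_watermark(image)
    print(detect_watermark(image))   # False: no watermark present
    print(detect_watermark(marked))  # True: marker recovered, image looks unchanged
```

Note that this toy marker is destroyed by any re-encoding, cropping, or resizing, which is precisely the weakness DeepMind says SynthID is designed to withstand.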
Source: BBC