Google will require political ads that use artificial intelligence to carry a warning disclosing that images or audio have been altered.
The rule will apply to political ads on YouTube and the company's other platforms. The warning must appear in a clearly visible spot on the screen; ads that do not comply will be removed.
Google isn't banning AI in political ads outright: artificial intelligence can still be used to edit images, correct video color and blemishes, adjust backgrounds, and so on. If the use of AI is not relevant to the ad's message, Google will not restrict it.
The new rule will take effect in November.
Next year brings elections in the United States, India, South Africa, the European Union and other regions where Google already runs a verification process for election advertisers.
Fake images, videos, and audio are nothing new in political advertising, but generative AI tools make such fakes easier to produce and more realistic.
Some presidential campaigns in the 2024 race, including that of Republican Florida Governor Ron DeSantis, are already using the technology.
In April, the Republican National Committee released an ad generated entirely by artificial intelligence that depicted a possible future for the United States if President Joe Biden were re-elected. It used fake but realistic photos showing shuttered storefronts, armored military patrols in the streets and waves of migrants sowing panic.
And in June, DeSantis' campaign shared an attack ad against Donald Trump that used AI-generated images of the former president embracing Anthony Fauci, who led the US response to the pandemic and is a frequent target of criticism from the country's right.
Government regulation
Last month, the Federal Election Commission began a process to regulate AI-generated deepfakes (photo-realistic fake images) in political ads ahead of the 2024 election. These deepfakes can mimic a politician's voice and likeness, producing realistic video of a candidate saying or doing things they never actually said or did.
Democratic Senator Amy Klobuchar said in a statement that Google’s announcement was a step in the right direction, but “we can’t just rely on voluntary commitments.”
Several states have also discussed or passed legislation related to deepfake technology.
Google's decision may prompt other companies to adopt similar policies. Facebook and Instagram, both owned by Meta, have no specific rule for AI-generated political ads, but they do restrict "faked, manipulated or transformed" audio and images used for misinformation. TikTok does not allow political ads at all. X, formerly known as Twitter, did not immediately respond to an emailed request for comment.