Google posted its 2021 Ads Safety Report. The information was shared on Google’s Ads and Commerce Blog by Scott Spencer, VP of Product Management for Ads Privacy and Safety. The annual report describes Google’s efforts to prevent malicious use of its ads platforms.
The blog post states that in 2021, Google introduced a multi-strike system for repeat policy violations. It also added or updated over 30 policies for advertisers and publishers, including a policy prohibiting claims that promote climate change denial and a certification for U.S.-based health insurance providers that allows ads only from government exchanges, first-party providers and licensed third-party brokers.
Google says it removed over 3.4 billion ads, restricted over 5.7 billion ads and suspended over 5.6 million advertiser accounts. It also blocked or restricted ads from serving on 1.7 billion publisher pages, and took broader site-level enforcement action on approximately 63,000 publisher sites.
In addition, Google says it “doubled down” on its enforcement against unreliable content, blocking ads from running on more than 500,000 pages that violated its policies against harmful health claims related to COVID-19 and demonstrably false claims that could undermine trust and participation in elections.
Google also added a feature to its advertiser controls that allows brands to upload dynamic exclusion lists, which can be automatically updated and maintained by trusted third parties. It also made targeted improvements to the publisher approval process that helped it better detect and block bad actors before they could even create accounts.
CNET reported that of the 3 billion-plus ads that were removed, over 650 million were pulled for abusing the ad network, while 280 million violated rules on adult content. Other reasons for removal were related to trademarks, gambling, alcohol, health care and misrepresentation. Google also prevented inappropriate ads from showing up on nearly 2 billion publisher pages, and over 600,000 individual publisher sites received enforcement action.
Google has also made efforts to remove personal information from Google Search. Google will remove the following: non-consensual explicit or intimate personal images, involuntary fake pornography, “about me” content on sites with exploitative removal practices, select personally identifiable information (PII) or doxxing content, images of minors, and irrelevant pornography from search results for your name.
Google will also remove content for legal reasons, such as DMCA copyright violation reports and child sexual abuse imagery.
Overall, Google’s efforts sound like a good thing. I want to believe that Google is returning to its “Don’t Be Evil” motto. Nobody wants the types of unfortunate content listed above on the internet for everyone to see, and Google should have started removing it long ago. Seems they finally got there! I also like that Google has been weeding out the bad ads that are full of misinformation. Most people don’t enjoy viewing ads. The least Google could do is get rid of the worst ones before they go live.