If you've noticed an increase in the number of suspicious links in Google search results, know that you're not alone. The company even admits it could do more to stop tactics widely used to manipulate its algorithms. It has just announced several changes to reduce the visibility of low-quality pages or remove them from results altogether.
This set of fixes is called the March 2024 Core Update and builds on algorithmic tweaks Google began rolling out in 2022 to keep suspicious sites from competing with useful pages when people use its search engine. In total, the company says these adjustments should reduce the volume of “low-quality, unoriginal content” in results by 40%.
Google has already penalized sites that use AI to produce large amounts of poor-quality content that is nonetheless highly optimized to rank well in results.
And with the advent of large language models such as OpenAI's GPT-4 and Google's own Gemini, filling a website with AI-generated material has never been easier.
But rather than targeting pages that use AI specifically, the company now says it will focus on reining in low-quality content that ranks highly, regardless of the techniques used to produce it.
“Generative AI is a really valuable tool for creators, and there's nothing wrong with using it to create the content you present to your users,” notes Pandu Nayak, Google's vice president of Search, who oversees quality and ranking.
“The problem arises when you start doing it at scale, not to serve users, but to boost your position in search results.” Whether or not the new policies call out automation explicitly, it's hard to imagine someone mass-producing content with no regard for quality without relying on AI to do most of the work.
In some cases, suspicious external pages appear on reputable sites to take advantage of their position in search results. An example, according to Google, is “payday loan reviews on trusted educational sites.”
The company will begin treating these pages as spam, with two months' notice to give the sites in question a chance to correct their behavior. It also says it will take action against website owners who acquire old, reputable domains and relaunch them as dumping grounds for low-quality content.
But even with stricter policies, the algorithms that enforce them won't be effective 100% of the time. So the changes also draw on the judgment of thousands of human evaluators trained to assess search results the way users would.
The company shows them two sets of results side by side, one without the changes and one with the proposed adjustments, and asks them to decide which is higher quality. To keep their judgments reasonably consistent, raters consult Google's search quality guidelines, which serve as a reference for what the search engine should deliver.
With the new measures in place, will users see an improvement in their Google experience? Nayak thinks so. But he acknowledges that people are less likely to notice good results than they are to be annoyed by bad ones.
“No one says, ‘I did my search and it went well,’” Nayak says. “Because, of course, that's what it's supposed to do. But they always notice when the results aren't good.”
The less they notice, the stronger the evidence that Google's anti-spam initiative is working.
About the author
Harry McCracken is Fast Company's technology editor, based in San Francisco. In a past life, he was editor of Time magazine, founder and editor…