Reddit has become a valuable source of real-life information, and an important one for Google, which recently entered a $60 million deal with the platform. The partnership, announced in February, allows Google to use Reddit data for AI model training and to feature Reddit results prominently in Google Search. As a result, Reddit links have frequently outranked the original websites they discuss.
Reddit’s New Policy Blocks Content from Non-Google Search Engines
However, Reddit has started blocking its content from appearing properly in other search engines. This change was first reported by 404 Media, which noted that Reddit's robots.txt file now prevents all bots from scraping its site.
Reddit's adjustments to its robots.txt file were driven by a rise in scraping activity by commercial entities. Although Reddit hasn't directly linked these changes to AI training, the connection is implied. As a result, search engines other than Google struggle to show accurate Reddit results.
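To illustrate the mechanism, here is a minimal sketch of how a blanket robots.txt disallow is interpreted by a well-behaved crawler, using Python's standard urllib.robotparser. The two-line rule set below is an illustrative "block everything" policy in the spirit of what 404 Media described, not a verbatim copy of Reddit's file.

```python
# Sketch: how a compliant crawler reads a blanket robots.txt disallow.
# The rules here are illustrative, not Reddit's actual robots.txt.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant crawler calls can_fetch() before requesting a page.
for bot in ("Googlebot", "Bingbot", "DuckDuckBot"):
    allowed = parser.can_fetch(bot, "https://www.reddit.com/r/technology/")
    print(f"{bot}: {'allowed' if allowed else 'blocked'}")
```

Note that a blanket disallow applies to every user agent, including Googlebot; Google's continued access to Reddit data comes through the licensing agreement rather than a robots.txt exception.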
Reddit’s Policy Shift: Blocking Other Search Engines While Partnering with Google
A Reddit official clarified to 9to5Google that the blocking of other search engines is unrelated to the Google partnership. Instead, it results from Reddit's new policy targeting all crawlers that will not agree to refrain from using Reddit data for AI training. Reddit is in talks with various search engines to reach agreements on data usage, but no comprehensive deals have been made yet.
Search engines like Bing, DuckDuckGo, Mojeek, and Qwant are showing incomplete or outdated Reddit results. Meanwhile, Kagi, a paid search engine, still displays Reddit data because it purchases its search index from Google, effectively leveraging Google's access under the partnership.
In summary, Reddit's policy shift aims to control how its data is used, particularly for AI training, significantly affecting how its content appears in every search engine except Google.