Cloudflare has introduced a new policy framework aimed at reshaping how artificial intelligence (AI) systems access online content. The “Content Signals Policy” grants website owners clearer, enforceable control over how their data is used—particularly by AI models that generate responses without attribution.
The move directly targets Google, whose AI-powered search features, including AI Overviews, rely on data collected by a single crawler. Unlike rivals such as OpenAI, which operate separate crawlers for search and for AI, Google uses one system to feed both. This dual-purpose setup, Cloudflare argues, gives the tech giant an unfair advantage.
Announced by CEO Matthew Prince, the policy builds a more nuanced layer onto robots.txt, the decades-old standard that governs bot access. Where those preferences were previously treated as voluntary requests, Cloudflare now frames them as contractual signals with legal implications. With Cloudflare handling traffic for roughly one-fifth of the web, the new framework will apply automatically to millions of domains, effectively forcing Google to choose: honour site owners' preferences or risk losing access to swathes of web content.
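For illustration, a publisher who wants to stay indexed for search but keep its pages out of AI answers and AI training could publish signals along these lines in robots.txt. The directive name and values below follow Cloudflare's published examples of the policy, but the exact syntax should be checked against the current specification:

```
# Content Signals Policy: the signals below express the site operator's
# preferences for how content may be used after it is accessed.
Content-Signal: search=yes, ai-input=no, ai-train=no

User-Agent: *
Allow: /
```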
Granular control: The framework allows publishers to specify whether their content can be used for search, AI inputs, or AI training. Crucially, it empowers them to block AI scrapers without hindering traditional search indexing—preserving referral traffic vital to online revenue.
Industry implications: Though Google claims its AI features continue to drive high-quality traffic, Cloudflare’s challenge may prompt regulatory scrutiny or further industry-wide changes. By contrast, Cloudflare has commended OpenAI for separating its bots and respecting content boundaries.
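To show how such a signal might be consumed on the crawler side, here is a minimal Python sketch that reads a Content-Signal line out of a robots.txt file. The sample text, the parse_content_signals helper, and the decision logic are hypothetical and only illustrate the general idea, not any vendor's actual implementation:

```python
# Hypothetical sketch: a crawler checking a publisher's content signals
# before deciding whether fetched pages may be used for AI training.

SAMPLE_ROBOTS_TXT = """\
Content-Signal: search=yes, ai-input=no, ai-train=no

User-Agent: *
Allow: /
"""

def parse_content_signals(robots_txt: str) -> dict[str, bool]:
    """Return a mapping such as {'search': True, 'ai-train': False}."""
    signals: dict[str, bool] = {}
    for line in robots_txt.splitlines():
        name, sep, value = line.partition(":")
        if sep and name.strip().lower() == "content-signal":
            for pair in value.split(","):
                key, _, flag = pair.partition("=")
                signals[key.strip().lower()] = flag.strip().lower() == "yes"
    return signals

signals = parse_content_signals(SAMPLE_ROBOTS_TXT)
if signals.get("ai-train") is False:  # explicit "no" signal is present
    print("Publisher signals: do not use this content for AI training.")
```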