Google explores a new way for Chrome to label AI-generated content on web pages
Google is exploring a Chrome feature that could label which parts of web pages are human-written or AI-generated, aiming to improve transparency online.
Google is considering a new approach that could make it clearer which parts of a web page are written by humans and which are generated by artificial intelligence. The idea is to reduce confusion for readers as AI-generated content becomes increasingly common on the internet.
The proposal has appeared on a Chrome Platform Status page, signalling that Google is at least experimenting with the concept within its Chrome browser ecosystem. While the feature is still at an early stage, it reflects growing concern about transparency and trust in online information.
As generative AI tools become more advanced, it is often difficult for users to tell whether content has been written by a human expert, supported by AI tools, or generated entirely by machines. Google’s latest exploration suggests the company is looking for ways to address that challenge without relying solely on automated detection.
A proposed attribute for AI content disclosure
At the centre of the proposal is what Google calls an “AI content disclosure attribute”. This would be a new HTML feature that allows website authors to declare how AI was used to create specific parts of a page. Rather than labelling an entire page, the approach focuses on individual elements within it.
According to documentation linked from Chrome Platform Status, the idea is to support what Google describes as elemental AI disclosure. This would allow different sections of a page to be marked as human-written, AI-assisted, AI-generated, or autonomously generated. In theory, this could give readers a much more precise understanding of how content was produced.
The proposal is explained in more detail in a public GitHub post that introduces an “ai-disclosure” HTML attribute and a related tag. These markers would be embedded directly into the page’s code, signalling the level of AI involvement for each labelled section. Browsers such as Chrome could then interpret this information and present it to users visually.
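As a rough sketch of what such markup might look like: the attribute name comes from the GitHub explainer, but the value names below are illustrative only, modelled on the four categories described above rather than taken from any confirmed syntax.

```html
<!-- Hypothetical example: attribute values are illustrative, based on the
     four disclosure categories described in the proposal, not confirmed
     syntax from the explainer. -->
<article>
  <p ai-disclosure="human">An introduction written entirely by the author.</p>
  <p ai-disclosure="ai-assisted">A summary drafted with help from an AI tool.</p>
  <p ai-disclosure="ai-generated">A section produced by a model from a prompt.</p>
</article>
```

Because the markers live on individual elements, a single page could mix all three levels, which is what the "elemental" framing is meant to enable.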
How Chrome could show AI involvement to users
If the feature were implemented, browsers would be able to recognise when content has been marked as AI-generated or AI-assisted and flag it for users. While the exact visual treatment has not been finalised, it could involve indicators or highlights that show the origin of different parts of a page.
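To illustrate the mechanics, a script or browser extension could collect the declared disclosure levels from a page's markup and decide what to surface. The sketch below scans an HTML string with a regular expression purely for portability; the attribute name and values are assumptions carried over from the proposal's description, and a real implementation would read the live DOM instead.

```javascript
// Hypothetical sketch: collect declared ai-disclosure levels from markup.
// The attribute name and values are assumptions based on the proposal;
// a browser would inspect the DOM rather than raw HTML text.
const html = `
  <article>
    <p ai-disclosure="human">Written by our staff.</p>
    <p ai-disclosure="ai-generated">Drafted entirely by a model.</p>
  </article>`;

function listDisclosures(markup) {
  const re = /ai-disclosure="([^"]+)"/g;
  const found = [];
  let match;
  while ((match = re.exec(markup)) !== null) {
    found.push(match[1]); // captured attribute value, e.g. "human"
  }
  return found;
}

console.log(listDisclosures(html)); // logs the declared levels in page order
```

A browser applying visual indicators would presumably do something similar per element, mapping each declared level to a badge or highlight.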
This approach is designed to address a growing problem on the web. With the rapid rise of generative AI tools, content that looks polished and authoritative can be produced at scale, making it increasingly hard for readers to judge its source. Even experienced users may struggle to distinguish between human-written material and AI-generated text.
Importantly, the proposal does not attempt to solve the problem through automated AI detection. Instead, it relies on authors to be open about their use of AI. This means the system would depend heavily on publishers’ honesty and good practice. Google has acknowledged this limitation, but sees value in creating a standard mechanism for disclosure rather than enforcing detection.
Prospects for adoption across the web
At present, the AI content disclosure attribute has only been filed on the Chrome Platform Status site, indicating interest rather than commitment. Features listed there can take months or even years to progress, and many never make it into a stable release. Google has not confirmed a timeline or whether the idea will be developed into a full Chrome feature or web standard.
If Chrome were to adopt the feature, it could have wider implications for the web. As the most widely used browser, Chrome often sets patterns that other browsers follow. Should the approach prove helpful, it could later be taken up by browsers such as Firefox or Safari, where concerns about AI-generated content are equally relevant.
The proposal highlights a broader debate about transparency online. While a disclosure attribute would not prevent misuse or deception on its own, it could provide a framework for clearer communication between publishers and readers. Whether the industry embraces such a system may depend on how easy it is to implement and how much value users place on knowing how content was created.