Mozilla opposes this proposal because it contradicts our principles and vision for the Web.
Any browser, server, or publisher that implements common standards is automatically part of the Web:
Standards themselves aim to avoid assumptions about the underlying hardware or software that might restrict where they can be deployed. This means that no single party decides which form-factors, devices, operating systems, and browsers may access the Web. It gives people more choices, and thus more avenues to overcome personal obstacles to access. Choices in assistive technology, localization, form-factor, and price, combined with thoughtful design of the standards themselves, all permit a wildly diverse group of people to reach the same Web.
Mechanisms that attempt to restrict these choices are harmful to the openness of the Web ecosystem and are not good for users.
Additionally, the use cases listed depend on the ability to “detect non-human traffic,” which, as described, would likely obstruct many existing uses of the Web, such as assistive technologies, automatic testing, and archiving & search engine spiders. These depend on tools being able to receive content intended for humans, and then transform, test, index, and summarize that content for humans. The safeguards in the proposal (e.g., “holdback,” i.e., randomly declining to produce an attestation) are unlikely to be effective, and are inadequate to address these concerns.
Detecting fraud and invalid traffic is a challenging problem that we’re interested in helping address. However, this proposal does not explain how it will make practical progress on the listed use cases, and there are clear downsides to adopting it.
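Mozilla’s point about “holdback” can be illustrated with a toy simulation (my own sketch, not anything from the proposal; the `simulate` function and the 5% rate are made-up illustration values): if attesting browsers randomly decline some fraction of requests, a site still only needs a handful of visits to distinguish them from a tool, such as a screen-reader driver, test harness, or crawler, that can never attest, because the tool’s attestation rate is exactly zero.

```python
import random

def simulate(visits: int, holdback_rate: float, can_attest: bool) -> float:
    """Fraction of visits on which an attestation is produced.

    can_attest: whether the client is a "blessed" browser able to attest.
    holdback_rate: probability an attesting client randomly declines
    (the proposal's "holdback" safeguard).
    """
    produced = 0
    for _ in range(visits):
        if can_attest and random.random() > holdback_rate:
            produced += 1
    return produced / visits

random.seed(0)
# A blessed browser with 5% holdback still attests roughly 95% of the time...
blessed = simulate(1000, 0.05, can_attest=True)
# ...while a non-attesting tool produces a rate of exactly 0.0, so the two
# populations remain trivially distinguishable despite the holdback.
tool = simulate(1000, 0.05, can_attest=False)
```

Under these (assumed) numbers, a site that simply tracks attestation rates per client can still discriminate against every tool Mozilla lists, which is why holdback does not rescue the affected use cases.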
You know, Mozilla’s statement is actually pretty prescient. I haven’t seen much discussion of this that didn’t center on AdBlock or DRM or whatnot. But yeah, web development as a software discipline would be harmed by WEI too.
which as described would likely obstruct many existing uses of the Web such as assistive technologies, automatic testing, and archiving & search engine spiders. These depend on tools being able to receive content intended for humans, and then transform, test, index, and summarize that content for humans.
Like imagine if Google locked Inspect Element behind the site you’re visiting requiring the Human signature… Or the opposite!
God I wish Google would reinstate their ‘Don’t Be Evil’ slogan.
It’s still part of their code of conduct, and Alphabet (which owns Google) has “Do the right thing” as their motto. Google did evil shit even when “don’t be evil” was their motto, and Alphabet continues to do evil shit today despite their company motto.
Turns out corporate mottos are absolutely meaningless when there’s profit to be had.
I’ve said it before: Google is the biggest bait-and-switch in internet history