---

blog: Don Marti

---

Open practices and tracking protection

19 October 2017

(I work for Mozilla. None of this is secret. None of this is official Mozilla policy. Not speaking for Mozilla here.)

Browsers are going to have to change their tracking protection defaults, simply because the settings that help acquire and retain users are different from the current defaults, which leave users fully trackable all the time. (Tracking protection is also an opportunity for open web players to differentiate themselves from mobile tracking devices.)

Before switching defaults, there are plenty of opportunities for collaboration and data collection that can inform the right choices and increase user satisfaction and trust (and retention). Interestingly enough, these tend to give an advantage to any browser that can attract a diverse, opinionated, values-driven user base.

So, as a follow-up on applying proposed principles for content blocking, here are some ways that a browser can prepare to make a move on tracking protection.

  • Build APIs that WebExtensions developers can use to change privacy-related behaviors. (WebExtension API for improved tracking protection, API for managing tracking protection, Implement browser.privacy.trackingProtection API). Use developer relations to engage the privacy tools scene. (A minimal sketch of what this looks like from the extension side appears after this list.)

  • Do innovation challenges and crowdsourcing for tracking protection tools. Use the results to expand the available APIs and built-in options.

  • Develop a variety of tracking protection methods, and ship them in a turned-off state so that motivated users can find the settings and experiment with them, and to enable user research. Borrow approaches from other browsers (such as Apple Safari) where possible, and test them.

  • For example: avoid blocklist politics, and increase surveillance marketing uncertainty, by building Privacy-Badger-like tracker detection. This enables tracking protection without the policy implications of a top-down list, and it is an opportunity for a crowdsourcing challenge: design better algorithms to detect trackers, then block them or scramble their state. (A toy version of this kind of heuristic appears after this list.)

  • Ship alternate experimental builds of the browser, with privacy settings turned on and/or add-ons pre-installed.

  • Communicate a lot about capabilities, values, and research. Spend time discussing what the browser can do if needed, and discussing the results of research on how users prefer to share their personal info.

  • Only communicate a little about future defaults. When asked about specifics, just say, "we'll let the user data help us make that decision." (Do spam filters share their filtering rules with spammers? Do search engines give their algorithms to SEO consultants?)

  • Build functionality to "learn" from the user's activity and suggest specific settings that differ from the defaults (in either direction). For example, suggest more protective settings to users who have shown an interest in privacy, especially users who have installed an add-on that its maintainers misrepresent as a privacy tool: installing it signals an interest in privacy even if the add-on itself does not deliver any.

  • Do research to help legit publishers and marketers learn more about adfraud and how it is enabled by the same kinds of cross-site tracking that users dislike. As marketers better understand the risk levels of different approaches to web advertising, make it a better choice to rely less on highly intrusive tracking and more on reputation-driven placements.

  • Provide documentation and tutorials to help web developers develop and test sites that will work in the presence of a variety of privacy settings. "Does it pass Privacy Badger" is a good start, but more QA tools are needed.
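
To make the WebExtensions API item above concrete, here is a minimal sketch of what the extension side can already look like. It assumes a Firefox-style WebExtension background script with the "privacy" permission and uses the existing browser.privacy.websites.trackingProtectionMode setting; the function name and the loose typing of the browser global are illustrative only, not a definitive implementation.

```typescript
// background.ts -- sketch only: assumes a WebExtension with the "privacy"
// permission, running in a browser that exposes
// browser.privacy.websites.trackingProtectionMode as a BrowserSetting
// ("always" | "never" | "private_browsing").

declare const browser: any; // provided by the browser at runtime; typed loosely here

async function enableTrackingProtectionEverywhere(): Promise<void> {
  const setting = browser.privacy.websites.trackingProtectionMode;

  // levelOfControl tells us whether this extension is allowed to change the
  // setting (the user or another extension may control it instead).
  const current = await setting.get({});
  if (
    current.levelOfControl !== "controllable_by_this_extension" &&
    current.levelOfControl !== "controlled_by_this_extension"
  ) {
    console.warn("Tracking protection is not controllable by this extension.");
    return;
  }

  const changed = await setting.set({ value: "always" });
  console.log(changed ? "Tracking protection set to 'always'." : "Setting was not changed.");
}

void enableTrackingProtectionEverywhere();
```

Checking levelOfControl before calling set keeps an extension like this from fighting with other privacy extensions over the same setting, which matters if the goal is a healthy ecosystem of third-party privacy tools rather than one blessed add-on.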

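And to make the Privacy-Badger-style item concrete, here is a toy sketch of that kind of heuristic: a third-party domain that is observed carrying identifying state across enough distinct first-party sites gets treated as a tracker, with no top-down list involved. The class and method names are hypothetical, the threshold of three sites mirrors Privacy Badger's published heuristic, and wiring it up to real request data (webRequest observers, cookie access, and so on) is left out.

```typescript
// trackerHeuristic.ts -- toy sketch of list-free tracker detection.

type Action = "allow" | "block";

class TrackerHeuristic {
  // third-party domain -> set of first-party sites where it was seen
  // carrying identifying state (cookies, storage access, fingerprinting-like
  // behavior, etc.).
  private sightings = new Map<string, Set<string>>();

  constructor(private threshold: number = 3) {}

  // Record one observation of a third party tracking on a first-party site.
  recordSighting(thirdParty: string, firstPartySite: string): void {
    const sites = this.sightings.get(thirdParty) ?? new Set<string>();
    sites.add(firstPartySite);
    this.sightings.set(thirdParty, sites);
  }

  // A domain becomes blockable once it has been seen tracking across at
  // least `threshold` distinct first-party sites.
  decide(thirdParty: string): Action {
    const sites = this.sightings.get(thirdParty);
    return sites !== undefined && sites.size >= this.threshold ? "block" : "allow";
  }
}

// Example: tracker.example carries cookies on three different sites and
// crosses the threshold; cdn.example has never been seen tracking.
const heuristic = new TrackerHeuristic();
heuristic.recordSighting("tracker.example", "news.example");
heuristic.recordSighting("tracker.example", "shop.example");
heuristic.recordSighting("tracker.example", "blog.example");
console.log(heuristic.decide("tracker.example")); // "block"
console.log(heuristic.decide("cdn.example"));     // "allow"
```

The interesting crowdsourcing problem is everything this sketch leaves out: deciding what counts as identifying state, and choosing between blocking a detected tracker outright and scrambling its state.
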
If you do it right, you can force up the risks of future surveillance marketing just by increasing the uncertainty of future user trackability, and drive more marketing investment away from creepy projects and toward pro-web, reputation-driven projects.