---

blog: Don Marti

---

People's personal data: take it or ask for it?

09 March 2018

We know that advertising on the web has reached a low point of fraud, security risks, and lack of brand safety. And it's not making much money for publishers anyway. So a lot of people are talking about how to fix it by building a new user data sharing system, one in which individuals control which data they choose to reveal to which companies.

Unlike in today's surveillance marketing, people wouldn't be targeted for advertising based on data that someone else figures out about them and that they might not have chosen to share.

A big win here would be that the new system tends to lower the ROI on creepy marketing investments with harmful side effects, such as identity theft and the facilitation of state-sponsored misinformation, and to increase the ROI on funding ad-supported sites that people trust and choose to share personal information with.

A user-permissioned data sharing system is an excellent goal with the potential to help clean up a lot of the Internet's problems. But I have to be realistic about it. Adam Smith once wrote,

The pride of man makes him love to domineer, and nothing mortifies him so much as to be obliged to condescend to persuade his inferiors.

So the big question is still:

Why would buyers of user data choose to deal with users (or with publishers who hold data with the user's permission) when they can just take that data, using existing surveillance marketing firms?

Some possible answers:

  • GDPR? Unfortunately, regulatory capture is still a thing even in Europe. Sometimes I wish that American privacy nerds would quit pretending that Europe is ruled by Galadriel or something.

  • brand safety problems? Maybe a little around the edges when a particularly bad video goes viral. But platforms and adtech can easily hide brand-unsafe "dark" material from marketers, who can even spend time on YouTube and Facebook without ever developing a clue about how brand-unsafe those platforms are for regular people. Even as news-gatherers get better at finding the worst stuff, platforms will always make hiding brand-unsafe content a high priority.

  • fraud concerns? Now we're getting somewhere. Fraud hackers are good at making realistic user data. Even "people-based" platforms mysteriously have more users in desirable geography/demography combinations than are actually there according to the census data. So, where can user-permissioned data be a fraud solution?

  • signaling? The brand equity math must be out there somewhere, but it's nowhere near as widely known as the direct-response math that backs up the creepy stuff. Maybe some researcher at one of the big brand advertisers developed the math internally in the 1980s, and it got shredded when that person retired. A big possible future win for the right behavioral economist at the right agency, but not in the short term.

  • improvements in client-side privacy? Another good one. Email spam filtering went from obscure nerdery to a mainstream checklist feature quickly, because email services competed on it. Right now the web browser is a generic product, and browser makers need to differentiate. One promising angle is for the browser to help users feel safe by reducing user-perceived creepiness; the browser's need to compete on this is aligned with the interests of trustworthy sites and with user-permissioned data sharing.

(And what's all this "we" stuff, anyway? Post-creepy advertising is an opportunity for individual publishers and brands to get out ahead, not a collective action problem.)