---

blog: Don Marti

---

My talk, with links, from WebInnovationX

29 November 2020

(This is a cleaned up and edited transcript of my talk from WebInnovationX, with some links put in.)

The IT business has long been in a kind of cycle of centralization and decentralization. After the 8-bit microcomputer days, the IBM PC led into the Microsoft Windows era, then the explosion of companies in the dot-com boom, and the cycle keeps turning. Today, the death of the third-party cookie (I promised a link to a cookie recipe, so here's one that works reliably for me: Very Peanut Butter Cookies) is often seen as a problem of further centralization. If third-party cookies go away and we don't have the ability for multiple players on the web to see data from each other, then that leaves a few big companies running everything. The Lumascape would look better with just a few logos on it, and some consolidation is inevitable. But there's a big difference between some overdue consolidation in the web advertising business and this scary move toward extreme centralization.

Today there are a few extremely large companies that are capable of participating in the web standards process in a large, ostentatious fashion. They have the ability to produce complete implementations of complex proposals, demonstrating their power to drive consolidation of the web business. Big companies have open-plan offices full of developers for the same reason that King Henry VIII ordered rows of ornamental yew trees planted at Hampton Court Palace: to show that they have enough wealth not to need all that land and labor for food, and to remind visitors of the power of the English longbow. Some consolidation is probably a good thing, but for large companies the process of eliminating the third-party cookie is more about commodifying complementary goods, a strategy that Joel Spolsky pointed out back in 2002. When Linux first caught on as a web server platform, the new OS was used to commodify the servers. New companies didn't need big Digital or Sun servers the way the first generation of big web properties did; they could just use stacks of generic PCs.

This innovation led to the giant companies of today, systematically commodifying everything that they touch. A strategy that worked so well for commodifying the hardware business is now being applied to everything, including content and labor. A lot of attention is paid to big data as a buzzword, or artificial intelligence as a buzzword, but all of these terms encapsulate a common phenomenon: pushing all the value in the system to a centralized reputation graph, a data structure that allows one company to evaluate which other participants in the system are better or worse for specific purposes. This is a common pattern across the gig economy, across large social sites, and of course across the content industry.
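To make "reputation graph" a little more concrete, here is a minimal sketch in Python. The class, the names, and the scoring rule are hypothetical, not any real company's system; the point is only that one operator holds every rating edge and is the only party able to turn them into scores.

```python
from collections import defaultdict

class ReputationGraph:
    """Toy model of a centralized reputation graph: one operator holds
    every rating edge and is the only party that can compute scores."""

    def __init__(self):
        # edges[rated] is a list of (rater, rating) pairs,
        # visible only to the operator.
        self.edges = defaultdict(list)

    def record_rating(self, rater, rated, rating):
        """Store a rating edge (e.g. advertiser -> publisher, rider -> driver)."""
        self.edges[rated].append((rater, rating))

    def score(self, participant):
        """Aggregate ratings into a single number the operator controls."""
        ratings = [r for _, r in self.edges[participant]]
        return sum(ratings) / len(ratings) if ratings else None

# The operator sees everything; participants see, at most, their own score.
graph = ReputationGraph()
graph.record_rating("advertiser_a", "publisher_x", 4)
graph.record_rating("advertiser_b", "publisher_x", 5)
print(graph.score("publisher_x"))  # 4.5
```

Everything of value in that structure sits with whoever runs it, which is the centralization problem in miniature.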

Here is a question that we got before this talk. If personal data becomes available mostly through a few giant companies, and those companies are the only ones with access to these large reputation graphs, then won't all advertising budgets just move to those large companies?

The answer is that yes, content is participating in a race to the bottom, but brands are also subject to commodification. If commodification continues, then brand marketing budget decisions won't matter anyway. Here's a simple example.

Facebook ad info for BM 00704

This is a large company's user dashboard for checking on who is using my personal information. "BM 00704" is directly competing with established brands to sell me branded goods and services, and of course I never gave them my personal info. The big company, in this case Facebook, cooperated in whatever tricks got pulled to get my information from somewhere. In most direct marketing media, like direct mail, a vendor can "seed" the mailing list with records that will get back to them to let them know if the list gets copied without permission. Facebook Custom Audiences give the list owner no way to detect seed records, which makes Facebook an easy way to use a stolen customer list without detection. Brands can't expect any protection from the commodification effect that publishers are seeing. There's no reason to expect that servers will be commodified first, then publishers, and that somehow the process is just going to stop before it gets to, say, Oreo cookies. If you want to tell the commodification dystopia story, don't stop at publishers. Tell the whole story, including the brand part.
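For readers who haven't run into list seeding, here is a rough sketch of the idea in Python. All the addresses are made up; the mechanism is just decoy records that only the list owner knows about, so any mail that shows up at a decoy address proves the list was copied. A Custom Audience upload gives the list owner no equivalent tripwire.

```python
# Toy illustration of mailing-list seeding (all addresses are fictional).
# The list owner plants decoy records; mail arriving at a decoy address
# is evidence that the list was copied or sold without permission.

SEED_RECORDS = {
    "seed.1a7f@example.com",   # decoy addresses known only to the list owner
    "seed.9c02@example.com",
}

def build_rental_list(real_subscribers):
    """Mix the seed records into the list before handing it to a third party."""
    return list(real_subscribers) + sorted(SEED_RECORDS)

def mail_is_from_a_leak(recipient_address):
    """Any message addressed to a seed record came from a copy of the list."""
    return recipient_address in SEED_RECORDS

rented = build_rental_list(["alice@example.com", "bob@example.com"])
print(mail_is_from_a_leak("seed.1a7f@example.com"))  # True: the list leaked
print(mail_is_from_a_leak("alice@example.com"))      # False: proves nothing
```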

On the internet we love our dystopia stories. We've had the cryptography key escrow dystopia story, the digital rights management dystopia story, and today of course we're having the surveillance marketing consolidation dystopia story. The crypto issue and the DRM issue are both still problems on the Internet, but they haven't led to the end of civilization as we know it, because there has been a set of factors pushing back. The same goes for surveillance marketing. Here, a lot of the anti-dystopia narrative comes from privacy law and tools. As you've probably heard, in the California election that just happened, the California Privacy Rights and Enforcement Act, or Proposition 24, passed with 56% of the vote. That 56% turns out to be well below CPRA's pre-election polling, which came in at 88 or 72 percent.

At first, it looks like the surveillance marketing business managed to make its case to some California voters. But the main argument against Proposition 24 in the information that actually went out to voters was that it doesn't do enough:

Vote NO on Proposition 24 because it was written behind closed doors with input from giant tech corporations that collect and misuse our personal information—while the measure's sponsor rejected almost every suggestion from 11 privacy and consumer rights groups....The real winners with Proposition 24 are the biggest social media platforms, giant tech companies and credit reporting corporations who get more freedom to invade the privacy of workers and consumers, and to continue sharing your credit data. Here's what they won't tell you about the 52 pages of fine print: Proposition 24 asks you to approve an Internet "pay for privacy" scheme. Those who don't pay more could get inferior service—bad connections, slower downloads and more pop up ads. It's an electronic version of freeway express lanes for the wealthy and traffic jams for everyone else.

If you look at that 56 percent from the point of view of one of the big surveillance marketing companies, then yes, only 56 percent voted to have you walk the plank, but a substantial fraction of the other 44 percent voted to have you keelhauled first. For another reality check on how far away from people's norms the direct marketing business has managed to get, take a look at this post from the infamous Unethical Life Pro Tips board on Reddit, where monetizing a list of PII was too unethical even for people who choose to moderate a forum about unethical activity.

ULPT Request: How to make money from an email/snail mail list of contacts? (This post has been removed by the moderators)

California already has one privacy law, the California Consumer Privacy Act (CCPA). The Interactive Advertising Bureau recently surveyed corporate privacy lawyers and found that only about 1-5% of people who are given the opportunity to exercise their privacy rights under the CCPA actually do so. And 60% of the lawyers surveyed said that their companies just go ahead and make CCPA rights available to everyone. You do have to put an asterisk on that number, because it covers only businesses that are big enough and engaged enough to send a privacy lawyer to meet with the IAB, so the actual percentage is probably lower. But it's clear that CCPA is influencing the privacy features being made available even outside of California.

I am one of that 1-5%, and I've been sending out CCPA opt-outs since January. The process is still kind of a pain.

Result of a CCPA Right to Know

The percentage of people who vote for privacy measures is high, but the percentage of people who actually take the time to do the privacy activities enabled by the law is a lot lower. The only thing less popular than creepy nerds is making yourself do creepy nerd stuff. The decentralizing effect comes not only from California voters continuing to pass privacy initiatives until this stuff stops, but also from the laws increasingly making it possible for organizations to take some of these actions on behalf of consumers.

I worked on an Authorized Agent study at Consumer Reports, and it had a response rate that any direct marketer would envy. We had a whole process of recruiting emails, multiple lists, and follow-ups, and we didn't have to use any of it, because the first email to the first list filled up the entire study group. It's incredibly popular with consumers to be able to say, let me push one button and make this stuff stop. Watch for more info on the results of the study.

Authorized Agent services are not some kind of silver bullet for putting end users completely in charge of their personal data. There's a trend among activists to say that we're headed for some kind of privacy or data control utopia, in which people are going to have a high level of control, and that's not realistic. The data isn't worth enough and people don't care enough. But privacy services are a counterweight to centralization trends. As a company gets big enough and high enough in impact to drive meaningful centralization, it's also going to be a high-value target for privacy laws and protection services to balance that out.

This is good news for advertising. Somehow the print publishing business managed to come up with a high-margin, repeatable, sustainable advertising model for newspapers and magazines back when they were breathing the fumes from molten lead all day, and we have not been able to get anywhere near that on the Internet. So we have a huge opportunity now to redesign the market for web advertising in such a way that it is acceptable to the audience, not forcing the kind of consolidation that tends to get pushed back on, while at the same time producing the kind of reputation effects in the audience's head that make advertising worth buying. If the entire reputation graph lives inside a big company, then there can be no brand equity, only bits within someone's centralized score, and that misses out on a lot of economic value. So I'm highly encouraged to be participating in the Improving Web Advertising Business Group, and I'm really looking forward to seeing what kind of models we can come up with. Thank you very much.

Do you have any comments about the pros and cons of closed and open ecosystems? Some of the questions claim that a closed ecosystem is faster and allows for better measurement than the smaller publishers can offer.

Unfortunately in this election cycle we've seen a lot of the limitations of centralization. From the point of view of a large tech company there's constant pressure to lower the costs of functions such as moderation and ad review, so you end up with moderators and ad reviewers doing very stressful jobs that expose them to a lot of, say, terrorist material or child abuse material. And they're up against dedicated misinformation operations, so when people rely on those centralized information sources, they're getting information that's been weaponized by highly motivated bad actors, whether those are financial scammers or political extremists or both. When independent publishers are involved, there's a media ethics point of view or a labor-of-love point of view on their content that tends to make it higher reputation and more reliable, in a way that a big company whose algorithm is constantly being tested and gamed really can't match.

How do we balance the need for privacy with the need for advertisers to know that a user is legitimate?

People have a trust relationship with their web publishers, so there is very personal information that people will share with a trusted content brand that they won't share with some big bad Internet company in general. When I was editor of Linux Journal, some of the most rewarding letters to the editor that we ever received were from people who had been reading Linux Journal in prison. They didn't generally have access to an actual Linux box, but when they got out they were able to get some kind of an IT job. People share personal information with their trusted publishers, writers, and editors in a way that a big company can never get.

The way to use that information appropriately, in a way that lets you say to an advertiser, yes, I have this highly engaged audience, is a very fruitful field. I would look at the Trust Tokens sessions at W3C when those come up. Real people do a lot of real-people things, and real people are good at recognizing other real people. Just to give an example, the 90210 ZIP code in the USA has more Facebook users than people. Publisher networks of trust can show that, yes, this is a real human reader. And they're much more reliable if it's a publisher with a subscriber, or someone who's been interacting with them for a while, than if it's just some big company saying, trust us, look at all these people we have watching videos, you can pivot to video now.
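As a rough illustration of the kind of signal a publisher has and a generic platform doesn't, here is a hypothetical Python check based on subscriber tenure and interaction history. The thresholds and field names are invented for the example and are not part of Trust Tokens or any other spec.

```python
from datetime import date

# Hypothetical publisher-side signal: a reader with a real subscription and a
# history of interactions (letters, comments, renewals) is very likely human.
# The thresholds and fields here are illustrative only, not from any standard.

def looks_like_a_real_reader(subscribed_since, interactions, today=None):
    today = today or date.today()
    tenure_days = (today - subscribed_since).days
    return tenure_days >= 90 and interactions >= 3

print(looks_like_a_real_reader(date(2019, 5, 1), interactions=12))   # True
print(looks_like_a_real_reader(date(2020, 11, 20), interactions=0))  # False
```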

WebInnovationX link on YouTube with panel discussion from this session, plus more events

International coalition of activists launches protest against Amazon

Facebook is deleting evidence of war crimes, researchers say

The downfall of adtech means the trust economy is here

France starts collecting tax on tech giants

So, an Academic Walks Into a W3C Meeting…

Is the internet advertising economy about to implode?

Muslims reel over a prayer app that sold user data: 'A betrayal from within our own community'

Marketers are Addicted to Bad Data