Some surveillance marketing organizations have
suggested adopting a Federal privacy law in order to
preempt the California Consumer Privacy Act.
Preemption would be bad if it actually happened, but
the fact that they're trying for it is the best
endorsement I have ever seen for the California
Consumer Privacy Act. If I wasn't a CCPA fan before,
I am now.
In my humble opinion, preemption is the wrong
direction. Privacy regulation should be complicated
enough to impose significant transaction costs on
database marketing practices. State-level privacy
regulations are a start, but what about county or
city ordinances? User tracking allowed on alternate
sides of the street on different days of the week?
Why would I want to see costs and complexity imposed
on the surveillance marketers? I'm going to leave the
political stuff out for now. From a selfish point
of view, as an individual considering buying stuff,
I am going to get ads, and I'm going to get them matched to me in
Context. Placed on a resource I'm interested in using, like a magazine article or a bus bench.
Search. Matched to search results when I look for a product or a service, like a Yellow Pages ad or a Google search or Maps ad.
Personalized targeting. Matched to me based on something the advertiser knows about me.
On the Internet, many ads are
placed using a mix of these techniques,
and it's hard to split out how a real-world
marketing budget is allocated across them. And
information originally collected based on context can
end up getting used for personalization. But the technical and
regulatory environment affects how much money advertisers
choose to invest in each one.
As the recipient, or potential customer,
the three ad placement methods affect me in
different ways. Ad money allocated to context is
a subsidy for something I want to use, whether
it's local news coverage or an ad-supported site.
Ad money allocated to search is almost as good. I'll
use a search engine more if it gives helpful results,
so search advertising also pays for something I want.
Personalized targeting, though, is a problem.
Instead of paying to support something I want, the
advertiser is paying to reach me as an individual.
The fact that my information is in somebody's
database is a risk to me, but a source of revenue
for the database owner. It's a classic negative
externality problem. Besides, anything spent on this stuff does
not go to pay for the ad-supported resources and
search services I really want.
Ad-supported cultural works have positive
externalities, when they're re-purposed for other
uses. The "Star Trek" advertisers got their money's
worth in 1966-1969, but people are still watching
the show today. Kurt Vonnegut quit his job as a
car dealership manager because he sold stories to magazines.
As a member of the audience for advertising, I win
when I can help move the marginal advertising dollar
from personalized targeting to either context or
search, because a fraction of the money that gets
moved pays for something I want, some of it is likely
to create positive externalities, and none of it gets
spent on creating risks for me. Regulation is a piece
of the solution, and a mess of confusing regulations
could be more effective in raising the relative price of
personalized targeting than a single set would.
People's intuitions about marketing practices
are economically sophisticated.
People often choose to pay attention to ads that carry economic signal.
People are quick to develop banner blindness and other habits to avoid low-signal advertising.
People choose not to invest a lot of time in low-effectiveness ways to protect their personal information, but pick up on measures seen as effective, such as Do Not Call.
People who grow up in ad-heavy economies learn the
economics of advertising like people who grew up
playing ball learn physics.
What we need to see from privacy regulation is twofold:
an increase in the transaction costs of negative-externality advertising practices, and
a credible promise of reducing risks, to attract mass participation.
Privacy regulation has to increase the confusion and
cost on the advertiser side, in order
to balance out the risks and costs imposed on the
audience side, and shift ad budgets.
Four Steps Facebook
Should Take to Counter Police Sock Puppets
New law of headlines: any piece with "Facebook
should" in the headline is not worth reading. Come
on, EFF, the police put undercover officers into
schools and workplaces all the time. Why should a
mass-market social platform be any different? This
is just the high school weed dealer's "if you ask
them if they're a cop, they have to say yes."
Daphne: Moderating Facebook At Barely Minimum
Wage. Then: hardware is difficult, manly work, and software is a straightforward office task we can hire low-paid women for. Now: software is difficult, manly work, and content moderation is a straightforward office task we can hire low-paid women for.
(update 21 Apr 2019: copy edit, add some explanation.)
I'm learning how to make a Progressive Web Application.
Progressive Web Applications are a good
thing because they give people a lot of
the features of mobile apps, but run in
the browser, where it's easy to turn on privacy protection.
Here's how it's going so far. A simple polyhedral
dice roller for Dungeons and Dragons, and similar
games that use many-sided dice.
(Yes, I know real dice are better for a real
game. This is for when you forget your dice but
not your phone, or have a few minutes to prepare
something.) This is mainly designed to run on a
phone, but it does take keyboard input, and if you
can see it here it also works in an iframe.
Here's how I got the help to work the way it does,
with CSS. In its regular place, the keypad is laid
out calculator-style, and on the help page, the
buttons are laid out in a column on the left with
the explanation of what they do next to them.
The keypad and the help page are really the same
content, so each button's help is a p element
that lives right next to the button element.
Turning help on and off doesn't navigate you to a
separate help page, it just moves the keypad to a
new parent element where it is styled differently.
The #keypad div starts off as a child of the #compact
div. Inside of #compact,
the grid is four columns: grid-template-columns: 1fr 1fr 1fr 1fr;
the help text is styled with display: none
the tall button is grid-area: span 2 / span 1 and the wide button is styled with grid-area: span 1 / span 2;
the 0-9 button, only used in help, is also display: none
Moving #keypad to be a child element of #help means that different styling applies.
the grid is two columns: grid-template-columns: 1fr 4fr;
All the help elements are display: block so they show up, and take up positions in the grid.
The tall and wide buttons are single sized.
The individual numbers are display: none and the 0-9 button is display: block.
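Those two sets of rules can be sketched in CSS like this. A minimal sketch: the #compact, #help, and #keypad ids come from the description above, but class names such as .tall, .wide, .digit, and .range are my assumptions, since the real markup isn't shown here.

```css
/* Calculator-style layout while #keypad is a child of #compact. */
#compact #keypad {
  display: grid;
  grid-template-columns: 1fr 1fr 1fr 1fr;
}
#compact #keypad p { display: none; }          /* hide per-button help text */
#compact #keypad .tall { grid-area: span 2 / span 1; }
#compact #keypad .wide { grid-area: span 1 / span 2; }
#compact #keypad .range { display: none; }     /* the combined 0-9 button */

/* Button-plus-explanation layout while #keypad is a child of #help. */
#help #keypad {
  display: grid;
  grid-template-columns: 1fr 4fr;
}
#help #keypad p { display: block; }            /* show help text in the grid */
#help #keypad .tall,
#help #keypad .wide { grid-area: auto; }       /* back to single-sized */
#help #keypad .digit { display: none; }        /* hide individual 0-9 keys */
#help #keypad .range { display: block; }       /* show the 0-9 button */
```

The toggle itself can then be as small as moving the node, something like `document.querySelector('#help').appendChild(document.querySelector('#keypad'))`, and the descendant selectors do the rest.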
Putting the help text next to the button it
applies to should make it easier for me to check that
the help text for each button is there and up to date,
and I don't have to make a separate help page.
Next step: figure out how to make this Do The Right
Thing with a screen reader.
(Update 17 Apr 2019: Yes, I know it works
on Firefox but is messed up on some other
browsers. I made an issue: Issue #29609 | webcompat.com)
I'm making a web thing (for Progressive Web
Application practice) and could use a header image.
I'll just go old school and do some ASCII Art.
Wait a minute, though. All the cool web sites
now are Responsive. So the header should work at
different sizes. So what I want to do is to get
ASCII Art to behave like a regular image. If I make
Ye Olde .Sig Sword
and I want it to look good inside the containing element, I want the text to resize, not reflow.
Kind of like this.
Hi, here we are inside a narrow element. Here is a little tiny sword.
The dashed red border is just to show how big the div is.
This div is wide. Behold my large sword!
The answer so far: put the ASCII Art inside an svg element, like this.
The "white-space: pre" gives me the ultimate image
editing environment: free-form multi-line ASCII art
text within the text element. Yes, I still need
to escape >, <, and & as HTML entities. The fill
sets the color.
One small annoyance is that the text of the ASCII
Art can be selected if the user double-clicks, or
drags, or long presses on a touchscreen. So the
user-select stuff is to prevent that from happening.
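Here's a minimal sketch of the whole arrangement. The art, the viewBox numbers, and the color are placeholders, not the real header:

```html
<div style="border: 1px dashed red;">
  <!-- The viewBox makes the browser scale the svg contents to fit the
       containing div, so the ASCII art resizes instead of reflowing. -->
  <svg viewBox="0 0 200 20" style="width: 100%;
       user-select: none; -webkit-user-select: none;">
    <text x="0" y="14" fill="darkgreen"
          style="white-space: pre; font-family: monospace;">o&gt;&gt;&gt;&gt;&gt;=================&gt;</text>
  </svg>
</div>
```

The user-select rules are the ones mentioned above for suppressing accidental text selection; the prefixed variant is there for browsers that still need it.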
A. G. Sulzberger, publisher of The New York Times, writes,
If you’re reading this essay on an internet browser, it offers a useful example of what tracking looks like at a practical level. Before you had time to read a single word, a number of different companies had already placed a “cookie” or other tracking mechanism on your browser to study your internet use. The Times hosts these trackers for three purposes: to learn about how people use our website and apps so we can improve their experience; to reach readers we hope will subscribe; and to sell targeted advertising.
Read the whole thing. But my inner tech editor could
not be silenced, and had a small suggestion. How about...
If you’re reading this essay on an internet browser,
it offers a useful example of what tracking looks
like at a practical level. Before you had time to
read a single word, your web browser had already accepted
a “cookie” or other tracking mechanism from a
number of different companies
to study your internet
use. The Times hosts these trackers for five
purposes: to learn about how people use our website
and apps so we can improve their experience; to reach
readers we hope will subscribe; to sell targeted
advertising; to leak our readers' personal information to help
our competitors sell ads targeting our audience; and to enable
fraudulent bot traffic to impersonate human visitors.
As soon as I make the web browser, and not the
tracking company, into the subject of the sentence,
it helps explain some of the business reasons
for news sites to focus on privacy. For a site,
examining your own privacy practices is fine, but
it's not where the big wins are. The important part,
for the New York Times and other sites
that need to protect their ad revenue, is to work along
with in-browser tracking protection technology.
Protecting reader data for the readers is mostly the
same as protecting audience data for the ad business.
It's kind of like the situation with email.
Email is a viable marketing medium today
not just because legit email marketers don't spam,
but because email users have good spam filters.
Spam filter technology kept low-value email lists
from devaluing email marketing. In-browser privacy
technology is starting to reverse the process by
which low-value cross-site tracking has been devaluing web advertising.
The Times is
already doing some good service journalism on web privacy.
Next step: set up the paywall to
give extra free articles per month for
anyone running Apple Safari with ITP.
The more reader eyeballs a site can remove from the
race-to-the-bottom eyeball market, the more market
power it has.
Spam filters and legit email marketers saved email as
a marketing tool. Can privacy-protecting browsers and
legit ad-supported sites do the same for the web?
Step 1: Adopt a GDPR Everywhere policy.
This is obviously good. Show me a company in the IT
business that hasn't decided to go GDPR Everywhere,
and I'll show you a company that hasn't finished
writing out all the user stories for how to handle
it when some users or partners are covered by GDPR
and others aren't. Or what happens when you have
been giving a user the creepy second-class privacy
policy for a while and then they go get married
to a European, or go work for a European company,
or something. Basically every IT company is going
to either go GDPR Everywhere or sign up for years
of intricate, expensive legal work and arguments that
they'll eventually give up on.
Step 2: Have products and services interact
with open source, and collaborate and test upstream.
This is also obviously good. Pull open-source Git
repositories and run integration testing and metrics
and whatever on them. We shouldn't just sit there and pull
whatever comes out at the end of the development
process; we should help with the QA and publish peer-reviewed results.
Oh, no, PII! Does that mean we can't work with open source?
Of course not. Open source is
still legal. But we have to comply with
our data subject rights obligations under Article 14.
We have to contact everyone whose PII we hold,
and notify them clearly of what we're doing with it.
And what are we doing with it? We're using it to
do open source QA that feeds into making our product
better. And we have to explain what we're doing in
our Article 14 notification. So the European Union
basically just told us not just that we can send
our elevator pitch to a bunch of software developers
unsolicited, but that we have to.
Someone once remarked (paraphrased) that as
long as there has been a scene, there have been
people complaining that it is no longer the true
scene. (citation needed)
Of course the open source scene is changing, but how
much of that is the unavoidable transformation that a
healthy scene goes through, and how much is fundamental?
The Free Software movement as we know it started
by capturing the tremendous cognitive surplus that
was just there for the taking at universities and
conventional, slothful corporations. Back in the
1980s and early 1990s, barriers to cooperation
were transactional: licensing and communications
technologies. Patches on a mailing list seem like
a high-overhead collaboration method today, but by
the standards of the time, diff(1), patch(1)
and Free command-line tools were transformational.
And of course the classic free software licenses
are practically zero-overhead for participants with
uncomplicated sharing or reciprocity goals.
So, all that cognitive surplus was just sitting there
between classes or TPS Reports or whatever, and the
software freedom scene was set up to capture it.
Before long, Tim O'Reilly and friends branded it
as a software business trend called Open Source,
and the modern software business emerged.
Sounds great—why isn't it continuing to work
like that? Two reasons.
Less cognitive surplus in the world
The kind of university experiences that include substantial cognitive surpluses are not widely available.
The work environment is better at capturing cognitive surplus.
There's a whole "privilege" thread here, but
the main point is that a lot of people who had a
lot of free time got on the Internet and had the
opportunity to participate in open source and other
cognitive-surplus-capturing activities (such as
MMORPGs) a long time ago. New people joining are not
coming in with the same economic and time advantages,
even if they have access to the same creative and collaborative tools.
More competition to capture available cognitive surplus
Open source is no longer the only practical,
low-overhead way to do collaborative projects.
Now people can do
native app stores
software as a service
It's no longer a choice between low-overhead,
low-incentivization (open source) or accepting high
overhead if you want to get paid.
Open source advantages in transaction costs are still
there, so there's no reason for open source to go
away entirely, but people looking for open source
contributors do have to realize that we're going to
have to keep increasing the number of people who
consider open source as a possible valuable use
of their time (remuneration issues are blockers)
or be stuck competing with more outlets for less
already-unmonetized productive capacity.
A designer knows he has achieved
perfection not when there is nothing left to add,
but when there is nothing left to take away.
The perfect targeted advertising business model has
been discovered. I have seen it. Maybe you have
seen it too.
Identify users likely to be enraged by a particular issue.
Run rage-provoking targeted ads on that issue, with a call to action to sign a petition or complete a survey.
Upsell a fraction of the people to make a donation, automatically billed to their credit cards monthly.
Actually deliver the petitions or surveys or
whatever, but keep most of the money for yourself
by paying it to your data-driven marketing agency.
It's data-driven. It's sticky. It's social. It's got
everything that a surveillance marketing business
needs, and nothing it doesn't. No manufacturing. No
support. Not even any drop-shipping.
If you don't get these ads, yay for you. Your
eyeballs are probably too expensive on the ad
impressions market, or you don't seem like the kind
of person who would get enraged about any issue that
they have a landing page for.
The funny thing is that the model works because
people made so much noise about the Citizens
United decision and shadowy political groups.
So the people who are paying into these things will
probably never even feel ripped off.
(Update 23 Apr 2019: I wrote to ask this dealership how they got my info and didn't get an answer to my question.
But they did put me on their email list in time to get the "April Shower of Savings" email so I've got that going
for me which is nice.)
One place I will probably not buy a car: Franklin
Sussex Auto Mall, in New Jersey.
I still have a Facebook account, mostly to keep up
on the ad scene there. I don't check Facebook often
enough for it to be a good way to reach me, though.
See the page footer here for contact info.
When I checked Facebook's page of ads targeting me,
this company is listed under advertisers "Who use a
contact list added to Facebook."
Somehow, Franklin Sussex Auto Mall got a hold of
my email address or phone number, and uploaded it
to Facebook. Have I ever shopped for a car in New
Jersey? No. If I was shopping for a car, would I fly
to New Jersey to buy it and then drive it home? No.
And now that I look at it, when I go through the
advertisers that Facebook lists as having uploaded
my info, most of them are car dealers I have never
visited or contacted. Someone has a pretty good
racket going here. How much are they making from
the car dealers? (Yes, this is a bad thing, because
car dealers could be spending that money to build
positive reputation by funding local news, or other
ad-supported resources with positive externalities,
but we knew that already.)
Maybe when they write the history of the big
social site era, it won't be about some all-seeing
panopticon, but more about a bunch of people in a
highly paid California bubble, mostly young guys
who have been told they're smart their whole lives,
getting out-hustled at a direct marketing business
they don't really care about.
Interesting problem: why do brands fail to protect
customer data when it would be in their interest to do so?
If expected customer retention of tracking-protected
customers is higher, why not invest in tracking
protection for your most profitable customers?
Why don't car insurance companies figure the odds
on customer retention of protected and unprotected
customers the same way they figure the odds on accidents?
It might be because corporations are not
decision-making entities, and online marketing is
the world's longest chain of principal-agent problems.
The value of a database marketer as an individual
on the job market is a function of the number of
database-capturable prospects that the marketer
will help an employer land as customers. If a lot
of Allstate customers are also available on DMPs,
then more VC-funded insurance startups will launch,
and they'll bid up the salaries of database marketers
now working at Allstate.
If Allstate's best customers are protected, then the
VCs invest in something else, the job interviews don't
happen, and Allstate can keep paying their database
marketers what they're paying.
So: principal-agent problems are market design
opportunities. How to structure compensation for
marketers to incentivize customer retention even
after that marketer is no longer employed by the
brand? (People generally want to do the job right;
you just can't keep throwing incentives to do it
wrong at them.)
And how to increase the social rewards of the
choice to allocate marketing budgets towards
positive-externality advertising and away from
negative-externality advertising? Imagine that
a restaurant chain is opening a new location and
wants to reach people there. They have two choices.
Buy ads that pay for local news and cultural content
that is written for people in that area, or they can
buy ads that pay to make those people more depressed,
manipulate their elections, and try to taunt them
into massacring each other. The social rewards for
choosing the first should tend to go higher.
White power assholes are not exactly the smartest
people on the Internet. State-sponsored manipulation
operations have better skills and can use the domestic
guys as human sock puppets.
Pedophiles aren't the smartest people on the Internet
either. Even the "highly technical dark web" pedo
networks are using off-the-shelf tricks that are far
behind what even the most basic adfraud operation
can pull off.
So I'm writing to explain why I'm going to
move your long-form think piece about the
"power" of the Internet "duopoly" to the
probably-never-going-to-get-to-it end of my to-do
list. Let's have a look at just the last 24 hours:
(Come back in 24 hours for more.) You're asking me
to be interested in reading your ever-so-thoughtful
essay about the awesome power of two companies that
have to be like the fifth most influential people on
the Internet, max. Even the Facebook ad integrity
guy is down to asking for free reports of scam ads,
on Twitter. Dude, if the "get
everybody else to do your QA for you" strategy worked,
then we'd all be running desktop Linux.
The "powerful platforms" are a box on the Internet
cable between terrible marketing decisions on one end
and criminals and terrorists on the other. A box
maintained by vaguely creepy but not especially
interesting IT staff. Yes, let's write about CMOs
who attach their brands to heinous shit—what's
up with that? Yes, let's write about the criminals
who end up with the money—that can't be good.
But the companies in the middle are not the story.
Please let me know if the following makes any sense,
and if so I'll turn it into a talk.
It's not a simple game of people vs. companies. In
software, you don't just have evil "software hoarders"
vs. cooperation-minded "users". There are way more
players: OS vendors, hardware vendors, proprietary
ISVs, developers of internal applications, and IT
organizations. At least. I'm sure I forgot some.
But the point is that they don't all have the
same interests. Pretty much everyone who does
software wants everybody else's software to be
open source. So if you look at everybody's Core
vs. Context, people will generally play nice in
open source projects covering whatever their Context
(or commodity, if you want to look at it that way) is.
In user data, you've got the Five Armies: content
creators and their publishers, companies trying
to sell stuff (advertisers/sponsors/signalers),
intermediaries (adtech/platforms), client-side
developers (browsers/privacy tools), and fraud
hackers. A high-reputation brand with a solid mailing
list has completely different user data handling
interests from a social platform—just like a
network chipset manufacturer will have different open
source interests from a proprietary OS vendor.
True believers aren't enough to build on. Some
people are really fired up about Internet ethical
and policy concerns, but most people would somewhat
prefer the right thing, and telling them that you do
the right thing makes them feel better about doing
it and somewhat more likely to do it. But doing the
evil thing is not a deal-breaker.
Loud complaints don't matter (much). Yes, the first
open source release will include a license mismatch,
or somebody's ssh private key, or it won't build
without a tool you didn't include, or something.
And somebody will complain. But the true believers
are useful for QA to guide incremental improvement,
not as gatekeepers to decide if you're in or out.
(And if you fix something that someone is complaining
about in a particularly annoying way, do it quietly.
Eventually they'll make their complaint to a reporter
who will check it out, find the fixed version, and
start ignoring them.)
Hardly any company will get to 100%. Robert Penn
Warren said it best.
Man is conceived in sin and born in corruption and he passeth from the stink of the didie to the stench of the shroud. There is always something.
Even companies that focus on open source have awkward spots
where they can't Do The Right Thing, because reasons.
And most of the code contributed to open source
projects is done on the clock at companies that are
also in the proprietary software business.
I am a well known hit man on deep web.
Someone paid me 1000 USD to beat you and broke your right arm. (Why? I don't know)
I will take 1000 USD more after my client sees your broken arm!
If you send 2000 USD to me, I will cancel the job, and I will give you the name of my client.
Else, I will finish my job asap!
Send the above amount on my BTC wallet (Bitcoin): 3JDLJWW5K6AsP1VBUD1Dgsxk9ydtcdMFvz
As soon as the payment is completed I will receive a notification and a new email with the client's details will follow.
You have 24 hours from now on!
Hold on a minute.
He has a reputation, but he's going to ruin it, and
burn an existing customer, in order to earn 1.5x what
he was originally going to earn from the deal?
Where did this clown learn his game theory?
Nobody would take an established brand, fail to
deliver the product or service that the brand
was originally known for, and leak their good
customer's private info, just to go chase incremental
revenue driven by unproven new technology, right?
Two kinds of web clients that it's a bad idea to serve
a third-party resource to:
Users who have not given consent. We know we can't use their data.
But third parties can peek at those users because their tracking script or pixel is on
the page. If the first party can't have that data why should the third parties get it?
Adfraud bots. Bots come to visit legit sites to build up realistic-looking cookies so they
can cash out elsewhere. Bad idea to help them.
Consent management requires some interaction with
the user, which is also an opportunity to collect data
for assigning a botness score.
Bots will also try to appear to be visitors who have
already given consent, and go get the third-party
resources anyway. This is an interesting problem
because it's a game where the bot and the third party
are on the same side, and the site is on the other.
It's impossible for the CMP to block the bot connection
to the third party, but is it possible to show
that consent was not in place when that connection
happened? Understanding the provenance of the consent
string is going to be important. An extra cookie
containing a digital signature for the consent string?
New CMPs will have an opportunity to build on
knowledge gained from regulator reactions to
first-generation CMPs. But it's more interesting
to think about sustainable advantage for the CMP
than just about regulatory future-proofing. For
example, a good consent management platform will also
tie in to an objection management platform/opt-out
management platform. Objection management platform and opt-out management platform both
work out to OMP—anybody using that TLA?
Attention humans. We are in a life and death struggle
with our enemies, the pathogenic bacteria. Our
scientists have developed secret weapons, the
antibiotics. It is vital to use these weapons only
when they will make a difference, in overwhelming
force, and to leave no survivors. The enemy must
be prevented from developing countermeasures. Do not give them the chance.
Can we just betray our most effective weapons to the enemy if it's in exchange for CHEAP MEAT?
Open source program offices are a thing. What about
customer data protection offices?
A little background: when the open source business as
we know it was getting started, most of the original
concerns about free software in business
were about license compliance. Many people assumed
that all software companies would pursue maximum
restrictions using copyrights and patents, and users
who wanted to use, modify, and redistribute software
would be their adversaries.
Then, Tim O'Reilly and others started changing
the conversation to talk about open source
strategies. How can a small company release
high-impact software by building on collaboratively
developed work? Now, as open source has caught on
all over the software business, it seems obvious that
people think about
business models made possible by open practices
open source companies as market participants competing for users while cooperating on common work
But it was a big mental shift at the time.
Today, a modern open source program office has
to handle issues of license compliance, including
training developers to follow and apply licenses,
and checking the licenses of inbound software for
compatibility. But the big picture is about using
open source for sustainable advantage.
Maybe, today, we're still thinking about privacy as a
compliance problem. Users and regulators on one side,
companies on the other.
But what about a company that has a solid first-party
relationship with a customer? What if the person is
known to open the email newsletter, come in the store,
answer the surveys—you're not in an adversarial
relationship with that person over their data.
The company and the customer are on the same side.
When privacy concerns and adoption of privacy tools
help get the person protected from targeting by some
fly-by-night competitor, that's a win for both.
If you're running a bank, you don't want some
cryptocurrency scam picking off your high-value
customers. Those people's lifetime value is going
to go way down when they're selling off all their
stuff because the bank bought a "custom audience"
social campaign targeting them, and the data leaked.
If the bank had a customer data office thinking a
step ahead, instead of just checking compliance boxes,
it would have considered the data leakage risk along
with the social campaign's possible upside.
Or a healthcare brand might run what looks like a
harmless campaign, but some clever data management
platform can infer medical data from it, and a
"miracle cure" racket uses the data to pick off the
customers. Before you know it the customers stop
filling their prescriptions and start loading up
on colloidal silver or something. A customer data
office would have had the data science skills to
see the risk, and offset it, possibly by offering
the customers a free service to help them opt out of
high-risk data processing.
Even for just a regular product, when a VC-funded
"direct to consumer" competitor comes in, with no
customer list—how do they grow so fast? Buying
targeting data on the open market, because the
existing brands haven't learned to protect their
interests. Where does a brand's interest in customer
data coincide with the customers' own interest in
privacy? Instead of purely focusing on compliance,
a customer data office will understand the risks, not just the rules.
Anyway, software freedom went from a
contentious idea to the source of much value in
a remarkably short time. What if something similar
happens with privacy?
Male impotence, substance abuse, right-wing politics, left-wing politics, sexually transmitted diseases, cancer, mental health....Intimate and highly sensitive inferences such as these are then systematically broadcast and shared with what can be thousands of third party companies, via the real-time ad auction broadcast process which powers the modern programmatic online advertising system. So essentially you’re looking at the rear-end reality of how creepy ads work.
Simply put: users need more protection from tracking....In support of this effort, today we are releasing an anti-tracking policy that outlines the tracking practices that Firefox will block by default. At a high level, this new policy will curtail tracking techniques that are used to build profiles of users’ browsing activity. In the policy, we outline the types of tracking practices that users cannot meaningfully control.
Many good points, but there's one small fix that
could make it more useful. From the original:
It is now the norm—even in the presence of laws clearly forbidding it—for nearly every commercial website we visit to plant tracking beacons in our devices, so our lives can be examined and exploited by companies and governments that extract personal data and manipulate our lives for their purposes. This offends our privacy and diminishes our agency.
Here's a suggested new version edited to be clearer about how browsers work.
It is now the norm—even in the presence of laws clearly forbidding it—for nearly every commercial website we visit to include tracking beacons in their pages, and for our browsers to load and run those beacons, so our lives can be examined and exploited by companies and governments that extract personal data and manipulate our lives for their purposes. When our own browsers work against our interests, this offends our privacy and diminishes our agency.
Please don't assign all the work to the site.
It's counterproductive to ask the site
to be the one to bear all the costs of privacy protection.
The site is the player with the least economic power
and the least freedom to change. Web publishers and
brands failed to protect their audience and customer
data and are now, unfortunately, kind of stuck.
Because third parties control the audience information
that's needed in order to make ads saleable, no one
web site can unilaterally switch off the data flow
that makes their business model work.
On the browser side, though, it's different.
Browser developers know that they can increase user
satisfaction, and get users to spend more time in
the browser, by shipping functionality that makes
users feel safer.
This stuff needs to get fixed and browsers have
the motivation and skills to do it. Let's focus on
productive next steps by the parties that can
afford to change. The result will be a new web
advertising business that works better for sites and users.
Sites can't take the big step to remove tracking
scripts entirely, but there are a few things that
sites can do to assist with ad reform.
Fix any "turn off your ad blocker" scripts to detect
ad blockers only, and not falsely alert on privacy tools.
If you maintain a privacy tool, offer to do a
campaign with the site. Privacy tool users are
high-quality human traffic. Free or discounted
privacy tools might work as a subscription
promotion. Where's the win-win?
Asking a site to walk away from money with no
credible alternative is probably not going to work.
Asking a site to consider next steps to get out of the
current web advertising mess? That might.
And from Thomas Baekdal on Twitter:
Interesting. But also, as I have said many times before. Imagine what would happen if the French regulators took the same look at publishers. My ongoing advice to publishers is to look at these cases as early indicators. We will be next in line to be looked at. https://t.co/7fTRottHAb
European regulators are paying attention to consent
management UX, and the current approach, which is
basically just click OK to make this annoying
dialog go away (and consent to use of your data by
70 companies you've never heard of), is looking
less and less likely to work.
The Framework standardises the presentation to users’ third-party data processing requests that require “informed” consent for data processing. The Framework enables “signaling” of user choice across the advertising supply chain. It is open-source, not-for-profit with consensus-based industry governance led by IAB Europe with significant support from industry parties and the IAB Tech Lab, which provides technical management of the open-source specifications and version control.
I'm a big supporter of the Transparency and Consent
Framework, if you use it right. Consent UX
is full of €50 million mistakes—but the
consent data approach of the Transparency
and Consent Framework can still be good if you put a
decent UX on it. That's what Global Consent Manager
aims to do.
Global Consent Manager applies the same
incremental approach that social and collaboration
sites, such as LinkedIn and GitHub, use. LinkedIn
doesn't ask you to build a complete profile and work
history before you can use the site. Instead, you get
to make an account and then get prompted to add more
of your info as you use it. Global Consent Manager
borrowed that idea, in a basic form. Instead of
asking for consent for everybody to use your data
everywhere before you even read the article, with
Global Consent Manager you start off in a no-consent
state. A consent string with no consent
is a valid consent string, and Global Consent Manager
will auto-generate one for you on your first visit
to a supported site.
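As a sketch of the idea: a "no consent" string is just a valid encoding in which every purpose bit is zero. This is a simplified illustration, not the real IAB TCF bit layout (real TCF strings also carry CMP id, timestamps, vendor ranges, and more), and `encode_consent`/`decode_consent` are hypothetical names.

```python
import base64

# Simplified illustration (NOT the real IAB TCF bit layout): encode a
# consent choice as a 24-bit purpose bitfield, base64url-encoded.
NUM_PURPOSES = 24

def encode_consent(purposes_allowed):
    """purposes_allowed: set of 1-based purpose ids the user consented to."""
    bits = 0
    for p in purposes_allowed:
        bits |= 1 << (NUM_PURPOSES - p)  # purpose 1 = most significant bit
    raw = bits.to_bytes(3, "big")        # 24 bits -> 3 bytes
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode("ascii")

def decode_consent(s):
    raw = base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))
    bits = int.from_bytes(raw, "big")
    return {p for p in range(1, NUM_PURPOSES + 1)
            if bits & (1 << (NUM_PURPOSES - p))}

# A "no consent" string is still a well-formed consent string.
no_consent = encode_consent(set())
assert decode_consent(no_consent) == set()
```

The point is that "no consent" is a first-class, parseable value, so a consent manager can hand it to a site on first visit without any dialog.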
Later, if you show that you're interested in the site,
the site can ask for more consent. This approach
gives a sustainable advantage to sites that users
choose to trust, and limits the ability of sites
whose traffic comes from deceptively obtained clicks
to run saleable ads.
Results from the user research tend to indicate
that users spend significantly more time on a
news task when they get the Global Consent Manager
experience, compared to the click OK to consent
to everything default.
The standardization work for consent data, now being
done at the Transparency and Consent Framework, really
pays off if you put a sensible (more LinkedIn-like)
UX on it.
Our next step is to extend server-side consent and
data management, with a view to facilitating the needed
data collection for publishers trusted by users to
run high-value ads, without enabling data practices
that fail to comply with regulations or with user
norms. Please let me know if you're interested in
participating in, or reviewing, future results.
Leveraging data to make more informed ad targeting decisions is a breakthrough versus previous methods where ads were un-targeted. Personalized ads are a win for all parties. [emphasis added] It is better for:
Users (connects them to more interesting and relevant ads)
Advertisers (results in higher return on investment)
Publishers (delivers higher CPMs and increased revenue)
From the publisher point of view, behavioral
targeting creates near-infinite saleable ad
inventory on low-value and fraudulent sites, and
forces publishers to contend with those sites for ad
money. For users and publishers, the less behavioral
targeting the better. But what about for marketers?
Isn't behavioral targeting a win for them?
Personalised advertising may be one thing but getting people to respond to even micro-targeted ads is a whole other ball game. However, analysis of 3.1 million ad exposures shows that such adverts generate low click through rates (CTR). Furthermore, some of the responses to such ads are counterintuitive – with a higher CTR coming from ads mismatched to personalities and lower vs. the overall industry average for Facebook ads.
But if micro-targeting is arguably so ineffective, why do so many marketers use it? Sharp and Danenberg highlight several reasons:
Marketers often do things based on theory/logic rather than evidence. The worst myths, the longest lasting, are those that sound plausible.
Micro-targeted campaigns can boast of high ROI, largely because they are so small, reaching people who had a high likelihood of buying anyway. Marketers see the high campaign ROI from micro-targeting but fail to realise that the overall return to the company may well be lower.
There might be another reason, though. Maybe part of
the problem is that marketing science is hopelessly
mixed up with investor relations. After all, adtech
firms and ad agency groups are publicly traded
companies. And creative ad work, hiring writers
and artists and marking up what you pay them, is the
kind of business model that stock markets get bored
with. Margins are low, you can only grow as fast as
you can hire, and your assets can quit and go work somewhere else.
Choosing to place an ad in a quality context is more effective,
but again, it doesn't scale. If your business
is putting good ads in good places then the
people who do good work have market power.
But psychographic models and the underlying
data sets are more investor-friendly. Even
if it takes torturing the data
to make adtech look effective. Mathemagickal woo-woo
is scalable, more like the intangible assets of a
software company. Markets see promise of big margins
and high revenue/employee.
How much do investor-focused messages about the
effectiveness of behavioral targeting companies
interfere with marketer-focused messages about the
effectiveness of behavioral targeting in campaigns?
A lot of people have come up with the idea of a
system that lets readers of a web site pay to avoid
the advertising. This is obviously bad, wrong and
dangerous, for several reasons.
The model assumes that advertising is irredeemably awful, and walks away from
future revenue that would be made possible from fixing advertising. (So far,
Online Ads Haven't Built Brands,
but what if they could?)
The model creates incentives to make advertising worse. Ever since we started
running the auto-playing video campaign for MIRACLE ASS FUNGUS CURE, our subscriptions
are through the roof! Bonuses for all!
(a) Because the ads on news sites will keep getting worse and worse, non-subscribers will get more
and more of their news from biased sources that re-report and spin it.
(The most common sound effect on Rush Limbaugh's radio show, last I heard it, was him
flipping the pages of the New York Times as he selectively quoted from it.)
(b) Or, because the ads keep getting shittier
and shittier, because that's the best way to incentivize people to pay to
get out of them, ad blocking keeps going up.
As soon as site owners realize that number 3 is
growing, and won't go away, they'll start lobbying
for extensive copyright expansion laws that
limit fair use, or create new exclusive rights, or apply DRM to web pages to
limit ad blocking, and, as a side effect, restrict other software that gives users control over
their web experience. Probably all three.
Freedom-hostile companies will repurpose these
laws for censorship and break the Internet.
I know that "this stupid idea will break the Internet"
posts are everywhere, but I just wrote one more.
Keeping the ads just high enough in signal, and low
enough in resource suckage and privacy/security risk
that they mostly aren't worth blocking, is just one of
the many things that has to come out somewhere close
to right in order to prevent a bunch of bad stuff.
One of the great things about Firefox is
the ability to customize with extensions. A MiG-15 can climb and turn faster
than an F-86, and a MiG-15 is more heavily armed.
But in actual dogfights, the F-86 won 9 times out of 10.
Part of that is training, but part is that
the Soviets used data to build for the average
pilot, while the USA did a bigger study of pilots'
measurements and recognized that adjustable seats and controls
were necessary. Even in a group of pilots of average
overall size, nobody was in the average range on
all their measurements. Here is what I'm
running right now.
Awesome RSS. Get the RSS button back. Works great with RSS Preview.
blind-reviews. This is an experiment to help break your own habits of
bias when reviewing code contributions. It hides the contributor name and email when you first see the code, and you can reveal it later.
Similar to the old "Self-Destructing Cookies". Cleans
up cookies after leaving a site.
Useful but requires me to whitelist the sites where
I want to stay logged in. More time-consuming
than other privacy tools. This is a good
safety measure that helps protect me while I'm
trying out the new privacy settings in Firefox
as my main data protection tool.
Copy as Markdown. Not quite as full-featured as the old "Copy as HTML Link" but
still a time-saver for blogging. Copy both the page title and URL, formatted as Markdown, for pasting into a blog.
Global Consent Manager, which provides an improved
consent experience for European sites. More info coming soon.
HTTPS Everywhere. This is pretty basic. Use the encrypted version of a site where available.
Link Cleaner. Get rid of crappy tracking parameters in URLs, and speed up
some navigation by skipping data collection redirects.
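The general idea behind this kind of link cleaning can be sketched in a few lines. This is a hypothetical illustration, not Link Cleaner's actual code, and the parameter list here is just a sample of well-known tracking parameters.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Illustrative sketch of what a link-cleaning tool does: drop well-known
# tracking parameters from a URL before following or sharing it.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign",
                   "utm_term", "utm_content", "fbclid", "gclid"}

def clean_url(url):
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in TRACKING_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(kept)))

assert clean_url("https://example.com/a?id=7&utm_source=news") == \
    "https://example.com/a?id=7"
```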
remembers the setting by site and defaults to
blocking. Real applications are fine, but this is for
handling sites that cut and pasted a "Promote
your newsletter to people who haven't even read
your blog yet" script from some "growth hacking" site.
is surprisingly handy for removing domains that are
heavy on SEO but weak on actual information from
search results. (The Ministry of Central Planning
at Google is building the perfectly-measured MiG
cockpit, while extension developers make stuff that fits the individual user.)
The other missing piece of the RSS experience.
The upside to the unpopularity of RSS
is that so many sites just leave the full-text
RSS feeds, that came with their CMS, turned on.
If adtech consent is such a good deal for users,
will Online Behavioral Advertising (OBA) on the web
even be a thing once we fix the long-standing browser
bugs that allow users to be tracked without their knowledge?
After all, OBA advocates have been trying
to sell people on the benefits of being tracked
for as long as ads have been obviously "following"
people from one site to another and raising
concerns. So why
won't regular people learn that this is all for
their own good? By now we should have a comfortable
pro-OBA user base, right? Instead, there's still
a stubborn majority against having your activity
on one site follow you to another one, and when
PERFECTLY REASONABLE WE ARE DOING THIS FOR YOUR OWN
GOOD WOULD YOU SHUT UP ABOUT PRIVACY AND JUST CONNECT
AND SHARE WITH BRANDS YOU LOVE ALREADY ad tracking
practices do make the news, it's as part of a Holy
shit, you have WHAT information on my kids, you sneaky creeps? story.
Where is ad reform going? What's probably going
to happen is that browsers are going to support the
OBA that users give their informed consent to, but
there's just going to be less ad inventory available
to buy that way, because only about 1/3 of browser
users approve of cross-site tracking.
Browsers have the opportunity to improve consent UX
to fill in the gap between their users' widely held
norms about how personal information is used and the
uses that people are willing to "I Agree" to or be
"OK" with in order to make a pop-up go away. Today,
if you're building a user interface, you have the opportunity to help close that gap.
That last point is not just me nerding out over obscure economic points.
Everyone who is successful in web advertising
knows that you have to defend user data
once you have it. Facebook famously closed down app access to social
data. Amazon stopped sending email receipts, to keep email services from
targeting people with ads based on their Amazon shopping habits. Google’s Ads
Data Hub restricts how advertisers can combine Google and non-Google data.
Why are web publishers—the set of
players who are most hurting here—the
exception to the defense rule? It could be
because of the technical landscape. Incumbent
tech companies have built publisher-hostile
web clients in order to advantage some kinds of ads.
As a side effect we also have a brand-hostile
environment and a brand crisis. (Targeted advertising
media are designed for direct response and deceptive
ads, and don't work for the 60% of ad spending that
needs to go to brand building.)
The big opportunity now is for reputation-based players—publishers and
brands—to use the defensive opportunities now afforded by browser privacy
improvements and by privacy regulations.
Part of this could be an "objection amplifier" to
balance out the "consent amplifier" effect of bad
consent UX. If I go to a publisher site, or brand
site, and they give me a meaningful choice on how
my data is used, the publisher is putting themselves
at a disadvantage if they respect my decision while
others get deceptive consent.
So how to handle the built-in disadvantage for honest consent requests?
If you capture a solid "I do not consent" from a
person, then don't waste it. Ask the person for a
digital signature on an objection to every DMP on the
Lumascape, and send it to the DMPs. And log it, and
when you're selling ads, make the point that "We
have people on this site that the DMPs don't have,
and aren't allowed to have. Want to reach our audience?
Talk to us."
Running a third-party processor to enable this is
one of the biggest opportunities for the post-creepy
Lumascape. Needs a TLA, though. OAP? Objection Amplification Platform?
Amazon stopped sending email receipts, to keep email services from targeting people with ads based on their Amazon shopping habits.
Google's Ads Data Hub restricts how advertisers can combine Google and non-Google data.
Facebook announced it would eliminate all third-party data brokers.
What do these companies have in common? They're
marketing's winners. Meanwhile, publishers festoon
their sites with consent management platforms that
capture consent for all surveillance marketing,
everywhere. They'll even get consent for tracking by
third parties that the publisher doesn't even use.
Why play to lose? If you run a trusted site in a
position to get consent and prove you got it, you
want fewer other companies getting that user's data, not more.
So the obvious counterpart to the consent amplification carried out by CMPs is some kind of objection amplification.
If the user clicks something other than "OK" on the
GDPR consent dialog, don't just set their consent to
zero. That non-consenting user needs to have their
voice heard, not just filed away. Ask: Do you want
to deny tracking just by our site, or by all these
third parties? Then show them a list of Lumascape
firms, most of which look like they were named not
by branding experts, but by some guy in Florida who
mainly communicates by "finger guns". When the
user says, hell yeah, I don't want to be tracked
by all those companies either, then that's when the
objection amplification starts. Generate an Article
21 objection for every company you can think of,
get the user to sign off on them, and send them out.
(This is why it has to be a platform. Could be quite
a bit of verbiage here.)
Now the record of objections sent is a piece of
data for ad sales. "Buy ads here because x% of our
users can't even legally be targeted by those other companies."
Internet platform companies play defense all the time.
Many researchers who study human behavior on the
Internet will point out that calls to "just delete
Facebook" are unrealistic for many users. A lot of
people depend on the company for family connections,
health-related support groups, or even employment.
Which, of course, should make deleting Facebook an
easy win from an economic signaling point of view. If
you can credibly stay off Facebook, you're signaling
that you have the skills, wealth, health, and
social capital not to need it. What could be better?
Why aren't more people signaling their fitness through
conspicuous lack of Facebook dependence? Two reasons.
If you delete your account, it's too easy for others to make fake accounts imitating you, so it looks like you're on there anyway.
The decision to #deleteFacebook is easily reversible. You could easily come sneaking back.
So the thing to do if you want to get signaling power
out of quitting Facebook is to not just delete your
account, but do two things.
keep your account live so that it keeps your name "squatted on" in Facebook-space.
take a credible action to lock yourself out (that is in compliance with the Facebook ToS, of course).
How about this? Get a "burner" SIM, make that the
one phone number for your account, then let me hang
on to the SIM for you. I'll periodically post a list
of everyone whose Facebook account is associated
with a SIM I hold, but I won't be able to log in.
I'll charge a monthly storage fee to keep the SIM
for you, but it only comes due when you reclaim it.
Why do advertisers keep sponsoring illegal activity on
big Internet platforms such as YouTube and Facebook?
Platforms are running so many copyright-infringing
copies, cryptocurrency scams, state-sponsored campaign
finance violations, and even functioning as the IT
department for genocide (d00d wtf?) that it's hard
to understand why so many good brands are still there.
A big part of the problem
is that even though platforms do
invest a lot of time and money in removing illegal activity,
the advertisers never know. If you're a CMO making
decisions about where to spend your ad budget, your
experience of a highly customized social platform is
completely different from what most of your brand's
customers see. As a CMO, you see content from people
in your social and professional circles, and ads from
high-bidding advertisers who want to sell high-margin
items such as conferences and SaaS subscriptions.
You don't see as much of the bad stuff. Advertisers
pay the bills for illegal activity because they lack
the information they would need to stop doing so.
It's time for Internet platforms to stop
hiding this information.
One common growth hacking pattern on social and
collaboration sites is to build user profiles
incrementally. Capture just enough info to get the
user logged back in, then get them started using
the site. As they get into it, prompt them to fill
in more and more profile information. You have
probably seen this on new sites where you have to
make an account.
People don't want to give up a bunch of information
up front before they see how good the site is. And,
I suppose, if the site is good enough that the person
thinks they'll spend more time on it, they're more
likely to provide correct information than all the
residents of "asdf" born on January 1, 1970.
But news sites don't take this approach. Instead of
trading a little value for a little information,
repeatedly, you get one big dialog asking you to
give up all your information before you even read
the first story.
Does the same incremental approach that
applies to data collection for social and
collaboration sites also apply to news sites?
Preliminary results from Global Consent
Manager tend to indicate that yes, it does.
So here's the bargain. Right now, the web ad
business is set up to bid on ad impressions that
come with third-party data, way more than for
impressions without third-party data. So a trackable
bot impression on a fraud site can produce more ad
revenue for the fraud operator than an impression from
a privacy-sensitive user running Firefox Nightly or
Apple Safari produces for a legit site.
Yes, even though the privacy-sensitive user is more
likely to be human and interested in buying something
related to the topic of the site.
The opportunity to get a bargain is: instead of
relying on conventional programmatic ad buying, if
you do a little extra work to understand the audience
of specific sites, you can reach more of the humans
you're interested in.
Not every Firefox user who shows up on the Road
and Track site is going to buy a car this
year, but $1 worth of ad impressions there is
likely to reach more human car buyers than $1
spent programmatically—because you get a
higher fraction of humans for a lower price. (And third-party
data on who's a likely car buyer is bogus anyway, but that's another story.)
This opportunity is likely to go away as more agencies
figure it out, but right now it's a great chance to
get humans cheaper than bots.
If you block the tracking cookies that advertisers use to decide which ads to target you with, you'll start getting the low-budget, low-quality ads that show up in the absence of the targeting data that marks you as a desirable customer.
Before I turned on Enhanced Tracking Protection,
I was getting ads for stuff like cloud computing
services and luxury SUVs. Now, with Enhanced Tracking
Protection, am I going to get more ads for FREE
nutritional supplements? You know, the offers where
you put in your credit card info for shipping
and then they keep billing you even after you try to
cancel? Or maybe I'll get offered a great deal on a
for-profit college program, or some predatory finance!
I can't wait.
It might be an inconvenience for
me to start getting the ads that
people get when they're too broke, or just too
hard to track, for high-bidding advertisers to care about reaching
them. But the real problem is that legit sites are
running those ads in the first place.
signal-carrying model: all visitors to the same site on the same day see the same ad(s).
brand safety: advertisers choose sites, and site owners approve ads.
fraud resistance: ads sell by the day instead of by impression or click.
incentive to discover and support new sites: the first advertiser to express interest in a site gets to run their ad for free until another advertiser places a bid.
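The pricing rule in that last point can be put into a tiny model. This is my own simplification of Project Wonderful's pay-per-day auction, not its actual implementation; `slot_state` is a hypothetical function name.

```python
# Minimal model of a pay-per-day ad slot: the highest bidder's ad runs,
# and they pay the runner-up's bid per day -- so a sole bidder runs free
# until another advertiser places a bid.
def slot_state(bids):
    """bids: dict of advertiser -> bid in dollars/day. Returns (winner, price)."""
    if not bids:
        return None, 0.0
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

# First advertiser to show up runs for free...
assert slot_state({"indie-game": 2.00}) == ("indie-game", 0.0)
# ...until a second bid arrives and sets the price.
assert slot_state({"indie-game": 2.00, "webcomic": 1.50}) == ("indie-game", 1.50)
```

The second-bid pricing is what creates the incentive to go discover undervalued sites before anyone else does.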
But there were still some problems.
Project Wonderful was just as vulnerable to ad blocking as regular adtech.
The audiences of sites using Project Wonderful were just as vulnerable to tracking as everyone else.
The second one is especially important. Why spend the
effort to pick, and run ads on, multiple independent
sites in order to get your ad in front of the right
people, when you could just sign up for some user
tracking scheme? The people who control marketing
budgets need a problem, a trend, and a story in order
to shift money from one place to another.
What would it take to borrow and build on
the good parts of the Project Wonderful model
while taking steps to fix the problem of data leakage?
Avoid privacy-focused ad blockers by accepting the EFF DNT policy. Third parties that can pass EFF's Privacy Badger also tend to stay off other blocklists.
Offer unlimited CNAMEs, also to help beat list-based blockers.
Don't participate in paid whitelisting as a network, but individual sites that choose to do so could, for their own specific CNAME.
Good metrics on tracking protection adoption by the audience. Show advertisers that these users are hard to reach another way.
Include reverse tracking walls, tracking detection roadblocks, and A/B test alternate "turn off your ad blocker" messages to motivate users to get protected from cross-site tracking.
Limited, user-permitted data collection with clean consent management.
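On the first point: EFF publishes its Do Not Track policy as a text file that adopters serve at `/.well-known/dnt-policy.txt`, and Privacy Badger checks for it by comparing the file's hash against the published policy versions. Here's a sketch of that check; the hash set and function names are illustrative, computed locally for the example, not EFF's real published hashes.

```python
import hashlib

# Sketch: verify a third party's adoption of EFF's DNT policy by hashing
# the text it serves at /.well-known/dnt-policy.txt and comparing against
# known-good policy versions.
def policy_hash(text):
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def complies(fetched_text, known_policy_hashes):
    return policy_hash(fetched_text) in known_policy_hashes

# Locally computed stand-in for a published policy version.
sample_policy = "Do Not Track Compliance Policy\nVersion 1.0\n..."
known = {policy_hash(sample_policy)}
assert complies(sample_policy, known)
assert not complies(sample_policy + " edited", known)
```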
The hard part for an independent ad network is to
offer small advertisers something they can't get from
Google or Facebook. Access to a protected audience?
The farce of consent as currently deployed is probably doing more harm as it gives the misimpression of meaningful control that we are guiltily ceding because we are too ignorant to do otherwise and are impatient for, or need, the proffered service. There is a strong sense that consent is still fundamental to respecting people’s privacy. In some cases, yes, consent is essential. But what we have today is not really consent.
So long as a data collector can overcome sampling bias with a relatively small proportion of the consenting population, this minority will determine the range of what can be inferred for the majority and it will discourage firms from investing their resources in procedures that help garner the willing consent of more than the bare minimum number of people. In other words, once a critical threshold has been reached, data collectors can rely on more easily observable information to situate all individuals according to these patterns, rendering irrelevant whether or not those individuals have consented to allowing access to the critical information in question. Withholding consent will make no difference to how they are treated!
Is consent management even possible? Is a
large company that seeks consent from
an individual similar to a Freedom
And what's going on with Judge Judy and skin care
products? There are thousands of skin care scams on Facebook
and other places on the internet that falsely state
that their product is endorsed by celebrities. These
scams all advertise a free sample of their product
if you pay $4.95 for the shipping. Along the way,
you have to agree to the terms and conditions....The
terms and conditions are only viewable through a
link you have to click, which most of these people never read.
Or Martin Lewis and fake bitcoin ads?
He launched a lawsuit in April 2018, claiming
scammers are using his trusted reputation to ensnare
people into bitcoin and Cloud Trader "get-rich-quick
schemes" on Facebook.
The problem is that ad media that have more data,
and are better at facilitating targeting, are
also better for deceptive advertisers. Somehow an
ad-supported medium needs consent for just enough
data to make the ads saleable, no more. As soon as
excess consent enters the system, the incentive to
produce ad-supported news and cultural works goes
down, and the returns to scamming go up.
Another one of those employee happiness surveys
is out. This kind of thing always makes me wonder:
what are these numbers really measuring?
It seems like happiness ratings by employees would depend on two things:
expected cost of retaliation for low scores
expected benefit of management response to low scores
The expected cost of retaliation is the probability
that an employee's ratings will be exposed to
management, multiplied by the negative impact that the
employee will suffer in the event of disclosure. An
employee who believes that the survey's security has
problems, that management will retaliate severely in
the event of disclosure, or both, is likely to assign
high scores to management.
Some employers make changes in compensation or
working conditions when they score poorly on
happiness (or employee engagement) surveys. If
an employee believes that management is likely to
make changes, then the employee is likely to assign
low scores in areas where improvement would have the
greatest impact on them.
An evil company where management makes an
effort to de-anonymize the happiness survey results,
retaliates against employees who give low scores,
and will not make changes to improve scores, will
appear to have high employee happiness.
A good company where management does not
retaliate, and will make changes in response to low
scores, will appear to have low employee happiness.
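Those two scenarios can be put into a toy model. This is entirely my own illustration with made-up numbers, not a real survey methodology; the idea is just that fear of retaliation shades the reported score up, and a credible management response shades it down.

```python
# Toy model of the incentives described above: an employee with true
# satisfaction `honest` (on a 1-10 scale) adjusts their reported score.
def reported_score(honest, p_disclosure, retaliation_cost,
                   p_response, response_benefit):
    inflation = p_disclosure * retaliation_cost   # fear pushes the score up
    deflation = p_response * response_benefit     # hope pushes the score down
    return max(1, min(10, round(honest + inflation - deflation)))

# Evil company: likely de-anonymization, harsh retaliation, no response.
assert reported_score(3, 0.9, 5, 0.0, 3) > 3   # unhappy staff reports happy
# Good company: anonymous survey, credible response to low scores.
assert reported_score(7, 0.0, 5, 0.8, 3) < 7   # happy staff reports unhappy
```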
Of course, this all changes the more that people
figure out that getting low happiness scores means
that you have responsive management.
privacy tools, such as blocklist-based protection
tools and Privacy Badger (behavior-based protection),
that block some ads
as a side effect. This is a small category now compared to ad blocking in general,
but is likely to grow as browsers get better
at privacy protection, and try new performance
features to improve user experience.
deceptive blockers, which either are actual
adware or operate a paid whitelisting scheme.
The best-known paid whitelisting scheme is
Acceptable Ads from Adblock Plus, which is
disclosed to any user who is willing to scroll down
and click on the gray-on-white text on the Adblock
Plus site, but not anywhere along the way of the
default extension install process.
So any ad blocker detector is going to be hitting
at least three different kinds of tools and possibly
six different groups of users.
People who chose and installed a full-service blocker
People who chose to protect their privacy but did not specifically choose to block ads
People who may have chosen their browser for its general privacy policies, but got upgraded to a specific feature they're not aware of
People who chose to block ads but got a blocker with paid whitelisting by mistake
People who chose to "install an ad blocker" because it got recommended to them as the magic tool that fixes everything wrong with the web
People who are deliberately participating in paid whitelisting. (Do these people exist?)
Sometimes you need to match the message to the
audience. Because sites can use detection tools to
get a better picture of what kind of protection,
or non-protection, is actually in play in a given
session, we can try a variety of approaches.
Is silent reinsertion appropriate when the ad
is delivered in a way that respects the user's
personal information, and the user has only chosen
a privacy tool but not an ad blocker?
When the user is participating in paid
whitelisting, can a trustworthy site do better
with an appeal based on disclosing the deception?
For which categories of users are the conventional,
reciprocity-based appeals appropriate?
Where is it appropriate to take no action in a user
session, but to report to a browser developer that
a privacy feature is breaking some legit data collection?
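One way to sketch how a site might route these decisions (the category names and canned responses here are my own illustration, not a real taxonomy or product):

```python
# Illustrative mapping from a detected protection category to a site's
# possible response, following the questions above. All names are made up.
RESPONSES = {
    "privacy_tool_only": "silent reinsertion of a privacy-respecting ad",
    "paid_whitelisting": "appeal that discloses the whitelisting deception",
    "full_service_blocker": "conventional reciprocity-based appeal",
    "breakage_suspected": "no action; report breakage to the browser developer",
}

def choose_response(category):
    # Default to doing nothing rather than nagging an unknown user.
    return RESPONSES.get(category, "no action")

print(choose_response("privacy_tool_only"))
print(choose_response("unknown"))  # no action
```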
updated 18 Oct 2018: add dates and locations, reorder.
Good news. It looks like we're having a
consent management mini-conference as part of Mozfest
next month. (I'm one of the organizers for the Global Consent Manager session, and plan to attend the others.)
The audience is engaged with an activity where they're given clauses from a curated list drawn from real T&Cs, and they express whether each clause should have been mentioned outright or not. We have a discussion about digital privacy and ways to curb exploitation. Visitors try out our browser plug-in that filters out the most important clauses from any T&C.
This workshop offers a holistic space to create digital tools and environments in which consent underlies all aspects, from the way they are developed, to how data is stored and accessed, to the way interactions happen between users. Prototyping consent into our tools will make them more fair and unbiased. Using a specifically designed prototyping loop, teams quickly hypothesize, develop, test, and assess ideas for consentful data prototypes.
We will discuss how consent management on the web works today, and the relationship between user privacy and reputable content providers. Web users face a confusing array of data sharing choices, and click fatigue can lead to poor user experience and possible inadvertent selection of options that do not match the user’s privacy norms.
The sole focus of the Fastblock feature is to restrict the loading of trackers. It monitors trackers waiting for the first byte of data since the start of navigation of the current tab’s top level document. If this is not received within 5s, the request is canceled. If any bytes are received, the 5s timer is stopped. In some of the experimental branches, a few tracker requests are whitelisted, and do not have this monitoring. These include resources known to cause breakage, such as essential audio/video, and commenting platforms.
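The timeout policy described there can be sketched roughly like this (a toy model in Python, not Firefox's actual implementation; the whitelist handling is simplified):

```python
import asyncio

async def fetch_tracker(first_byte_delay):
    # Stand-in for a tracker request; completes when the first byte arrives.
    await asyncio.sleep(first_byte_delay)
    return "loaded"

async def load_with_fastblock(first_byte_delay, timeout=5.0, whitelisted=False):
    """Cancel a tracker request if no first byte arrives within `timeout`."""
    if whitelisted:
        # Resources known to cause breakage (essential audio/video,
        # commenting platforms) skip the monitoring entirely.
        return await fetch_tracker(first_byte_delay)
    try:
        return await asyncio.wait_for(fetch_tracker(first_byte_delay), timeout)
    except asyncio.TimeoutError:
        return "cancelled"  # no bytes within the window: request is dropped

# Short delays used here so the demo runs quickly; Fastblock's window is 5s.
print(asyncio.run(load_with_fastblock(0.01, timeout=0.1)))  # loaded
print(asyncio.run(load_with_fastblock(0.5, timeout=0.1)))   # cancelled
```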
Just going by basic economics, ads placed with more
information about me are going to carry less signal
and more deception than ads placed only by what page
they're on. Now I'm wondering how well "slow loading
ads" correlate with "deceptive ads". Are slow loading
ads slow because they depend on a bunch of complex
RTB stuff? Can less creepy ads be faster?
Turns out that fast-moving, hungry misinformation
operations are better at YouTube than YouTube is.
This is not too much of a surprise. Resting and
vesting makes you stupid. It's like a resource curse
for code. Sometimes I think I
should start an imposter syndrome cure sanatorium.
Main activity for patients will be watching the
sites built by the so-called tech elite. (Look who's
still typing = sometimes instead of == or ===.)
Anyway, what do you do when you want to send someone
a link to a YouTube video, but you don't want the
engagement anti-features to kick in?
How about addressing the problem on the client side?
Here's an experimental Firefox extension
that will remove the recommended videos sidebar
and keep you on the same video even if the pwned
engagement algorithm tries to auto-play a
different one. So if I send a family member a link,
I can have fewer worries that they'll end up in a recommendations rabbit hole.
Using the “Custom Audiences from your Customer List” product specification, advertisers can upload certain customer lists to Facebook – based on e.g. emails, phone numbers, Facebook user IDs or mobile advertiser IDs – from their CRM database, which are first ‘hashed’, meaning they are transformed into checksums (hash values), and compared with other checksums generated from Facebook user data. If the checksums match, then existing and potential customers can be deliberately shown targeted ads on Facebook, Instagram and in apps and on mobile websites via Audience Network. Facebook also provides this feature for retailers, calling it “Offline Custom Audiences”.
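The checksum matching in that description works roughly like this sketch (hashing of normalized identifiers with SHA-256; the sample data and normalization details are illustrative, not Facebook's exact specification):

```python
# Sketch of hashed-list matching: the advertiser uploads hashes of CRM
# identifiers, and the platform compares them to hashes of its own users'
# identifiers, so matching happens without exchanging raw emails.
import hashlib

def h(value):
    # Normalize, then hash, so "Alice@Example.com " matches "alice@example.com".
    return hashlib.sha256(value.strip().lower().encode()).hexdigest()

advertiser_crm = ["alice@example.com", "Bob@Example.com"]
platform_users = {"alice@example.com": "user-1", "carol@example.com": "user-3"}

uploaded = {h(email) for email in advertiser_crm}
matched = [uid for email, uid in platform_users.items() if h(email) in uploaded]
print(matched)  # ['user-1']
```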
This is going to be an interesting natural experiment.
Will ad-supported media do better in jurisdictions
where Facebook Custom Audiences are not available?
If Facebook advertising represents an increase
in marketing budgets, then probably not so much.
If Facebook advertising squeezes out other items
from the marketing budget, then this could be a win.
(My best guess is that small companies are spending
more on marketing because Facebook is easy and
self-service, but Facebook is just one of many places
that larger companies can spend. The ease of use
of Facebook from the advertiser side makes Facebook
ads a contender for small businesses that would have
trouble dealing with a legit site.)
And it's hard to address the problem of creepy stuff
on the Internet without talking about housing costs.
If the California powers that be can drive up prices
to the point where workers need a top-10-percent
income for what would have been a basic middle-class
lifestyle elsewhere, then it's easier to pressure
them into more questionable practices.
What's the best defense against surveillance
marketing? In some cases, another surveillance
marketer. Just like hackers lock up a vulnerable
system after they break in to protect against other
hackers, surveillance marketers who know what
they're doing are helping to protect users from
other companies' data collection practices.
Facebook: Late last week Facebook announced it
would eliminate all third-party data brokers from its
platform. It framed this announcement as a response
to the slow motion train wreck that is the Cambridge
Analytica story. Just as it painted Cambridge as a
“bad actor” for compromising its users’ data,
Facebook has now vilified hundreds of companies who
have provided it fuel for its core business model,
a model that remains at the center of its current
travails. Newco Shift | Facebook: Tear Down This Wall
(And Facebook even runs a Tor hidden service.)
[A]n ad tech lobby group called ‘IAB Europe’ published a new research study that claimed to demonstrate that the behavioural ad tech companies it represents are an essential lifeline for Europe’s beleaguered publishers....the report claimed that behavioural advertising technology produces a whopping €10.6 billion in revenue for Europe’s publishers.
Surely, the ad tech lobby argued, Parliament would permit websites to use “cookie walls” that force users to consent to behavioural ad tech tracking and profiling their activity across the Web. The logic is that websites need to do this because it is the only way for publishers to stay in business.
We now know that a startling omission is at the heart of this report. Without any indication that it was doing so, the report combined Google and Facebook’s massive revenue from behavioural ad tech with the far smaller amount that Europe’s publishers receive from it.
The IAB omitted any indication that the €10.6 billion figure for “publishers” revenue included Google and Facebook’s massive share too.
That's not the only startling omission. The most
often ignored player in the ePrivacy debate is
adtech's old frenemy: the racket that's the number two
source of revenue for international organized crime
and the number three player in targeted behavioral
advertising, adfraud. And ePrivacy, like browser privacy improvements,
is like an inconveniently placed motion detector
that threatens to expose fraud gangs and fraud-heavy ad intermediaries.
The same tracking technologies that enable the
behavioral targeting that IAB defends are the
tracking technologies that make adfraud bots possible.
Bots work by visiting legit sites, getting profiled
as a high-value user, and then getting tracked while
they generate valuable ad impressions for fraud sites.
Adfraud works so well today because most browsers
support the same kind of site-to-site tracking
behavior that a fraudbot relies on.
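A toy model of that mechanism (just the bookkeeping, not any real browser or tracker code): with one shared cookie jar per tracker, the "high-value" profile follows the bot to the fraud site; with storage partitioned by top-level site, it doesn't.

```python
# Toy model of why cross-site tracking state is what makes a fraudbot's
# traffic look valuable. Site and tracker names are made up.

class Browser:
    def __init__(self, partitioned):
        self.partitioned = partitioned  # True = storage keyed per top-level site
        self.cookies = {}

    def visit(self, site, tracker):
        # The tracker's storage key depends on the privacy model.
        key = (tracker, site) if self.partitioned else tracker
        if key not in self.cookies:
            self.cookies[key] = f"id-{len(self.cookies)}"
        return self.cookies[key]

def profile_follows(browser):
    # Bot builds a "high-value" profile on a legit site...
    legit_id = browser.visit("news.example", "tracker.example")
    # ...then generates ad impressions on a fraud site.
    fraud_id = browser.visit("fraud.example", "tracker.example")
    return legit_id == fraud_id

print(profile_follows(Browser(partitioned=False)))  # True: profile leaks
print(profile_follows(Browser(partitioned=True)))   # False
```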
And regulations that make it easier for users to
protect themselves from being followed from one site
to another are another source of anti-fraud power.
If bots need to opt in to tracking in order for fraud
to work, and most users, when given a clear and fair
choice, don't, then that's one more data point that
makes it harder for adfraud to hide.
Publishers pay for adfraud. That's because adfraud is
no big secret, and it's priced into the market. Even
legit publishers are forced to accept a fraud-adjusted
price for human ad impressions. I'm sure that not
every adtech firm that opposes ePrivacy or browser
privacy improvements is deliberately tolerating
fraud, but opposing privacy regulations and privacy
technologies is the position that tends to conceal
and protect fraud. That's the other omission here.
updated 27 Aug 2018: copy edits for clarity, add introduction.
In your blog
you point out that the news system has to work with
user privacy principles. Most of the conversation
is about putting into place a set of systems based
on opt-in tracking but it is not clear how the
principles will impact the opt-in tracking and consent
management. I'd like to hear more about that.
Don: The incentive from the browser side is clear
for independent browser businesses that don't have a
surveillance marketing business attached. What is
it that a big incumbent browser will have trouble
doing but that users clearly want?
Extensive user research indicates that users
prefer a browser that will protect them from
having their activity in one context follow them
over to another context, and they also want a clear
and non-confusing user experience. So this
sets up an opportunity for browsers. They can compete
over who can best manage user data in order to meet
people's norms and preferences on how that data is used.
Browser management decisions being made day to day are
based on how to acquire users, and keep users once
they are already running a browser. So what are the
side effects of this new browser competitive area? Why
are publishers going to need to be concerned about it,
and where can they get some sustainable advantage from
it? And the answer is that when user data gets managed
in accordance with users' norms and preferences,
then sites that are trusted by users to use their
data have an advantage over untrusted sites. And the
biggest place this will show up immediately is in
ad fraud, because the way that fraud bots work is
they leak user data from high-value sites to fraud
sites. They do exactly what the mainstream browsers
do today in facilitating tracking the user from high
value sites to low value sites.
Can the platform that connects permissioned data
function now without anything more than GDPR or do you
see the need for more detailed privacy protections?
There is a need for comprehensive privacy policies
across sites because it is prohibitively expensive
for small news organizations to keep up with all
the details of all the privacy tools and requirements
across every possible tech platform and jurisdiction.
One major US publishing company was
unable to do GDPR compliance for their
sites so they ended up blocking a whole bunch of US news
sites for European visitors.
When I see a site that isn't able to comply with GDPR,
I see a site that is getting its clock cleaned by
data leakage. Every single person using that site
is getting their data leaked out to other places so
they can get reached without the original publisher getting any benefit.
If you can't even do GDPR as a big publishing company
how are you going to be able to do California,
Europe, and India as a small independent web site,
or do clean user-data collection across Firefox,
Safari, and other browsers out there?
This is good. We are talking about creating trusted news sites based on the way they work with user data.
The ways users indicate trust with a site are
potentially all over the place. They might say they
trust their local public radio station by pledging
and getting a coffee mug. They might indicate they
trust their local news site by filling in a traffic survey
saying what neighborhood they live and work in. A
user might indicate trust for a site by leaving a
comment or a letter to the editor. Many
different platforms all have a small
view into user trust and all have an opportunity to
capture some kind of consent for data use, but there's
no good way to integrate all those. And if you do
it through a conventional surveillance marketing
mechanism you may be doing it in a way that doesn't
even capture consent. User data without consent
is not going to be sustainable on a regulatory or technical basis.
Your typical news site has 50-70 third-party domains showing
up on it, and every one of them has a separate privacy
policy, all written by different lawyers with
the objective of staying out of trouble while giving
you the least privacy possible. So if you are a publisher
running some skeevy tracker on your site without the right consent,
future browsers are going to look at that and say there
is no way this user has given consent to this firm
from a dark corner of the Lumascape, I'm not going
to reveal any user data to that firm.
So what you end up with is news sites with reputable
content not having the right consent bits set
in order to be able to prove that they have
a valuable audience. We saw this with GDPR and
unconsented impressions coming into real-time bidding
platforms. Some of those impressions are coming in without
the right consent bits set which means they aren't
going to get bids from some advertisers. Even users
who trust the site are not producing ad impression
value for the publisher they trust, and that's a
big problem. That's the first thing that publishers are
going to be concerned about with browser privacy improvements.
Without all the non-permissioned data we are used to
seeing attached to the impressions, those are not going to
have much value. Publishers are going to be selling remnant
impressions on a quality site because they don't have the right consent bits set.
Let's imagine we have a way to collect opt-in data
from a variety of different news sites, and also the
merchants and apps that supply those news sites with
services. It provides uniform opt-in rules to gather
that data and then is able to serve those opt-in users
with different types of content. Sort of an opted-in
Taboola. If that kind of platform were created would
there still be a need for privacy policies as well,
or would the consent management system replace that
need for the privacy policies?
Consent strings in Apple Safari are managed like any
other tracking state would be. So the platform has to
be aware of the policies and limitations of all the
privacy tools that feed into a user data collection
opportunity. Privacy Badger is a niche tool. They
look for a specific third-party tracking policy. That
is not as important for mainstream adoption directly
but some of the list-based tools out there like
Disconnect, which Firefox feeds off, can be informed
by trackers detected by Privacy Badger.
A common policy has a real role because it lets you
address incompatibilities one at a time instead of
having a big n by m matrix of site privacy policies
and privacy tool policies. It is kind of like open
source licenses. If you go to build a project
and want to keep your licenses compatible, it is
way easier if you have a single software license
across that ecosystem, or at least a set of compatible licenses.
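The n-by-m point is just arithmetic, but it is worth making concrete (the numbers below are made up):

```python
# Toy illustration of the policy-compatibility argument: with S site
# privacy policies and T tool policies, pairwise review scales as S * T,
# while a single common policy needs only S + T reviews (each party
# checks itself against the one policy).
def pairwise_reviews(sites, tools):
    return sites * tools

def common_policy_reviews(sites, tools):
    return sites + tools

print(pairwise_reviews(50, 10))       # 500 compatibility checks
print(common_policy_reviews(50, 10))  # 60 compatibility checks
```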
That is super helpful.
This platform needs to come into existence in
an incremental way. Many local sites are signed up with
Google and use Google Tag Manager for
their ad serving. Google has a lot of the needed
functionality built out for their European customers,
so the process of moving from unpermitted user data
sharing to permission-based user data sharing can
be done incrementally if you work it the right way.
Sites can use the Google tools according to their
design, taking features that have been developed
for compliance in Europe and applying those
features to another need, like an off-label use
of GDPR compliance features. It's like discovering
you can cure some ulcers by taking a specific dose of
antibiotics. This is a big opportunity for Google as well.
There is a need for a comprehensive policy because
it is too complicated to do it across all the
platforms, and even if there is a private label way
to create some kind of opt in, how do you rely upon
consent management? Like an open source license:
language that allows you to cross all these different
jurisdictions, tools, and browsers.
Yes, and when this common policy is out there
and able to be part of a discussion with tool and
browser developers, that policy will inform the future
decisions made by those developers. People will say
I don't really want my tool to block permitted data
sharing with trusted sites, how do I make my tool
better reflect what the users are doing?
(update 20 Nov 2018: copy edit, add a link to Dr. Johnny Ryan's CNIL article)
Today's web advertising relies on 1990s browser
behavior—most browsers fail to protect
users from being tracked from site to site,
and advertisers are used to taking advantage
of that old defect. But because browsers do user research
and respond to what users want, that's changing.
Browsers are making it harder to track users from site
to site without their permission. Along with privacy
regulations, this change is creating an opportunity
for new, "post-creepy" web advertising that:
works with user privacy principles
has fewer of the negative externalities of targeted ads
gives more market power and revenue to sites that users choose to trust
The big opportunity is in enabling publishers to
reclaim control over their own audience data,
not in establishing a new choke point such as
a cryptocurrency or paid whitelisting program.
(If publishers wanted to give up control to a
tech firm, they can do that already.) Most of the
development that is needed here can be provided by
third parties that publishers are already using,
because third parties are coming into compliance with
privacy regulations. For example, Google Tag Manager
already has the required functionality in order to
comply with the European GDPR.
The missing piece is a
way for sites to collect and use enough user data to show
advertisers that the site is trusted by human users,
in order to make the ads on that site saleable.
In the new environment, user data
alone is insufficient—data must
be accompanied by the consent required
to use it. And that can't be just "click
to make this dialog go away and consent to adtech as usual."
Both regulators and browser developers are going
to require real consent. So the web advertising
system needs to evolve away from dependence on
large quantities of un-permissioned data towards the
ability to use less data accompanied by permission.
(Post-creepy web ads won't be able to swim in
abundant unpermissioned data with the nutria of
the Lumascape. Consent is scarcer than raw data,
and only data accompanied by consent is safe to use.
Publishers will have to collect and conserve every
drop of data, like _muad'dib_, the desert mouse
of Arrakis.) Possible sources include:
Differences in browser behavior between trusted and untrusted sites
Consent management is a tricky problem. IAB Europe
is doing some work toward addressing it, with the
open-source Transparency and Consent Framework.
Although existing implementations are designed
to nudge the user into not-transparent data
practices, and are not yet getting real consent,
this framework does provide a starting point on which
to build consent management that both implements
the user's preferences accurately and provides a
smooth user experience. (More info: Global Consent
Manager, a client-side component
that you can try in Firefox now, which can interact
with server-side data platforms.)
In principle, privacy regulation and browser privacy
improvements have the potential to lower the return
on investment on creepy tracking, and raise the
return on investment on building reputation and
getting consent. But publishers, who have the
reputation to get users to agree that they have the right to use data, don't have
the development budgets or time to build the tools
for data gathering.
User data and opportunities to get consent
are everywhere, in CMSs, other software, and
in third-party services. The missing piece is a
platform that will collect data, with permission,
from all the above sources and:
run either on the publisher's own infrastructure
or as a third-party service so that small publishers don't need to
touch the CMS or deploy and manage a new service
comply with current and future data protection regulations
work with and anticipate privacy improvements in browsers
provide reports and APIs in a usable format for advertisers and agencies
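As a rough sketch of the kind of record such a platform might keep, where data is only usable when paired with the consent collected alongside it (all field names are illustrative assumptions, not any real product's schema):

```python
# Illustrative sketch: a user data record carries the purposes the user
# consented to, and reports can only draw on records with matching consent.
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    user_id: str
    source: str            # e.g. "comment form", "traffic survey"
    data: dict
    purposes: set = field(default_factory=set)  # purposes the user agreed to

def usable_for(records, purpose):
    # Only records carrying consent for this purpose can be reported on.
    return [r for r in records if purpose in r.purposes]

records = [
    ConsentRecord("u1", "traffic survey", {"neighborhood": "Downtown"}, {"ads"}),
    ConsentRecord("u2", "comment form", {"topic": "transit"}, set()),
]
print(len(usable_for(records, "ads")))  # 1: u2 gave data but no consent
```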
Many of today's ad agencies, even sympathetic ones,
won't come to the new system by choice, because it
won't allow for tracking desirable audiences to
cheap sites. We can assume that advertisers and
agencies will ignore the new system until they see
that it’s a way to reach a significant audience
that they can’t reach in other ways today, and
the mainstream tracking-protected web audience in
the near future.
I'll be at the Voice of Blockchain
in Chicago on Friday and Saturday. Two panels:
"Journalism: Incentivizing the Truth" on Friday, and
"Crowdsourcing, Bounties, and Democratizing Access
to Jobs" on Saturday.
So what does blockchain have to do with incentivizing
the truth? One important reason that we have standards
of fairness and accuracy in news is that news
organizations sell advertising to mainstream brands:
brands that want to be able to sell to everyone,
not just one side of a political or social issue.
High-reputation news sites don't respond individually
to the demands of advertisers, but the principles
on which high-reputation news sites operate have
developed in parallel with the needs of brand safety.
On today's web, reputation-based advertising
is not so much of a thing. Adtech
firms place ads from legit brands on brand-unsafe
sites, usually without anyone at the
brand knowing about it. Faris Yakob points out:
By squeezing fees and margin, procurement put
incredible pressure on agency principals, who have
obligations to hit certain targets from the holding
companies. Rock meet hard place. Thus new sources
of revenue were found, in media rebates, or opacity,
or programmatic trading desks, or production fixing -
all conflicts of interest that can be leveraged to
try to appease both masters...for a time.
When agencies try to get ad impressions in front of
the desired audience at a bargain price, a lot of
ad money ends up with fraudulent or brand-unsafe
sites. Even legit sites end up running 50
to 70 tracking scripts because they lack the market power
to protect their audience from being tracked to cheaper sites.
Incentivizing journalism depends on helping users
protect their personal information from being tracked
from one site to another. As users get the tools
to control who they share their information with
(and they don’t want to leak it to everyone) then
the web advertising business has to transform into
a reputation contest. Whoever can build the most
trustworthy place for users to choose to share their information wins.
Blockchains are slow and expensive compared to
databases or conventional payment systems, but cheap
compared to trust networks. As browsers take a more
active role in protecting users from third-party
tracking, reputable news sites will need a new
technical infrastructure for Internet advertising
that accurately reflects the trust relationships
between brands, agencies, sites, and users.
What about "Crowdsourcing, Bounties, and
Democratizing Access to Jobs"? This is a fun area.
Learn market design
is the new learn to code.
Developers would prefer to release open source
software at a high quality level and get paid for it.
Many users would prefer to use software at a higher
quality level if they could pay for it. The current
software market, though, incentivizes companies
to release at a low quality level, in order to
get early adoption and build network effects.
One approach is to build a new kind of market,
one that allows users to hedge their software
quality risks while enabling developers to trade
on the likelihood of bug fixes. More info: Rao et al.
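A toy sketch of that kind of market instrument, my simplification of the general idea rather than the design from the cited work:

```python
# Toy "bug futures" contract: pays 1 unit to the FIXED side if the issue
# is fixed by maturity, otherwise 1 unit to the UNFIXED side. A user
# hedging quality risk buys UNFIXED (paid if the bug stays); a developer
# confident of a fix buys FIXED at the market price.
class BugFuture:
    def __init__(self, issue, price_fixed):
        self.issue = issue
        self.price_fixed = price_fixed  # market's estimated fix probability

    def payout(self, fixed):
        # Returns (to_fixed_holder, to_unfixed_holder) per contract.
        return (1.0, 0.0) if fixed else (0.0, 1.0)

contract = BugFuture("issue-42", price_fixed=0.3)
print(contract.payout(fixed=True))   # (1.0, 0.0)
print(contract.payout(fixed=False))  # (0.0, 1.0)
```

The price of the FIXED side then doubles as a public, tradeable estimate of whether a given bug will actually get fixed.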
Targeted advertising (where the
browsing habits of consumers are tracked and then
used to provide them with more specific adverts)
was another commonly cited source of anxiety,
with many respondents feeling powerless to stop the
intrusion. One described how “a lot of my particular
anxieties came into full swing when I learned more
about how online advertising works. When I noticed
Facebook ‘Like’ buttons on unrelated pages and
when ads follow me around. The feeling that I had
no privacy was claustrophobic and has led to so many
anxiety attacks I have lost count”.
The link from the Chipotle ad redirected consumers
to an Amazon gift card scam that presents the viewer
with a fraudulent message that is intended to prompt a
click to steal the user’s personal information.
Privacy regulation (starting with the European Union, California and India).
Some regulations will have impact outside their own jurisdictions when
companies choose not to write and enforce separate second-class
privacy policies for users not covered by those regulations.
New "browser wars" over which browser can best
implement widely-held user norms on sharing their
personal information. (Web browsers are good at
showing you a web page that looks the same as
it does on the other web browsers. Why switch
browsers? For many users, because one browser
does better at implementing your preferences on
personal data sharing.)
Right now the web is terrible as a tool for brand building.
But the web doesn't have to get better at signaling,
or less fraudulent,
than print or broadcast.
In a lot of places the web just has to be better than nothing.
Fixing web advertising is not one big coordination
problem. People who are interested in web
advertising, from the publisher and ad agency point
of view, have a lot of opportunities for innovative
and remunerative projects.
Browser privacy improvements, starting
with Apple Safari's Intelligent Tracking Prevention,
are half of a powerful anti-fraud system.
The better that the browser protects the user's
information from leaking from one site to another,
the less it looks like a fraudbot. How can
publishers and brands build the other half, to
shift ad budgets away from fraud?
Conscious choosers are an increasingly well-understood user segment,
thanks to ongoing user research. For some
brands and publishers, the best strategy
may be to continue to pursue "personalization" for
the approximately one-third of users who don't
object to having their information collected
for ad targeting. Other brands have more appeal to
mainstream, vaguely creeped out, users, or to
users who more actively defend their personal info.
How can "conscious chooser" research inform brands?
Regulation and browser
privacy improvements are making contextual advertising
more important. Where are the opportunities to reach human audiences
in the right context? Where does conventional programmatic advertising miss out on high-context,
signalful ad placements because of gaps in data?
As sharing of user data without permission becomes
less common, new platforms are emerging to enable
users to share information about themselves
by choice. For example, a user who comments
on a local news site about traffic may choose
to share their neighborhood and the mode of
transportation that they take to work. User data
sharing platforms are in the early stages, and
agencies have an opportunity to understand where
publishers and browsers are going. (Hint: it'll
be harder to get big-budget eyeballs on low-value
or fraudulent sites.) Which brands can benefit
from user-permissioned data sharing?
(Complementary to data sharing issues) Consent
management is still an unsolved
problem. While the Transparency and Consent
Framework provides a useful foundation to
build on, today's consent forms are too
annoying for users and also make it difficult and
time-consuming to do anything except select a single
all-or-nothing choice. This doesn't accurately reflect
the user's data sharing choices. The first
generation of consent management is
getting replaced with a better front
end that not only sends a more accurate
consent decision, but also takes less time and
attention and is less vulnerable to consent string fraud.
How will accurate and convenient consent
management give advantages to sites and brands
that users trust?
Workshops are in progress on all this stuff.
(Mail me at work if you
want to help organize one.) Clearly it's not all just
coming from the browser side—forward-thinking
people at ad agencies and publishers are coming up
with most of it.
Inner procrastinator: HEY LET'S FIND SOME K3WL ARTICLES TO READ ON THE INTERNET
Sense of duty: No, must update project status. (Ctrl-T to open new tab)
Web browser: HEY WEREN'T YOU LISTENING TO INNER PROCRASTINATOR JUST NOW? HERE IS SOME RECOMMENDED CONTENT
Me: Preferences → Home → Firefox Home Content. Uncheck everything except "Web Search" and "Bookmarks".
Anyway, happy Friday. Since you're
already reading blogs, you might as well
read something good, so here is some stuff
that the RSS reader dragged in. (My linklog
is no longer getting posted to Facebook,
so if you were clicking on links
from me there you will have to figure
something else out. The raw linklog is:
Ad selection, delivery, reporting: The collection of information, and combination with previously collected information, to select and deliver advertisements for you, and to measure the delivery and effectiveness of such advertisements. This includes using previously collected information about your interests to select ads, processing data about what advertisements were shown, how often they were shown, when and where they were shown, and whether you took any action related to the advertisement, including for example clicking an ad or making a purchase. This does not include personalisation, which is the collection and processing of information about your use of this service to subsequently personalise advertising and/or content for you in other contexts, such as websites or apps, over time.
And here's the list of vendors with a "3" in their legIntPurposeIds:
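A list like that can be produced from the IAB Global Vendor List JSON with a filter along these lines (assuming the TCF v1 vendorlist.json layout, where each vendor entry carries a legIntPurposeIds array; the sample vendors here are made up):

```python
# Minimal sketch: find vendors claiming "legitimate interest" for purpose 3
# (ad selection, delivery, reporting) in a vendor-list-shaped structure.
vendor_list = {
    "vendors": [
        {"id": 1, "name": "Example DSP", "legIntPurposeIds": [1, 3]},
        {"id": 2, "name": "Example SSP", "legIntPurposeIds": [2]},
        {"id": 3, "name": "Example DMP", "legIntPurposeIds": [3, 5]},
    ]
}

claims_purpose_3 = [
    v["name"]
    for v in vendor_list["vendors"]
    if 3 in v.get("legIntPurposeIds", [])
]
print(claims_purpose_3)  # ['Example DSP', 'Example DMP']
```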
Today’s web advertising is mostly a hacking
contest. Whoever can build the best system to take
personal information from the user wins, whether
or not the user knows about it. Publishers are
challenging adfraud and adtech hackers to a hacking
contest, and, no surprise, coming in third.
Mainstream browsers, starting with Apple Safari,
are doing better at implementing user
preferences on tracking. Most users don't want
to be "followed" from one site to
another. Users generally want their
activity on a trusted site to stay
with that trusted site. Only about a third of users
prefer ads to be matched to them, so browsers are putting more
emphasis on the majority's preferences.
Tracking protection is being updated to better reflect user
expectations and to keep up with new tracking techniques.
As users get the tools to control who they share
their information with (and they don’t want
to leak it to everyone) then the web advertising
business is transforming from a hacking contest into
a reputation contest. The rate-limiting reactant
for web advertising isn't (abundant and low-priced)
user data, it's the (harder to collect) consent
required to use that data legally. Whoever can build
the most trustworthy place for users to choose to
share their information wins. This is good news if
you're in the business of reporting trustworthy news
or fairly compensating people for making cultural
works, not so good news if you're in the business of
tricking people out of their data.
Federated paywall systems are not just yet another
attempt at micropayments, but also have value as a
tool for collecting trust. The user's willingness
to pay for something is a big trust signal. A small
payment to get past a paywall can produce a little
money, but a lot of valuable user data and the consent
bits that are required to use that data.
The catch is to figure out how to design federated
paywalls so that the trusted site, not the paywall
platform, captures the value of the data, and so
that the platform can't leak or sell the user's
data consent outside the context in which they
gave it. In the long run, a consent system that
tries to hack around user data norms to rebuild
conventional adtech is going to fail, but not
before a lot of programmers lose a lot of carpal
tunnels on privacy vs. anti-privacy coding,
and a lot of users face a lot of frustrating
consent dialogs. Browser improvements and court
cases will filter deceptively collected consent
out of the system.
Consent bits are a new item of value that needs
new rules. The web ad business is not going to be
able to sell and sync consent bits the same way
that it handles tracking cookies now. Consent bits
are not a "data is the new oil" commodity, and can
really only move along trust networks, with all the
complexity that comes with them. New tools such as
federated paywalls are an opportunity to implement
consent handling in a sustainable way.
(Update 18 Aug 2018: Fix an error to be consistent with the source quoted.)
(I work for Mozilla. None of this is secret. None of this is official Mozilla policy. Not speaking for Mozilla here.)
The good news is that interesting
competition among web browsers is back, not
just because of ongoing performance improvements
in Firefox, but also because of Apple Safari's
good work on protecting users from some kinds of
cross-site tracking by default. Now the challenge
for other browsers is to learn from the Safari work
and build on it, to even more accurately implement
the user's preferences on sharing their personal
information. According to research by Tini Sevak,
36% of users are more likely to engage with
adverts that are tailored to them, while 55% are
creeped out by personalized ads. The browser
has to get its data sharing settings right for the
individual user, while minimizing the manual settings
and decision fatigue that the user has to go through.
A short-term problem for sites, though, is that the
current price for highly tracked ad impressions
facilitated by cross-site tracking is still way
above the price of impressions
delivered to users who choose to protect themselves.
Tim Peterson, on Digiday, covers the natural
experiment of GDPR consenters and non-consenters:
If an exchange or SSP declines to sign the agreement, it is limited to only selling non-personalized ads through DBM. Those generic ads generate less revenue for publishers than personalized ads that are targeted to specific audiences based on data collected about them. Some publishers that are heavily reliant on DBM have seen their revenues decline by 70-80 percent since GDPR took effect because they were limited to non-personalized ads, said another ad tech exec.
Users are more likely to share information with a site they trust.
But in the short term, what can browsers do to help address the market dislocation from the user data crunch?
One possibility is to take advantage of an important
side effect of browser privacy improvements:
better anti-fraud data.
Today, unprotected browsers and fraudbots are hard
to tell apart. Both maintain a single "cookie jar"
across trusted and untrusted sites. For fraudbots,
cross-site trackability is not a bug as it is for a
human user's browser—it's a feature. A fraudbot
can only produce valuable ad impressions on a fraud
site if it is somehow trackable from a legit site.
As browser users start to upgrade to nightly releases
that include more protection, though, a trustworthy
site's real users will start to look more and more
different from fraudbots. Low-reputation and fraud
sites claiming to offer the same audience will have
a harder and harder time trying to sell impressions
to agencies that can see it's not the same people.
This does require better integration with anti-fraud
tools, so it's something sites and anti-fraud vendors
can do in parallel with the browser release process.
Can the anti-fraud advantages of browser privacy
improvements completely swamp out the market effects
of reducing cross-site trackability? Depends on how
much adfraud there is. We don't know.
If an exchange or SSP declines to sign the agreement, it is limited to only selling non-personalized ads through DBM. Those generic ads generate less revenue for publishers than personalized ads that are targeted to specific audiences based on data collected about them. Some publishers that are heavily reliant on DBM have seen their revenues decline by 70-80 percent since GDPR took effect because they were limited to non-personalized ads, said another ad tech exec. That revenue drop has put pressure on exchanges and SSPs to sign Google’s consent agreement lest their publishers move their inventory to other platforms that can run DBM’s personalized ads on their sites, the second exec said.
A lot of those "specific audiences" are,
of course, adfraud bots. Fraud hackers
are better at adtech than adtech firms
are. So ads
shown to bots, on shitty sites, are going for more
than ads seen by humans on legit sites.
Meanwhile, tracking-resistant, personalization-averse
readers are overrepresented in some customer
categories. Web developers are a good example. (40%
protected based on recent data from one popular site.)
Of course, today's web ad system is based on tracking
the best possible prospect to the cheapest possible
site, so it won't be easy to take advantage of this
nice piece of market inefficiency. First step is
figuring out how well protected the people you want
to reach are.
Recent question about futures markets on software bugs: what's the business model?
As far as I can tell, there are several available
models, just as there are multiple kinds of companies that can
participate in any securities or commodities market.
Oracle operator: Read bug tracker state, write
futures contract state, profit. This business would
take an agreed-upon share of any contract in exchange
for acting as a referee. The market won't work without
the oracle operator, which is needed in order to assign
the correct resolution to each contract, but it's possible that a single market
could trade contracts resolved by multiple oracles.
Actively managed fund: Invest in many bug futures
in order to incentivize a high-level outcome, such
as support for a particular use case, platform,
or performance target.
Bot fund: An actively managed fund that trades automatically,
using open source metrics and other metadata.
Analytics provider: Report to clients on the quality of
software projects, and the market-predicted likelihood
that the projects will meet the client's maintenance and
improvement requirements in the future.
Stake provider: A developer participant in a bug futures market
must invest to acquire a position on the fixed side of a contract.
The stake provider enables low-budget developers to profit from
larger contracts, by lending or by investing alongside them.
Arbitrageur: Helps to re-focus development
efforts by buying the fixed side of one contract
and the unfixed side of another. For example, an
arbitrageur might buy the fixed side of several
user-facing contracts and the unfixed side of the
contract on a deeper issue whose resolution will
result in a fix for them.
Arbitrageurs could also connect bug futures to other kinds of markets, such
as subscriptions, token systems, or bug bounties.
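As an illustration of the oracle operator role above, here is a minimal sketch. The contract fields and fee handling are assumptions for illustration, not any real market's design: the oracle reads the bug's final state at maturity and pays the winning side the pot minus its agreed-upon share.

```python
from dataclasses import dataclass

# Hypothetical data model for one bug futures contract. At maturity,
# the oracle reads bug tracker state and resolves: the fixed side wins
# if the bug is closed, the unfixed side wins if it is still open.
@dataclass
class Contract:
    bug_id: int
    pot: float          # total staked by both sides
    oracle_fee: float   # the oracle operator's agreed-upon share

def resolve(contract, bug_is_fixed):
    payout = contract.pot * (1 - contract.oracle_fee)
    winner = "fixed side" if bug_is_fixed else "unfixed side"
    return winner, round(payout, 2)

# Bug 1234 was closed before maturity: the fixed side collects.
print(resolve(Contract(bug_id=1234, pot=100.0, oracle_fee=0.02), True))
# ('fixed side', 98.0)
```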
Despite theoretically adverse conditions, we find these markets are relatively efficient, and improve upon the forecasts of experts at all three firms by as much as a 25% reduction in mean squared error.
(This paper covers a related market type, not
bug futures. However some of the material about
interactions of market data and corporate management
could also turn out to be relevant to bug futures markets.)
Pipeline monument in Cushing, Oklahoma: photo by Roy Luck for Wikimedia Commons. This file is licensed under the Creative Commons Attribution 2.0 Generic license.
Companies don't advertise on sites like YouTube, sites
teeming with copyright infringers and nationalist
extremists, because those companies are run by
copyright infringers or nationalist extremists.
They do it because marketing decision-makers are
incentivized to play a corrupt online advertising game
that rewards them for supporting infringement and extremism.
So the trick here is to help people move marketing
money out of bad things
(negative externalities) and
toward good things (positive
externalities). We know that
YouTube is a brand-unsafe shitshow because Google
won't advertise its own end-user-facing products and
services there without a whole extra layer of brand safety controls.
Big Internet companies are set up to insulate
decision-makers from the consequences of their own
online asshattery, anyway. The way to affect those big
Internet companies is through their advertisers.
So how about a tweak to Article 13? Instead of putting
the consequences of infringement on the "online
content sharing service provider," put it on the
brand advertised. This should help in several ways.
Give legit services some flexibility.
If your web site's business model is anything
other than "get cheap eyeballs with other
people's creative work" or "get cheap eyeballs by
recommending divisive bullshit" then you
don't have to change a thing.
Incentivize sites to pay for new creative work, by making works covered by an
author or artist contract a more attractive place for paid advertising
than "content" uploaded by random users.
Make it easier for marketers who want to do
the right thing, by pointing out the risks of supporting
infringement and extremism.
Move some of the risks of online advertising away from
the public and toward the people who can make a difference.
(This is a cleaned-up and lightly edited
version of my talk from Nudgestock.)
First I have to give everybody a disclaimer. This
is 100% off message. I work for Mozilla. I am NOT
speaking for Mozilla here.
If you follow Rory, you have probably heard a lot
about signaling in advertising, so I'm going to go
over this material pretty quickly. Why does Homo
economicus read magazine advertising but hang
up on cold calls? To put it another way, why is
every car commercial the same? You could shoot the
"car driving down the windy road" commercial with
any car. All that the car commercial tells you is:
if it was a waste of your time to test drive our car
then it would have been a waste of our money to make
this little movie about it.
There's a whole literature of economics and
math about signaling involving deceptive senders
and honest senders. With this paper, Gardete and Bart
show that when the sender wants to really get a
message across, counter-intuitively the best thing
for the sender to do is deprive themselves of some
information about the receiver. If you're in the
audience and you know what the sender knows about you,
then you can't tell: are they honestly expressing
their intentions in the market, or are they just
telling you what you want to hear? If you used to read
Computer Shopper magazine for the ads, you
didn't read it for specific information about all the
parts that you might put into your computer. You read
it to find out which manufacturers are adopting which
standards so you don't buy a motherboard that won't
support the video card that you might want to upgrade
to next year.
The feedback loop here is that when
brands have signaling power, then that
means market power for the publishers
that carry their advertising, which means
advertising rates tend to go up, which
means the publishers can afford to make
obviously expensive content. And when you
attach advertising to obviously
expensive content, that means more
signaling power. It's kind of a loop that
builds more and more value for the
publisher and the brand.
Some people compare this to the
signaling that a bank does when they
build a monstrous stone building to keep your
money in. Really, for what a bank does, a stone
building doesn't do any more for keeping money
in it than a metal building or a concrete
building, but it shows that they've got this big
stone building with their name on it, so if they turned out to be
deceptive it would be more costly for them to
do it. That's the pure signaling model. But the other area
that we can see when we compare this kind of classic
signal-carrying advertising to online advertising,
the kind of ads that are targeted to you based on
who you are, is what's up with the norms enforcers?
Rory has his blue checkmark on Twitter,
which means he doesn't see Twitter ads. I'm less
Internet Famous, so I still get the advertising on
Twitter. A lot of the ads that I get are deceptive.
This is one. A company that's getting sued for lead
paint related issues is trying to convince residents
of California that government inspectors are coming to
their houses to declare them a nuisance. This is bogus
and it's the kind of thing that if it appeared in the
newspaper that everyone got to see then journalists
and public interest lawyers, and everyone else who
enforces the norms on how we communicate, would call
it out. But in a targeted ad medium this kind of
deceptive advertising can target me directly.
So let me show a little simulation
here. What we're looking at is deceptive sellers
making a sale. When a deceptive seller makes a sale
that's a red line. When an honest seller makes a
sale, that's a green line. The little blue squares
are norms enforcers, and the only thing that makes a
norms enforcer different in this game from a regular
customer is when a deceptive seller contacts a norms
enforcer the deceptive seller pays a higher price
than they would have made in profit from a sale.
So with honest sellers and deceptive sellers evolving
and competing in this primordial soup of customers,
what ends up happening to the deceptive sellers that
try to do a broad reach and hit a bunch of different
customers is, well you saw them, they hit the norms
enforcers, the blue squares lit up. Advertisers who
are deceptive and try to reach a bunch of different
people end up getting squeezed out in this version of
the game. An honest advertiser like this little square
down here can reach over the whole board because they
don't pay the penalty for reaching the norms enforcer.
So what does this really mean for the real web? On
the World Wide Web, have we inadvertently built
a game that gives an unfair advantage to deceptive
sellers? If somebody can take advantage of all the
user profiling information that's available out there,
and say, "oh I believe that these people are rural,
low-income, unlikely to be finance journalists,
therefore I'm going to hit them with the predatory
finance ads," does that cause users to pay less
attention to the medium?
Online advertising effectiveness has declined since
the launch of the first banner advertisement in 1994.
That's certainly not news. This is a slide that
appeared in Mary Meeker's famous Internet Trends
presentation, and as you can see blue is percentage of
ad spending, grey is percentage of people's time. So
TV is 36% of the time 36% of the money. Desktop web
18%, 20%, about right.
What's going on with print? Print is 9% of the
money for 4% of the time. Now you might say this is
just inertia, that this year people are finally
just cutting back on spending money in print because
of people spending less time on print and it'll
eventually catch up. But I went back and plotted the
same slide from the same presentation going back over
earlier years, and I've got time plotted across the bottom,
money plotted on the y axis, and what do we see
about print? Print is on a whole different trend
line. Print is on a trend line of much more value
to the advertiser per unit of time spent than these
other ad media. My hypothesis is that targeting
breaks signaling and this means an opportunity.
Targeting means that when you see an ad
coming in targeted to you it's more like
a cold call. It doesn't carry credible
information about the seller's
intention in the market.
From the point of view of who has an incentive to
support signal-carrying ad media instead, the people
who have an interest in that signal-for-attention
bargain, in that positive feedback loop, are of course
the publishers, high-reputation brands that want to
be able to send that signal, writers, photographers,
and editors, people who get paid by that publisher,
and people who benefit from the positive externalities
of those signal-carrying ads that support news and
cultural works.
So if the signaling model
is such a big thing then why are there
so many targeted ads still out there?
Let's have a look at, just to pick an example,
the Facebook advertising policy. As you know,
the Facebook advertising platform will let you
micro target individuals extremely specifically. You
can pick out seven people in Florida, you can pick
out everyone who's looking for an apartment who
doesn't have a certain ethnic affinity, that kind
of thing. But the one thing you're not allowed
to do with Facebook targeting is put anything
in your ad that might indicate how you're targeting
it. The policy says ads "must not contain content that
asserts or implies personal attributes."
You can't say, I know you're male or
female, I know your sexual orientation, I
know what you do for a living.
The ad copy has
to be generic even if the targeting can
be extremely specific. You can't even say
"other." You can't say "meet other singles,"
because that implies that the advertiser
knows that the reader is single. Facebook will let you target people with
depression, but you can't reveal that you
know that about them. Another good
example is Target. They do targeting of
individuals who they believe to be
pregnant, but they'll pad out those ads
for baby stuff with ads for other types
of products so as not to creep everybody out.
Back to our shared interest in the signal-for-attention
bargain. Pretty much everybody has an interest in
that original positive feedback loop: higher
reputation for brands, and reputation-driven
publishers that'll build high-quality content
for us. Writers and photographers have an interest in
getting paid, and people who are shopping for goods
are the ones who want the signal the most. All that
stands on the opposite side is behavioral tricks to
conceal targeting. Now, I'm not going to cover this as a privacy issue. I
know that there are privacy issues here but that is
really not my department. Besides, Facebook just
announced a dating site so they're going to breed
privacy preferences out of their user base anyway.
Can the web as an advertising medium be redesigned
to make it work better for carrying signal? We know
from the existence of print that this type of signal
carrying ad medium can exist. Print is an existence
proof of signal carrying advertising. We also know
that building that kind of an ad medium can't be
that hard because print was built when people were
breathing fumes from molten lead all day.
The prize for building a signal-carrying ad medium
is all the cultural works that you get when somebody
like Kurt Vonnegut can quit his job as manager of
a car dealership and write for Collier's magazine
full-time. This book is still on sale today.
And of course local news. Democracy depends on
the vital flow of information of public interest. Some
people say that the problem with news and information
on the web is that it's all been made free, and
if people would just subscribe we could fix the
system. But honestly, if free was the problem,
then Walter Cronkite would have destroyed the media
business in 1962. It's a market design problem and
a signaling problem, not just a problem of who has
to pay for what.
And the web browsers got a bunch of things wrong in
the 1990s. There are certain patterns of information
flow that the browser facilitated, like third-party
tracking, where browsers enable some companies to
follow your activity from site to site, and data
leakage. Things that just don't work according
to the way that people expect. Most people don't
want their activity on one site to follow them over to
another site, and the original batch of web browsers
got that terribly wrong. The good news is web browsers
are getting it right, and web browsers are under
tremendous pressure now to do so. As a product the
web browser is pretty much complete and working and
generic. The whole point of a web browser is it shows
web sites the same as all the other web browsers do,
so there's less and less reason for a user to want
to switch web browsers. But everybody who is trying
to get you to install a web browser needs for there
to be a reason, so the opportunity for browsers is to
align with those interests of users that the browser
wasn't able to pick up on previously.
At Mozilla some user researchers recently did a
study on users with no ad blocker installed and
users within the first few weeks of installing an ad blocker.
Anybody want to guess on the increased engagement? How
much more time those ad blocker users spend with that
same browser than the non ad blocker users? Anybody
shout out a number. All right, 28%. From the point
of view of the browser those kinds of numbers, moving
user engagement in a way that helps that browser meet
its goals, that's something that that the browser
can't ignore. So that means we're going from the old web
game, where everyone tries to win by collecting as much data on
people as they can without their permission, to a new game
in which the browser, high reputation publishers,
and high reputation brands are all aligned in trying
to build enough trust to work on information that
users choose to share.
I know when I say information that users choose
to share you're going to think about all these
GDPR dialogs. I know, I've seen these too, and
there are just tons of companies on them. To be
honest, looking at some of these company names it
looks like most of them were made up by guys from
Florida who communicate primarily by finger guns.
Users should not have to micromanage their consent
for all this data collection activity any more than
email users should have to go in and read their SMTP
headers to filter spam. And really if you think
about what brands are, it's offloading information
about a product buying decision onto the reputation
coprocessor in the user's brain. It's kind of like
taking a computational task and instead of running it
on the CPU in your data center, where you have to
pay the power and cooling bills for it, you offload
it and run it on the GPU on the client. It'll
run faster, it'll run better, and the audience is
maintaining that reputation state.
The future is here, it's just not very
evenly distributed, as William Gibson said.
This picture is the cyberpunk of an earlier era.
Today all of that stuff he's carrying, his video
camera, his laptop, his scanner, all that stuff's on
a phone and everybody has it.
Today, the privacy sensitive users, the ones who are
already working based on sharing data with permission,
they're out there. But they're in niches today. If you
have a relationship with those people now, then now
is an opportunity to connect with them, figure out
how to build that signal carrying advertising game,
and create a reputation-based advertising model
for the web. Thank you very much.
Are there parallels between the rise of Worse Is
Better in software and the success of the
"uncreative counterrevolution" in advertising?
(for more on that second one: John Hegarty:
Creativity is receding from marketing and data is to
blame.)
The winning strategy in software is to sacrifice
consistency and correctness for simplicity. (probably
because of network effects, principal-agent problems,
and market failures.) And it seems like advertising
has similar trade-offs among
Measurability (How well can we measure this project's effect on sales?)
Signal (Does it credibly convey the advertiser's intent in the market?)
Message (Is it persuasive and on brand?)
Just as it's rational for software decision-makers to
choose simplicity, it can be rational for marketing
decsion-makers to choose measurability over signal
and message. (This is probably why there is a brand
crisis going on: short-term CMOs are better off
when they choose brand-unsafe tactics, sacrificing
long-term brand equity.)
As we're now figuring out how to use market-based
tools to fix market failures in software, where can
we use better market design to fix market failures
in advertising? Maybe this is where it actually
makes sense to use #blockchain: give people whose
decisions can affect #brandEquity some kind of
tradeable stake in it?
Actually, none of those three statements is true. And Facebook knows it.
The American Red Cross has given Facebook
this highly personal information about me,
by adding my contact info to an "American
Red Cross Blood Donors" Facebook Custom Audience.
If any of that stuff were true, I wouldn't have been
allowed to give blood.
When I heard back from the
American Red Cross about this personal data leak,
they told me that they don't share my health
information with Facebook.
That's not how it works. I'm listed in the Custom Audience as a blood donor. Anyway, too late. Facebook has the info now.
So, which of its promises about how it uses people's personal information is Facebook going to break next?
And is some creepy tech bro right now making a
killer pitch to Paul Graham about a business plan to
"disrupt" the health insurance market using blood
Hugo-award-winning author Charles Stross said
that a corporation is some kind of sociopathic hive organism,
but as far as I can tell a corporation is really
more like a monkey troop cosplaying a sociopathic
hive organism.
This is important to remember because, among
other reasons, it turns out that the money that a
corporation spends to support democracy and creative
work comes from the same advertising budget
as the money it spends on random white power
content and actual no-shit extremism.
The challenge for customers is to help people at
corporations who want to do the right thing with the
advertising budget, but need to be able to justify
it in terms that won't break character (since they
have agreed to pretend to be part of a sociopathic
hive organism that only cares about its stock price).
So this is a great opportunity to help people who
work for corporations and want to do the right thing.
Denying permission to share your info with Facebook
can move the advertising money that they spend to
reach you away from evil stuff and towards sites that
make something good. Here's a permission withdrawal
to cut and paste. Pull requests welcome.
I mentioned the signaling literature
that provides background for understanding the
targeted advertising problem. Besides being behind
paywalls, a lot of this material is written in math
that takes a while to figure out. For example,
it's worth working through this Gardete and Bart paper
to understand a situation in which the audience is
making the right move to ignore a targeted message,
but it can take a while.
Are people rational to ignore or block targeted
advertising in some media, because those
media are set up to give an incentive to deceptive sellers?
Here's a simulation of an ad market in which that
might be the case. Of course, this does not show
that in all advertising markets, better targeting
leads to an advantage for deceptive sellers. But it
is a demonstration that it is possible to design a
set of rules for an advertising market that gives an
advantage to deceptive sellers.
What are we looking at? Think of it as a culture
medium where we can grow and evolve a population of advertisers.
The x and y coordinates are some arbitrary
characteristic of offers made to customers.
Customers, invisible, are scattered randomly all
over the map. If a customer gets an offer for a
product that is close enough to their preferences,
they will buy.
Advertisers (yellow to orange squares) get to place
ads that reach customers within a certain radius.
The advertiser has a price that it will bid for an ad
impression, and a maximum distance at which it will
bid for an impression. These are assigned randomly
when we populate the initial set of advertisers.
High-bidding advertisers are more orange, and
lower-bidding advertisers are more pale yellow.
An advertiser is either deceptive, in which case it
makes a slightly higher profit per sale, or honest.
When an honest advertiser makes a sale, we draw
a green line from the advertiser to the customer.
When a deceptive advertiser makes a sale, we draw a
red line. The lines appear to fade out because we
draw a black line every time there is an ad impression
that does not result in a sale.
So why don't the honest advertisers die out?
One more factor: the norms enforcers. You can think
of these as product reviewers or regulators. If a
deceptive advertiser wins an ad impression to a norms
enforcer, then the deceptive advertiser pays a cost,
greater than the profit from a sale. Think of it as
having to register a new domain and get a new logo.
Honest advertisers can make normal sales to the norms
enforcers, which are shown as blue squares. An ad
impression that results in an "enforcement penalty"
is shown as a blue line.
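Those rules can be sketched in a few lines. This is a simplified model, not the simulation itself: it skips the bidding and the offer-preference matching, and the parameters are arbitrary. But it shows why a broad-reach deceptive advertiser loses to a broad-reach honest one once norms enforcers are in the mix.

```python
import random

random.seed(1)

# Customers scattered on a 100x100 map; a fraction are norms enforcers.
customers = [(random.uniform(0, 100), random.uniform(0, 100),
              random.random() < 0.2) for _ in range(200)]

SALE, DECEPTIVE_BONUS, PENALTY = 1.0, 0.2, 2.0

def run_advertiser(x, y, radius, deceptive):
    """Profit from one round of reaching every customer within radius."""
    profit = 0.0
    for cx, cy, enforcer in customers:
        if (cx - x) ** 2 + (cy - y) ** 2 > radius ** 2:
            continue
        if deceptive and enforcer:
            profit -= PENALTY          # caught by a norms enforcer
        else:
            # A deceptive sale earns slightly more than an honest one.
            profit += SALE + (DECEPTIVE_BONUS if deceptive else 0)
    return profit

# A broad-reach deceptive advertiser keeps hitting enforcers, while a
# broad-reach honest one collects sales from the whole board.
print(run_advertiser(50, 50, 60, deceptive=True) <
      run_advertiser(50, 50, 60, deceptive=False))  # True
```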
So, out of those relatively simple rules—two kinds
of advertisers and two kinds of customers—we
can see several main strategies arise. Your run of
the simulation is unique.
What I'm seeing on mine is some clusters of finely
targeted deceptive advertisers, in areas with
relatively few norms enforcers, and some low-bidding
honest advertisers with a relatively broad targeting
radius. Again, I don't think that this necessarily
corresponds to any real-world advertising market,
but it is interesting to figure out when and how an
advertising market can give an advantage to deceptive
sellers, and what kinds of protections on the customer
side can change the game.
Gardete and Bart: "We find that when the
sender’s motives are transparent to the
receiver, communication can only be influential
if the sender is not well informed about the
receiver’s preferences. The sender prefers an
interior level of information quality, while the
receiver prefers complete privacy unless disclosure
is necessary to induce communication." Tailored
Cheap Talk | Stanford Graduate School of Business.
The Gardete and Bart paper makes
sense if you ever read Computer Shopper for the ads.
You want to get an idea of each manufacturer's support
for each hardware standard, so that you can buy parts
today that will keep their value in the parts market
of the near future. You don't want an ad that targets
you based on what you already have.
Kihlstrom and Riordan: "A great deal of advertising appears to convey no direct credible
information about product qualities. Nevertheless such advertising
may indirectly signal quality if there exist market mechanisms that
produce a positive relationship between product quality and advertising
expenditures." Advertising as a Signal
What's next? The web advertising mess isn't a
snarled-up mess of collective action problems. It's
a complex set of problems that interact in a way
that creates some big opportunities for the right
projects. Work together to fix web ads? Let's talk.
Rule number one of dealing with the big Internet
companies is: never complain to them about all the
evil stuff they support. It's a waste of time and carpal
tunnels. All of the major Internet companies have
software, processes, and, most important, contractors
to attenuate complaints. After all, if Big
Company employees came in to work and saw real user
screenshots of the beheading videos, or the child
abuse channel, or the ethnic cleansing memes, then
that would harsh their mellow and severely interfere
with their ability to, as they say in California,
bro down and crush code.
Fortunately, we have better options than engaging
with a process that's designed to mute a complaint.
Follow the money.
Your average Internet ad does not come from some
ominous all-seeing data-driven Panopticon. It's
probably placed by some marketing person looking at
an ad dashboard screen that's just as confusing
to them as the ad placement is confusing to you.
Contact a brand's marketing decision makers directly.
Briefly make a specific request.
Put your request in terms that make not granting it riskier and more time-consuming.
This should be pretty well known
by now. What's new is a change in
European privacy regulations. The famous European
GDPR applies not just to Europeans, but to natural
persons. So I'm going to test the idea that if I
ask for something specific and easy to do, it will be
easier for people to just do it, instead of having to
figure out that (1) they have a different policy for
people who they won't honor GDPR requests from and
(2) they can safely assign me to the non-GDPR group
and ignore me.
My simple request is not to include me in a
Facebook Custom Audience. I can find the
brands that are doing this by downloading ad data
from Facebook, and here's a letter-making web page
that I can use. Try it if you like. I'll follow up
with how it's going.
If Ron Estes, running for US Congress, were
a candidate with the same name as a well-known
Democratic Party politician, clearly the right-wing
pranksters of the USA would give him a bunch of
inbound links just for lulz, and to force the
better-known politician to spend money on SEO of
his own name. But he's not, so people will probably
just tweet about the election and stuff.
I know I haven't posted for a while, but I can't skip
World Blood Donor Day. You don't see a lot of personal
info from me here on this blog. But just for once,
I'm going to share some: I'm a blood donor.
This doesn't seem like a lot of information. People
sign up for blood drives all the time.
But the serious privacy problem here is that
when I give blood, they also test me for a
lot of diseases, many of which could have a
big impact on my life and how much of certain
kinds of healthcare products and services I'm
likely to need. The fact that I'm a blood donor
might also help people infer something about my sex
life, but the health data is TMI already.
In today's marketing scene, the fact that my blood
donor information leaked to Facebook isn't too
surprising. The Red Cross clearly has some marketing
people, and targeting the existing contact list on
Facebook is just one of the things that marketing
people do without thinking about it too much. (Not
thinking about privacy concerns is a problem for
Marketing as a career field long-term. If everyone
thinks of Marketing as the Department of Creepy
Stuff, it's going to be harder to recruit good people.)
The problem is that my control
over my personal data isn't just a
problem for me. As Prof. Arvind Narayanan
puts it (video), poor privacy harms society as
a whole. Can I trust Facebook to use
my blood info just to target me for the Red
Cross, and not to sort people by health for
other purposes? Of course not. Facebook has
crossed every creepy line that they have promised
not to cross. To be fair, that's not just a Facebook thing.
Tech bros do risky and mean things all the
time without really thinking them through,
and even when they do set appropriate defaults
they half-ass the implementation and ship it anyway.
Will blood donor status get you better deals, or
apartments, or jobs, in the future? I don't know.
I do know that the Red Cross made a big point
about confidentiality when they got me signed up.
I'm waiting for a reply from the Red Cross privacy
officer about this, and will post an update.
Doc Searls is optimistic that
surveillance marketing is going away,
but what's going to replace it? One idea that
keeps coming up is the suggestion that prospective
buyers should be able to sell purchase intent data to
vendors directly. This seems to be appealing because
it means that the Marketing department will still get
to have Big Data and stuff, but I'm still trying to
figure out how voluntary transactions in intent data
could even be a thing.
Here's an example. It's the week before Thanksgiving,
and I'm shopping for a kitchen stove. Here are
two possible pieces of intent information that I could reveal:
"I'm cutting through the store on the way to buy
something else. If a stove is on sale, I might
buy it, but only if it's a bargain, because who
needs the hassle of handling a stove delivery the
week before Thanksgiving?"
"My old stove is shot, and I need one right
away because I have already invited people
over. Shut up and take my money."
On a future intent trading platform, what's my
incentive to reveal which intent is the true one?
If I'm a bargain hunter, I'm willing to sell my intent
information, because it would tend to get me a lower
price. But in that case, why would any store want to
buy the information?
If I need the product now, I would only sell the
information for a price higher than the expected
difference between the price I would pay and the price
a bargain hunter would pay. But if the information
isn't worth more than the price difference, why
would the store want to buy it?
So how can a market for purchase intent data happen?
Or is the idea of selling access to purchase intent
only feasible if the intent data is taken from the
"data subject" without permission?
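The no-trade logic above can be sketched with made-up numbers (all prices and labels here are illustrative, not from any real market):

```python
# Toy model of a purchase-intent market, with made-up numbers.
# Knowing which kind of buyer it faces lets the store
# price-discriminate between the list price and the sale price.

list_price = 900   # what an urgent "shut up and take my money" buyer pays
sale_price = 600   # what a patient bargain hunter pays

price_gap = list_price - sale_price   # 300: the value of the information

# Bargain hunter: happy to sell the intent data cheap, since
# revealing it tends to get them the lower price. But the store
# gains nothing by paying for information that only costs it margin.
store_value_of_bargain_intent = 0

# Urgent buyer: revealing intent costs them the discount, so their
# minimum asking price is at least the price gap...
urgent_min_ask = price_gap
# ...while the information is worth at most that same gap to the store.
store_max_bid = price_gap

# In neither case does a mutually profitable price exist.
no_trade = (store_value_of_bargain_intent == 0) and (urgent_min_ask >= store_max_bid)
print(no_trade)  # True
```

Either the information is cheap to get but worthless to the buyer, or valuable to the buyer but priced above what the buyer would pay.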
Anyway, I can see how search advertising and
signal-based advertising can assume a more important
role as surveillance marketing becomes less important,
but I'm not sure about markets for purchase intent.
Maybe user data sharing will be not so much a
stand-alone thing but a role for trustworthy news and
cultural sites, as people choose to share data as
part of commenting and survey completion, and that
data, in aggregated form, becomes part of a site's
offering to advertisers.
It would make me really happy to be able to
yellow-list Google web ads in Privacy Badger.
(Yellow-listed domains are not blocked, but
have their cookies restricted in order to cut
back on cross-site tracking.) That's because
a lot of news and cultural sites use DoubleClick for
Publishers and other Google services to deliver legit,
context-based advertising. Unfortunately, as
far as I can tell, Google mixes in-context
ads with crappy, spam-like, targeted
stuff. What I want is something like Doc Searls's
suggestion: Just give me ads not based on tracking me.
Google doesn't appear to have their European
mode activated yet, so I added a do-nothing "European
mode" to the Aloodo project, for testing. I'm not able to
yellow-list Google yet, but when GDPR takes effect
later this month I'll test it some more.
In the meantime, I'll keep looking for other examples
of hidden European mode, and see if I can figure out
how to activate them.
Lots of GDPR advice out there. As far as I can tell it pretty much falls into three categories.
Play it straight and handle user consent correctly.
Good part: you end up with less personal data, but what you do have is better quality and you clearly know what data you
can use for what purposes. Bad part: UX gets annoying because users have to fill out a bunch of web forms.
Cut back on surveillance marketing.
Good part: better for brand equity in the long run.
All advertising is brand advertising. Some of it is just brand
advertising in the wrong direction.
Bad part: what long run? CMO is a short-term job, and surveillance marketing projects get budgets for a reason. Strip-mining
brand equity is a short-term win.
Add microformats to label consent forms as consent forms, and appropriate links
to the data usage policy to which the user is being asked to agree.
Release a browser extension that will do the right thing with the consent forms, and submit automatically if the
user is fine with the data usage request and policy, and appears to trust the site. Lots of options here, since the
extension can keep track of known data usage policies and which sites the user appears to trust, based on their
browsing history.
Publish user research results from the browser extension. At this point the browsers can compete to do their own versions
of step 3, in order to give their users a more trustworthy and less annoying experience.
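The extension's decision logic could be sketched like this (the policy URL, site names, and set-based bookkeeping are all invented for illustration):

```python
# Sketch of the browser extension's decision logic: auto-submit a
# labeled consent form only when the data usage policy is already
# known to be acceptable AND the user appears to trust the site.
# Policy URLs and site names here are hypothetical.

known_ok_policies = {"https://policy.example/v3"}  # vetted data usage policies
trusted_sites = {"news.example.com"}               # e.g. inferred from repeat visits

def handle_consent_form(site, policy_url):
    if policy_url in known_ok_policies and site in trusted_sites:
        return "auto-submit"
    return "show-form"   # fall back to asking the user

print(handle_consent_form("news.example.com", "https://policy.example/v3"))     # auto-submit
print(handle_consent_form("sketchy.example.net", "https://policy.example/v3"))  # show-form
```

The microformat labels from the first step are what would let the extension find the form and its policy link reliably.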
Browsers need to differentiate in order to attract
new users and keep existing users. Right now a good
way to do that is in creating a safer-feeling, more
trustworthy environment. The big opportunity is in
seeing the overlap between that goal for the browser
and the needs of brands to build reputation and the
needs of high-reputation publishers to shift web
advertising from a hacking game that adtech/adfraud
wins now, to a reputation game where trusted sites
can win.
Why does the Peak Advertising
effect occur most in the most
accurately targeted ad media? Why do people tend
to filter out targeted ads, using habit power,
technology, and regulation, while paying more
attention to less finely targeted ad media?
One explanation is that
buying ad space is an example of costly signaling.
On this view, advertising is basically an exchange
of signal for attention, and ads that don't pay
their way with some kind of proof of spend are
not worth paying attention to because they don't convey
useful information about the seller's beliefs on
how valuable the audience would find the product.
Another possible explanation is that targetable ad
media are more suitable for deception, and that where
advertisers bid for space in a medium, deceptive
advertisers will tend to outbid the honest ones.
This seems counterintuitive, since we might
suppose that the customer lifetime value of an
honest seller's newly acquired customer could in
many cases be greater than the profit from a quick
score by a deceptive seller. But targeting doesn't
just match ad impressions with prospective buyers.
When used by a deceptive seller, it can also conceal
an ad impression from potentially costly attention.
For honest direct marketers, the expected profit from
reaching a buyer is positive, and the expected profit
from reaching a non-buyer is zero. But the audience
does not just contain buyers and non-buyers. People
can also be divided into enforcers and non-enforcers.
Enforcers can be anything from professional law
enforcement people, to someone who takes apart a
bogus product and makes a video about it, to just
the writer of a bad online review. What enforcers
have in common is that for a dishonest seller, the
expected profit from reaching an enforcer is negative.
Some kinds of enforcer can impose costs even
without buying. For example, a reader might send
the publisher a screenshot containing a scam ad and
get the advertiser added to an advertiser exclusion list.
Other kinds of enforcer might only take action
if they buy the product and find it to be a scam.
A deceptive advertiser might incur costs when their
ad is shown to either kind of enforcer.
For the honest advertiser, the expected profit from
a single impression is:
probability of reaching a buyer × expected profit per sale
For the dishonest advertiser, the expected profit is:
probability of reaching a buyer × expected profit per sale − probability of reaching an enforcer × expected loss per enforcer
The expected loss per enforcer is typically high
compared to the profit per sale. For example, a small
number of contacts with review writers might require
a seller to re-launch under a new name. In an ad
impression market with both honest and deceptive
sellers, where sellers can choose which impressions
to bid on, an ad impression that a deceptive seller
believes is unlikely to reach an enforcer has extra
value to that deceptive advertiser but not to an
honest advertiser. Deceptive sellers will tend to
outbid honest ones for certain impressions.
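With illustrative numbers plugged in, the two expected-profit expressions above look like this (the probabilities and dollar amounts are invented for the sketch):

```python
# Expected profit per ad impression, honest vs. deceptive seller.
# All probabilities and dollar amounts are illustrative only.

def honest_ev(p_buyer, profit_per_sale):
    # Honest seller: non-buyers and enforcers both cost nothing.
    return p_buyer * profit_per_sale

def deceptive_ev(p_buyer, profit_per_sale, p_enforcer, loss_per_enforcer):
    # Deceptive seller: reaching an enforcer carries a large expected loss.
    return p_buyer * profit_per_sale - p_enforcer * loss_per_enforcer

# An untargeted impression with some chance of reaching an enforcer:
print(honest_ev(0.02, 50))                  # 1.0
print(deceptive_ev(0.02, 50, 0.001, 2000))  # -1.0: the scam doesn't pencil out

# An impression the deceptive seller believes is enforcer-free:
# now it's worth as much to the scammer as to the honest seller,
# and the scammer has nowhere safer to spend, so the scammer can
# afford to outbid the honest seller for exactly these impressions.
print(deceptive_ev(0.02, 50, 0.0, 2000))    # 1.0
```

The asymmetry is entirely in the enforcer term: targeting that drives the enforcer probability toward zero adds value only for the deceptive bidder.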
A member of the audience might be able to see
targeting criteria, but not the advertiser's internal
weighting of targeting criteria. (For example,
a targeted ad platform might reveal to you that
you are being targeted for an ad because your
computer is running the latest release of the OS.
What they won't tell you is that the seller is
bidding on impressions to your OS version because
they're selling a tainted nutritional supplement,
and the lead testing department at the Ministry of
Health is still on the old OS version.)
So, some ad impressions will tend to be purchased
by deceptive sellers, but a low-information member
of the audience can't tell which impressions those are.
Is this an ad from an honest seller that might be
reaching both me and enforcers, or is this an ad
from a dishonest seller targeted to reach me but not
enforcers? When you read a magazine that reaches a
community of practice of which you're a member, you
can be confident that product reviewers and editors
are seeing the same ads you are. A web ad could be
targeted to avoid experienced and better-connected
members of the community of practice.
One possible explanation for the Peak Advertising
effect is the interaction between deceptive sellers
discovering how to use a new ad medium's targeting
capabilities to avoid enforcers, and the audience
discovering the fraction of deceptive sellers.
See "Ban Targeted Advertising," by David Dayen in The New Republic.
(I'm not so much interested in whether or not targeted
advertising should be banned as I am in the reasoning
behind why people choose to protect themselves from
it. The story of matching the exact right buyer to
the exact right product is much less compelling for
most purchase decisions than the buyer's story of
finding an adequate product and avoiding deceptive
sellers.)
Post-creepy web ad sightings: What's next for web advertising after browser privacy
improvements and regulatory changes make conventional
adtech harder and harder?
The answer is probably something similar to
what's already starting to pop up on niche sites.
Here's a list of ad platforms that work more like
print, less like spam: list of post-creepy web ad platforms.
Comments and suggestions welcome (mail me, or do a
GitHub pull request from the link at the bottom.)
Good question on Twitter, but one that might take
more than, what is it now, 280 characters, to answer.
Sir, why do you pay so much attention on internet advertising? I have hardly read your tweet that isn't related to internet advertising. I used Privacy Badger for some time last year. It's useful but a little heavy😅
Why do I pay attention to Internet advertising?
Why not just block it and forget about it? By now,
web ad revenue per user is so small that it only makes
sense if you're running a platform with billions of
users, so sites are busy figuring out other ways to
get paid anyway.
To the generation that never had a print
magazine subscription, advertising is
just a subset of "creepy shit on the
Internet." Who wants to do that for a living?
According to Charlotte Rogers at Marketing Week,
the lack of information out there explaining the
diverse opportunities of a career in marketing puts
the industry at a distinct disadvantage in the minds
of young people. Marketing also has to contend with
a perception problem among the younger generation
that it is intrinsically linked with advertising,
which Generation Z notoriously either distrust or ignore.
The answer is that I'm interested in Internet
advertising for two reasons.
First, because I'm a Kurt Vonnegut fan and have
worked for a magazine.
Some kinds of advertising can have positive
externalities. Vonnegut was able to quit his
job at a car dealership, and write full time,
because advertising paid for original fiction
in Collier's magazine. How did
advertising lose its ability to pay for news and
cultural works? Can advertising reclaim that role?
Maybe make that three reasons. As long as Internet
advertising fails to pull its weight in either
supporting news and cultural works or helping
to send a credible economic signal for brands,
the scams, malware and mental manipulation will
only continue. More: World's last web advertising optimist tells all!
We know that advertising on the web has reached a
low point of fraud, security risks, and lack of brand
safety. And it's not making much money for publishers
anyway. So a lot of people are talking about how to
fix it, by building a new user data sharing system,
in which individuals are in control of which data
they choose to reveal to which companies.
Unlike today's surveillance marketing, people wouldn't
be targeted for advertising based on data that someone
figures out about them and that they might not choose
to share.
A big win here will be that the new system would
tend to lower the ROI on creepy marketing investments
that have harmful side effects such as identity theft
and facilitation of state-sponsored misinformation,
and increase the ROI for funding ad-supported sites
that people trust and choose to share personal
data with.
A user-permissioned data sharing system is an
excellent goal with the potential to help clean up
a lot of the Internet's problems. But I have to be
realistic about it. Adam Smith once wrote,
The pride of man makes him love
to domineer, and nothing mortifies him so much
as to be obliged to condescend to persuade his inferiors.
So the big question is still:
Why would buyers of user data choose to deal with
users (or publishers who hold data with the user's
permission) when they can just take the data from
users, using existing surveillance marketing firms?
Some possible answers.
GDPR? Unfortunately, regulatory capture
is still a thing even in Europe. Sometimes I wish
that American privacy nerds would quit pretending
that Europe is ruled by Galadriel or something.
brand safety problems? Maybe a little around
the edges when a particularly bad video gets
super viral. But platforms and adtech can easily
hide brand-unsafe "dark" material from
marketers, who can even spend time on Youtube
and Facebook without ever developing a clue about
how brand-unsafe they are for regular people.
Even as news-gatherers get better at finding the
worst stuff, platforms will always make hiding
brand-unsafe content a high priority.
fraud concerns? Now we're getting
somewhere. Fraud hackers are good at making
realistic user data. Even "people-based"
platforms mysteriously have more users in desirable
geography/demography combinations than are actually
there according to the census data. So, where
can user-permissioned data be a fraud solution?
signaling? The brand equity math must be out
there somewhere, but it's nowhere near as widely
known as the direct response math that backs
up the creepy stuff. Maybe some researcher at
one of the big brand advertisers developed the
math internally in the 1980s but it got shredded
when the person retired. Big possible future win
for the right behavioral economist at the right
agency, but not in the short term.
improvements in client-side privacy? Another
good one. Email spam filtering went from obscure
nerdery to mainstream checklist feature
quickly—because email services competed on
it. Right now the web browser is a generic product,
and browser makers need to differentiate. One
promising angle is for the browser to help
build a feeling of safety in the user by
reducing user-perceived creepiness, and the
browser's need to compete on this is aligned
with the interests of trustworthy sites and with
user-permissioned data sharing.
With the rise of fake news and revelations about how the Russians used social platforms to influence both the US election and EU referendum, the need for change is pressing, both for the platforms and for the advertisers that support them.
For the call to action to work, Unilever really needs other brands to rally round but these have so far been few and far between.
Other brands? Why?
If brands are worth anything, they can at least help
people tell one product apart from another.
Saying that other brands need to participate in
saving Unilever's brands from the three-ring shitshow
of brand-unsafe advertising is like saying that
Volkswagen really needs other brands to get into
simple layouts and natural-sounding copy just because
Volkswagen's agency did.
Not everybody has to make the same stuff and sell it
the same way. Brands being different from each other
is a good thing. (Right?)
Sometimes a problem on the Internet isn't a "let's
all work together" kind of problem. Sometimes it's an
opportunity for one brand to get out ahead of another.
What if every brand in a category kept on playing in
the trash fire except one?
(I work for Mozilla. None of this is secret. None of this is Mozilla policy. Not speaking for Mozilla here.)
A big objection to tracking protection is the idea
that the tracker will always get through. Some people
suggest that as browsers give users more ability to
control how their personal information gets leaked
across sites, things won't get better for users,
because third-party tracking will just keep up.
On this view, today's easy-to-block third-party
cookies will be replaced by techniques such as passive
fingerprinting where it's hard to tell if the browser
is succeeding at protecting the user or not, and
users will be stuck in the same place they are now,
or worse.
I doubt this is the case because we're playing a
more complex game than just trackers vs. users.
The game has at least five sides, and some of the
fastest-moving players with the best understanding of
the game are the adfraud hackers. Right now adfraud
is losing in some areas where they had been winning,
and the resulting shift in adfraud is likely to shift
the risks and rewards of tracking techniques.
Data center adfraud
Fraudbots, running in data centers, visit legit sites
(with third-party ads and trackers) to pick up a
realistic set of third-party cookies to make them look
like high-value users. Then the bots visit dedicated
fraudulent "cash out" sites (whose operators have the same
third-party ads and trackers) to generate valuable ad
impressions for those sites. If you wonder why so
many sites made a big deal out of "pivot to video"
but can't remember watching a video ad, this is
why. Fraudbots are patient enough to get profiled as,
say, a car buyer, and watch those big-money ads. And
the money is good enough to motivate fraud hackers to
make good bots, usually based on real browser code.
When a fraudbot network gets caught and blocked from
high-value ads, it gets recycled for lower and lower
value forms of advertising. By the time you see
traffic for sale on fraud boards, those bots are
probably only getting past just enough third-party
anti-fraud services to be worth running.
This version of adfraud has minimal impact on
real users. Real users don't go to fraud sites,
and fraudbots do their thing in data centers
(doesn't everyone do their Christmas
shopping while chilling out in the cold aisle at an
Amazon AWS data center? Seems legit to me)
and don't touch users' systems. The companies that
pay for it are legit publishers, who not only have
to serve pages to fraudbots—remember, a bot
needs to visit enough legit sites to look like a real
user—but also end up competing with adfraud for
ad revenue. Adfraud has only really been a problem
for legit publishers. The adtech business is fine
with it, since they make more money from fraud than
the fraud hackers do, and the advertisers are fine
with it because fraud is priced in, so they pay the
fraud-adjusted price even for real impressions.
What's new for adfraud
So what's changing? More fraudbots in data centers
are getting caught, just because the adtech firms
have mostly been shamed into filtering out the
embarrassingly obvious traffic from IP addresses
that everyone can tell probably don't have a
human user on them. So where is fraud going now?
More fraud is likely to move to a place where a
bot can look more realistic but probably not stay
up as long—your computer or mobile device.
Expect adfraud concealed within web pages, as a
payload for malware, and of course in lots and lots
of cheesy native mobile apps. (The
Google Play Store has an ongoing problem with adfraud,
which is content marketing gold for Check Point
if you like "shitty app did WHAT?" stories.)
Adfraud makes way more money than cryptocurrency
mining, using less CPU and battery.
So the bad news is that you're going to have
to reformat your uncle's computer a lot this
year, because more client-side fraud is coming.
Data center IPs don't get by the ad networks as well
as they once did, so adfraud is getting personal.
The good news is, hey, you know all that big, scary
passive fingerprinting that's supposed to become the
harder-to-beat replacement for the third-party cookie?
Client-side fraud has to beat it in order to get paid,
so they'll beat it. As a bonus, client-side bots are
way better at attribution fraud (where a fraudulent
ad gets credit for a real sale) than data center bots.
Advertisers have two possible responses to adfraud:
either try to out-hack it, or join the "flight to
quality" and cut back on trying to follow big-money
users to low-reputation sites in the first place.
Hard-to-detect client-side bots, by making creepy
fingerprinting techniques less trustworthy, tend to
increase the uncertainty of the hacking option and
make flight to quality relatively more attractive.
What if I told you that there was an Internet ad medium that:
can reach the same user on mobile and desktop
uses open-standard persistent identifiers for users
can connect users to their purchase history
reaches the users that the advertiser chooses, at the time the advertiser chooses
and doesn't depend on the Google/Facebook duopoly?
Don't go looking for it on the Lumascape.
I'm describing email spam.
Every feature that adtech is bragging on, or working
toward? Email spam had it in the 1990s.
So why didn't brand advertisers jump all over spam? Why did they mostly leave it to low-reputation brands and scammers?
To be honest, it probably wasn't a deliberate
decision in most cases, just corporate sloth. But
staying away from spam was the right answer. In the
email inbox, spam from a high-reputation brand doesn't
look any different from spam that any fly-by-night
operation can send. All spammers can do the same stuff:
They can sell to people...for a fraction of what marketing used to cost. And they can collect data on these consumers, track what they buy, what they love and hate about the experience, and market to them directly much more effectively.
It's the direct consumer relationships, and the use of consumer data, that is completely game-changing for the marketing world. And most big marketers, such as Procter & Gamble and Unilever, are not ready for this new reality, the IAB says.
But of course they're ready. The difference is that
those established brand advertisers aren't any more
ready than some guy who watched a YouTube video
series on "growth hacking" and is ready to start buying
targeted ads and drop-shipping.
The "new reality," the targeted advertising business
that the IAB wants brands to join them in, is a place
where you win based not on how much the audience
trusts you, but on how well you can out-hack the
competition. And like any information space organized
by hacking skill, it's a hellscape of deceptive crap.
Read The Strange Brands in Your Instagram Feed by Alexis C. Madrigal.
Some Instagram retailers are legit brands with employees and products. Others are simply middlemen for Chinese goods, built in bedrooms, and launched with no capital or inventory. All of them have been pulled into existence by the power of Instagram and Facebook ads combined with a suite of e-commerce tools based around Shopify.
Of course, not every brand that buys a social media
ad or other targeted ad is crap.
Many billions of pounds of advertising expenditure have been shifted from conventional media, most notably newspapers, and moved into digital media in a quest for targeted efficiency. If advertising simply works by the conveyance of messages, this would be a sensible thing to do. However, it is beginning to become apparent that not all, perhaps not even most, advertising works this way. It seems that a large part of advertising creates trust and conviction in its audience precisely because it is perceived to be costly.
If anyone knows that any seller can watch a few
YouTube videos and do a certain activity, does that
activity really help the audience distinguish a
high-reputation seller from a low-reputation one?
And how does it affect a legit brand when its ads
show up on the same medium with all the crappy ones?
(Twitter has a solution that keeps its ads saleable: just don't show any ads to important people. I'm surprised they can get away with this, but given the mix of rip-off and real brand ads I keep seeing there, it seems to be working.)
Extremists and state-sponsored misinformation
campaigns aren't "abusing" targeted advertising.
They're just taking advantage of a system optimized
for deception and using it normally.
Now, I don't want to blame targeted advertising
for all of the problems of brand equity. When you
put high-fructose corn syrup in your product, brand
equity suffers. When you outsource or de-skill the
customer support function, brand equity suffers.
All the half-ass "looks good this quarter" stuff that
established brands are doing is bad for brand equity.
It just turns out that the kinds of advertising
that you can do on the Internet today are all
half-ass "looks good this quarter" stuff. If you want
to send a credible economic signal, buy TV time or
put a flagship store on some expensive real estate.
The Internet's got nothing for you.
Failure to create signal-carrying ad units
should be more of a concern for people
who want to earn ad money on the Internet
than it is. See Bob Hoffman's "refrigerator test."
All that work that went into building the most
complicated ad medium ever? It went into building an
ad medium optimized for low-reputation advertisers.
And that kind of ad medium tends to see rates go down
over time. It doesn't hold value.
And the medium can't gain value until the users trust
it, which means they have to trust the browser.
In-browser tracking protection is going to have
to enable the legit web advertising industry the
same way that spam filters enable the legit email industry.
I am sure Google, Facebook and lesser purveyors of advertising online will find less icky ways to stay in business; but it is becoming clear that next May 25, when the GDPR goes into full effect, will be an extinction-level event for tracking-based advertising (aka adtech) as a business model.
Personally, I'm not buying either one of these GDPR
visions. Because, just for fun and also because
reasons, I run my own mail server.
And every little decision I have to make about how
to configure the damn thing is based on playing a
game with email spammers. Regulation is a part of
my complete breakfast, but it's not the whole story.
The government doesn't give you freedom from spam.
You have to take it for yourself, one filtering rule
at a time. Or, do what most people do, and find
a company that does it for you, but it has to be a
company that you trust with your information.
A mail sender's decision to comply, or not comply,
with some regulation is a bit of information.
That feeds into the software that makes the final
decision: inbox, spam folder, or reject. When a spam
message complies with the regulations of some country,
my mail server doesn't say, "Oh, wow, compliant! I can
skip all the other checks and send this one straight
to the inbox!" It uses the regulation compliance
along with other information to make that decision.
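The point generalizes to something like this hypothetical score-based filter, where compliance lowers the spam score a little but never short-circuits the other checks (all signal names and weights are invented):

```python
# Hypothetical spam-filter-style scorer. Regulatory compliance is
# one input among several, never an automatic pass to the inbox.
# Signal names and weights here are invented for illustration.

def classify(msg):
    score = 0.0
    if msg.get("regulation_compliant"):
        score -= 1.0   # compliance helps a little...
    if msg.get("sender_on_blocklist"):
        score += 5.0   # ...but the other checks still run
    if msg.get("has_unsubscribe_link"):
        score -= 0.5
    if msg.get("known_spam_phrases", 0) > 2:
        score += 3.0
    if score >= 3.0:
        return "reject"
    if score > 0:
        return "spam_folder"
    return "inbox"

# Fully compliant, but from a blocklisted sender: still rejected.
print(classify({"regulation_compliant": True, "sender_on_blocklist": True}))   # reject
# Compliant and otherwise clean: inbox.
print(classify({"regulation_compliant": True, "has_unsubscribe_link": True}))  # inbox
```

Compliance is just one weight in the sum; the final inbox/spam/reject decision belongs to the recipient's software, not the sender.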
So whatever extra consent forms that surveillance
marketers are required to send by GDPR? They're
not the final decision on What The User Must See.
They're just data, coming over the network.
Some of that data will be interpreted to mean that
this request is an obvious mismatch with how the user
chooses to share their info. The user might not even
see those consent forms, or the browser might pop up
4 requests to do creepy shit, that's obviously against your preferences, already denied. Isn't this the best browser ever?
(No, I don't write copy for browser notifications.
But you get the idea.)
Browsers that implement tracking protection might
end up with a feature where they detect requests for
permission to do things that the user has already
said no to—by turning on tracking protection
in the first place—and auto-deny them.
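A minimal sketch of that auto-deny idea, with invented function and field names, might look like:

```python
# Sketch of the auto-deny idea: a consent request for something the
# user has already said no to (by turning on tracking protection)
# never reaches the user. Function and field names are invented.

def handle_consent_request(request, prefs):
    if prefs.get("tracking_protection") and request["purpose"] == "cross-site-tracking":
        return {"shown_to_user": False, "decision": "deny",
                "reason": "conflicts with tracking-protection setting"}
    # Anything not already covered by a preference still goes to the user.
    return {"shown_to_user": True, "decision": "ask"}

prefs = {"tracking_protection": True}
print(handle_consent_request({"purpose": "cross-site-tracking"}, prefs)["decision"])  # deny
print(handle_consent_request({"purpose": "site-analytics"}, prefs)["decision"])       # ask
```

The browser acts as the user's agent, applying a preference the user has already expressed instead of re-asking on every site.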
Legit email senders had to learn "deliverability,"
the art and science of making legit mail look
legit so that it can get past email spam filters.
Legit advertisers will have to learn that users
aren't identical and spherical, users choose tools
to implement their data sharing preferences, and that
regulatory compliance is only part of the job.
Let's run a technical challenge on the Internet.
Team A vs. Team B.
Team A gets to work where they want, when they want.
Team B has to work in an open-plan office, with
people walking behind them, talking on the phone,
doing all that annoying office stuff.
Members of Team A get paid for successful work
within weeks or months. Members of Team B get a
base salary that they have to spend on rent in an
expensive location, but just might get paid extra
for successful work in four years.
Team A will let anyone try to join, and those who
aren't successful have to drop out quickly. Team B
will only let members who are a "good cultural
fit" join, and it takes a while to get rid of an
Team A can deploy unproven work for real-world
testing, using infrastructure that they get for free
on the Internet. Team B can only deploy their work
when production-ready, on infrastructure they have
to pay for.
If Team A breaks the rules, the penalty is that
they have to spend a little money to register new
domain names. If Team B breaks the rules, they risk
lengthy regulatory and/or legal consequences.
Team A scores a win any time they can beat whoever
is the weakest member of Team B at that time. Team B
can only score a win when they can consistently defeat
all of the most active members of Team A.
Team A is adfraud.
Why is so much marketing money being bet on Team B?
The IAB roadblocked the W3C Do Not Track initiative in 2012, which was led by a cross-functional group that, most importantly, included the browser makers. In hindsight, this was the industry's only real chance to solve consumer needs around data privacy and advertising technology. The IAB wanted self-regulation, and in the end DNT died, as the IAB hoped.
As third-party tracking made the ad experience
crappier and crappier, browser makers tried to play
nice. Browser makers tried to work in the open and build consensus.
That didn't work, which shouldn't be a surprise.
Imagine if email providers had decided to
build consensus with spammers about spam
filtering rules. The spammers would have
been all like, "It replaces the principle of
consumer choice with an arrogant 'Hotmail knows best' attitude."
Any sensible email provider would ignore the spammers
but listen to deliverability concerns from senders of
legit opt-in newsletters. Spammers depend on sneaking
around the user's intent to get their stuff through,
so email providers that want to get and keep users
should stay on the user's side. Fortunately for legit
mail senders and recipients, that's what happened.
And now Google is doing their own thing.
There are some positive parts to it, but by
focusing on filtering annoying types of
ad units they're closer to the Adblock Plus
"Acceptable Ads" racket than to a real solution.
So it's better to let Ben Williams at Adblock Plus
explain that one. I still don't get how it is that
so many otherwise capable people come up with "let's
filter superficial annoyances and not fundamental
issues" and "let's shake down legit publishers
for cash" as solutions to the web advertising
problem, though. Especially when $16 billion in
adfraud money is just sitting there. It's almost as if the
Lumascape doesn't care about fraud because it's
priced in, so it comes out of the publisher's share.
The web advertising problem looks big, but I want to think positive about it:
billions of web users
visiting hundreds of web sites
with tens of third-party trackers per site.
That's trillions of opportunities for tiny victories.
Right now most browsers and most fraudbots
are hard to tell apart. Both maintain
a single "cookie jar" across trusted and
untrusted sites, and both are subject to fingerprinting.
For fraudbots, cross-site trackability is a feature. A
fraudbot can only produce valuable ad impressions
on a fraud site if it is somehow trackable from a legit site.
For browsers, cross-site trackability is a bug, for two reasons.
Leaking activity from one context to another violates widely held user norms.
Because users enjoy ad-supported content, it is in the interest of users
to reduce the fraction of ad budgets that go to fraud and intermediaries.
Browsers don't have to solve the whole web
advertising problem to make a meaningful difference.
As soon as a trustworthy site's real users look
different enough from fraudbots, because fraudbots
make themselves more trackable than users running
tracking-protected browsers do, then low-reputation
and fraud sites claiming to offer the same audience
will have a harder and harder time trying to sell
impressions to agencies that can see it's not the same audience.
Of course, the browser market share numbers will
still over-represent any undetected fraudbots and
under-represent the "conscious chooser" users who
choose to turn on extra tracking protection options.
But that's an opportunity for creative ad agencies
that can buy underpriced post-creepy ad impressions
and stay away from overvalued or worthless bot
impressions. I expect that data on who has legit
users—made more accurate by including tracking
protection measurements—will be proprietary
to certain agencies and brands that are going after
customer segments with high tracking protection
adoption, at least for a while.
There's enough bullshit on the
Internet already, but I'm afraid
I'm going to quote some more. This time from Ilyse
The reality is none of us can say with certainty that anywhere in the world, we are [brand] safe. Look what just happened with YouTube. They are working on fixing it, but even Facebook and Google themselves have said there’s not much they can do about it. I mean, it’s hard. It’s not black and white. We are putting a lot of money in it, and pull back on channels where we have concerns. We’ve had good talks with the YouTube teams.
One important part of this decision is black and white.
If Nazis are better at "programmatic" than the
resting-and-vesting chill bros at the programmatic
ad firms (and, face it, Nazis kick ass at
programmatic), then the choice to spend ad money that
way is a choice that puts your brand on the wrong
side of a black and white line.
There are plenty of Nazi-free places for brands to
run ads. They might not be the cheapest. But I know
which side of the line I buy from.
Remove all the tracking widgets from your site. That Facebook “Like” button only serves to exfiltrate your valuable data to an entity that doesn’t have your best interests at heart. If you’ve got a valuable audience, why would you want to help the ad tech industry which promises “I can find the same and bigger audience over here for $2 CPM, so don’t buy from the publisher?” Sticking your own head in the noose is never a good idea.
That advice makes sense for the Facebook "like
button." That button is just a data shoplifter.
The others, though? All those extra trackers come
in as side effects of ad deals, and they're likely
to be contractually required to make ads on the site saleable.
Yes, those trackers feed bots and data leakage,
and yes, they're even terrible at fighting adfraud.
Augustine Fou points out that fraud filters don't
work: "In some cases it's worse when filter is on."
So in an ideal world you would be able to pull all
the third-party trackers, but as far as day-to-day
operations go, user tracking is a Chesterton's Fence.
What happens if a legit site unilaterally takes
down the third-party trackers? All the targeted ad
impressions that would have given that site a (small)
payment end up going to bots.
So what can a site do? Understand that the real fix has to happen on the browser end, and
nudge the users to either make their browsers less data-leaky, or switch to browsers that
are leakage-resistant out of the box.
Start A/B testing some notifications to remind users to turn on tracking protection.
Can you get users who are already choosing "Do Not Track" to turn on real protection
if you inform them that sites ignore their DNT choice?
If a user is running an ad blocker with a paid whitelisting
scheme, can you inform them about it to get them to
switch to a better tool, or at least add a second
layer of protection that limits the damage that paid whitelisting can do?
When users visit privacy pages or opt-out of a marketing program, are they also
willing to check their browser privacy settings?
Every site's audience is different. It's hard to
know in advance how users will respond to different
calls to action to turn up their privacy and create
a win-win for legit sites and legit brands. We do
know that users are concerned and confused about
web advertising, and the good news is that a nudge is as
easy to add as yet another tracker.
If you're responsible for a brand and somewhere in
the mysterious tubes of adtech your money is finding
its way to Nazis, what is the right course of action?
One wrong answer is to write a "please help me" letter
to a company that will just ignore it. That's just
admitting to knowingly sending money to Nazis, which
is clearly wrong.
Here's another wrong idea, from
the upcoming IAB Annual Leadership
session on "brand safety" (which is the nice, sanitary
professional-sounding term for "trying not to sponsor
Nazis, but not too hard.")
Threats to brand safety arise internally and externally, in your control and out of your control—and the stakes have never been higher. Learn how to minimize brand safety risks and maximize odds of survival when your brand takes a hit (spoiler alert: overreacting is as bad as underreacting). Best Buy and Starcom share best practices based on real-world encounters with brand safety issues.
Really, people? Overreacting is as bad as
underreacting? The IAB wants you to come to a
deluxe conference about how it's fine to send a few
bucks to Nazis here and there as long as it keeps
their whole adtech/adfraud gravy train running on time.
I disagree. If Best Buy is fine with (indirectly of
course) paying the occasional Nazi so that the IAB
companies can keep sending them valuable eyeballs
from the cheapest possible sites, then I can shop
somewhere else.
Any nationalist extremist movement has its obvious
supporters, who wear the outfits and get the tattoos
and go march in the streets and all that stuff,
and also the quiet supporters, who come up with
the money and make nice with the powers that be.
The supporters who can keep it deniable.
Can I, as a potential customer from the outside,
tell the difference between quiet Nazi supporters and
people who are just bad at online advertising and end
up supporting Nazis by mistake? Of course not. Do I
care? Of course not. If you're not willing to put the
basic "don't pay Nazis to do Nazi stuff" rule ahead
of a few ad clicks, I don't want your brand anyway.
And I'll make sure to install and use the tracking
protection tools that help keep my good data
away from bad sites.
In 2018, we’ll see the rapid decline of “place-ism,” the discrimination against people who aren’t in a central office. Technology is making it easier not just to communicate with distant colleagues about work, but to have the personal interactions with them that are the foundation of trust, teamwork, and friendship.
Really, "place-ism" only works if you can afford to
overpay the workers who are themselves overpaying
for housing. And management can only afford to
overpay the workers by giving in to the temptations
of rent-seeking and deception. So the landlord
makes the nerd pay too much, the manager has
to pay the nerd too much, and you end up with,
like the man said, "debts that no honest man can
pay."
Open source business news: Docker, Inc is in
trouble. It would be easy
to see this as a run-of-the-mill open source business
failure story. But at another level, it's the story
of how the existing open source incumbents used open
source to avoid having to bid against each other for an
innovative company.
I love "nopoly controls
entire industry so there is no point in
it any more" stories: The Digital Advertising
Good news on advertising. The
Millennials are burned out on
advertising—most of what they're exposed to
is just another variant of "creepy annoying shit on
the Internet"—but the generation after the
Millennials are going to have hella mega opportunities
building the next Creative Revolution.
Bitcoin to the moooon: The futures market is
starting up, so here comes a bunch more day trader
action. More important, think about all the bucket
shops (I even saw an "invest in Bitcoin without
owning Bitcoin" ad on public transit in London),
legit financial firms, Libertarian true believers,
and coins lost forever because of human error.
Central bankers had better keep an
eye on Bitcoin, though. Last recession we saw that
printing money doesn't work as well as it used to,
because it ends up in the hands of rich people who,
instead of priming economic pumps with it, just drive
up the prices of assets. I would predict "Entire
Round of Quantitative Easing Gets Invested in Bitcoin
Without Creating a Single New Job" but I'm saving that
one for 2019. Central banks will need to innovate.
Federal Reserve car crushers? Relieve medical debt by
letting the UK operate NHS clinics at their consulates
in the USA, and we trade them US green cards for visas
that allow US citizens to get treated there?
And—this is a brilliant quality of Bitcoin
that I recognized too late—there is no bad
news that could credibly hurt the value of a purely
speculative asset.
The lesson for regular people here is not so much what
to do with Bitcoin, but remember to keep putting some
well-considered time into actions that you predict
have unlikely but large and favorable outcomes.
Must remember to do more of this.
High-profile Bitcoin kidnapping in the USA ends in tragedy:
Kidnappers underestimate the amount of Bitcoin
actually available to change hands, ask for more
than the victim's family (or fans? a crowdsourced
kidnapping of a celebrity is now a possibility) can
raise in time. Huge news but not big enough to slow
down something that the finance scene has already
bought into.
Tech industry reputation problems hit open
source. California Internet douchebags talk
like a positive social movement but act like East
Coast vampire squid—and people are finally
not so much letting them define the terms of the debate.
The real Internet economy is moving to a three-class
system: plutocrats, well-paid brogrammers with Aeron
chairs, free snacks and good health insurance,
and everyone else in the algorithmically-managed
precariat. So far, people are more concerned about
the big social and surveillance marketing companies,
but open source has some of the same issues.
Just as it was widely considered silly for people
to call Facebook users "the Facebook community" in
2017, some of the "community" talk about open source
will be questioned in 2018. Who's working for who,
and who's vulnerable to the risks of doing work that
someone else extracts the value of? College athletes
are ahead of the open source scene on this one.
Adfraud becomes a significant problem for end
users: Powerful botnets in data centers drove
the pivot to video.
Now that video adfraud is
well-known, more of the fraud hackers will move
to attribution fraud. This ties in to adtech
consolidation, too. Google is better at beating
simple to midrange fraud than the rest of the
Lumascape, so the steady progress towards a two-logo
Lumascape means fewer opportunities for bots.
Attribution fraud is nastier than
servers-talking-to-servers fraud, since it usually
depends on having fraudulent and legit client software
on the same system—legit to be used for a
human purchase, fraudulent to "serve the ad" that
takes credit for it. Unlike botnets that can run in
data centers, attribution fraud comes home with you.
Yeech. Browsers and privacy tools will need to level
up from blocking relatively simple Lumascape trackers
to blocking cleverer, more aggressive attribution fraud scripts.
Wannabe fascists keep control of the US Congress,
because of your marketing budget: "Dark" social campaigns (both ads and fake "organic" activity) are still a thing.
In the USA, voter suppression and gerrymandering have been cleverly
enough done that social manipulation can still make a difference, and it will.
In the long run, dark social will get filtered out
by habits, technology, norms, and regulation—like junk fax and email spam before it—but we don't have a "long run" between now and November 2018.
The only people who could make an impact on dark social now are the legit advertisers who don't want their brands associated with this stuff.
And right now the expectations to advertise on the major social sites are
stronger than anybody's ability to get an edgy, controversial "let's not SPONSOR ACTUAL F-----G NAZIS" plan
through the 2018 marketing budget process.
Yes, the idea of not spending marketing money on supporting nationalist extremist forums is new and different now. What a year.
Short puzzle relevant to some diversity and inclusion
threads that encourage people to share salary info.
(I should tag this as "citation needed" because I
don't remember where I heard it.)
Alice, Bob, Carlos, and Dave all want to know the
average salary of the four, but none wants to reveal
their individual salary. How can the four of them
work together to determine the average? Answer below.
Alice generates a random number, adds it to her
salary, and gives the sum to Bob.
Bob adds his salary and gives the sum to Carlos.
Carlos adds his salary and gives the sum to Dave.
Dave adds his salary and gives the sum to Alice.
Alice subtracts her original random number, divides by
the number of participants, and announces the average.
No participant had to share their real salary, but
everyone now knows if they are paid above or below
the average for the group.
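The pass-the-sum protocol above can be simulated in a few lines; the mask range is arbitrary:

```python
import random

def secure_average(salaries):
    """Simulate the protocol: the first participant adds a random mask,
    each later participant adds their salary, and the first participant
    removes the mask before dividing by the number of participants."""
    mask = random.randrange(10**9)
    running = salaries[0] + mask      # Alice: salary + random number
    for s in salaries[1:]:            # Bob, Carlos, Dave each add theirs
        running += s
    return (running - mask) / len(salaries)  # Alice unmasks and divides

print(secure_average([70_000, 80_000, 90_000, 100_000]))  # prints 85000.0
```

Note that this only protects against curious individuals acting alone: any two participants who compare the values they passed can recover the salary of the person between them.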
Stuff the Internet needs: home fiber connections, symmetrical, flat rate, on neutral terms.
Stuff the Internet is going nuts over: cryptocurrencies.
Big problem with building fiber to the home: capital.
Big problem with cryptocurrencies: stability.
Two problems, one solution? Hard to make any kind
of currency useful without something stable, with
evidence-based value, to tie its value to. Fiat currencies are tied to something of
value? Yes, people have to pay taxes in them.
Hard to raise capital for "dumb pipe" Internet service
because it's just worth about the same thing, month
after month. So what if we could combine the hotness
and capital-attractiveness of cryptocurrencies with
the stability and actual usefulness of fiber?
One quick question for anyone who still isn't
convinced that tracking protection needs to
be a high priority for web browsers in 2018.
Web tracking isn't just about
items from your online shopping cart following you
to other sites. Users who are vulnerable to abusive
practices for health or other reasons have tracking
protection needs too.
Who has access to the data from each of the 24
third-party trackers that appear on the American
Cancer Society's Find Cancer Treatment pages,
and for what purposes can they use the data?
Well, in order to help slow down the spread of
political speech enforcement that is apparently
stopping all of us cool innovator type people
from saying the Things We Can't Say, here's a Git
hook to make sure that
every time you blog, you include at least one of the
forbidden words.
If you blog without including one of the forbidden
words, you're obviously internalizing censorship
and need more freedom, which you can maybe get by
getting out of California for a while. After all,
a lot of people here seem to think that "innovation"
is building more creepy surveillance as long as you
call it "growth hacking" or writing apps to get
members of the precariat to do the stuff that your
Mom used to do for you.
You only have to include one forbidden word
every time you commit a blog entry, not in every
file. You only need forbidden words in blog
entries, not in scripts or templates. You can
always get around the forbidden word check with the --no-verify option to git commit.
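A minimal pre-commit hook along these lines might look like the following sketch. The word list and the blog/ path prefix are placeholders, not from the original post:

```python
#!/usr/bin/env python3
"""Sketch of a Git pre-commit hook enforcing the forbidden-word rule.
Install as .git/hooks/pre-commit and mark it executable. The word list
and the blog-entry path prefix are placeholders."""
import subprocess
import sys

FORBIDDEN = {"heresy", "taboo"}   # substitute your own Things We Can't Say
BLOG_PREFIX = "blog/"             # placeholder: wherever entries live

def has_forbidden(text):
    """True if the text contains at least one forbidden word."""
    lowered = text.lower()
    return any(word in lowered for word in FORBIDDEN)

def staged_blog_entries():
    """List staged files that look like blog entries."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True).stdout
    return [f for f in out.splitlines() if f.startswith(BLOG_PREFIX)]

def main():
    entries = staged_blog_entries()
    if not entries:
        return 0  # commit touches no blog entries: nothing to enforce
    for path in entries:
        with open(path, encoding="utf-8", errors="ignore") as f:
            if has_forbidden(f.read()):
                return 0  # found a forbidden word: commit allowed
    print("No forbidden words in this blog entry; add one "
          "(or bypass with git commit --no-verify).", file=sys.stderr)
    return 1  # non-zero exit aborts the commit

if __name__ == "__main__":
    sys.exit(main())
```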
Just to catch up, bug futures, an experimental
kind of agreement being developed by the Bugmark
project, are futures contracts based on the status
of bugs in a bug tracker.
For developers: visit Bugmark to find an open issue
that matches your skills and interests. Buy a futures
contract connected to that issue that will pay you
when the issue is fixed. Work on the issue, in the
open—then decide if you want to hold your contract
until maturity, or sell it at a profit. Report an
issue and pay to reward others to fix it.
For users: Create a new issue on the project
bug tracker, or select an existing one. Buy a
futures contract on that issue that will cost you a
known amount when the issue is fixed, or pay you to
compensate you if the issue goes unfixed. Reduce your
exposure to software risks by directly signaling the
project participants about what issues are important
to you. Invest in futures on an open source market.
Bug futures also open up the possibility of
incentivizing other kinds of work, such as clarifying
and translating bug reports, triaging bugs, writing
failing tests, or doing code reviews—and
especially arbitrage of bugs from project to project.
Bug futures are different from open source bounty
systems, which have been repeatedly tried but
have so far failed to take off. The big problem with
conventional open source bounty systems is that,
as far as I can tell, they fail to incentivize
cooperative work, and in a lot of situations might
incentivize un-cooperative behavior. If I find a
bug in a web application, and offer a bounty to fix
it, a developer who has made partial progress but is stuck
on the CSS might choose not to share partial work
in order to contend for the entire bounty. Likewise,
the developer who fixes the CSS part of the bug might
get nothing. The way conventional bounties
are structured, if the two wanted to split the bounty
they would need to find, trust, and coordinate with
each other. Meanwhile, if the bug was the subject of
a futures contract, the first developer could
write up a good commit message explaining how their
partial work made progress toward a fix, and offer
to sell their side of the contract. A CSS developer
could take on the rest of the work by buying out
the first developer's position.
Futures trading and risk shifts
But will bug futures tend to shift the risks
of software development away from the "owners"
of software (the owners don't have to be copyright
holders, they could be those who benefit from network
effects) and toward the workers who develop, maintain,
and support it?
I don't know, but I think that the difference between
bug trackers and piecework is where you put the
brains of the operation. In piecework and the gig
economy, the matching of workers to tasks is done by
management, either manually or in software. Workers
can set the rate at which they work in conventional
piecework, or accept and reject tasks offered to them
in the gig economy, but only management can have a
view of all available tasks.
Bug futures operate within a commons-based peer
production environment, though. In an ideal peer production
scene, all participants can see all available
tasks, and select the most rewarding tasks. Somewhere in the economics literature
there is probably a model of task selection in open
source development, and if I knew where to find it I
could put an impressive LaTeX equation right around
here. Of course, open source still has all
kinds of barriers that make matching of workers to
tasks less than ideal, but it's a good goal to keep
in mind.
If you do bug futures right, they interfere
as little as possible with the peer production
advantage—that it enables workers to match
themselves to tasks. And the futures market adds
the ability for people who are knowledgeable about
the likelihood of completion of a task, usually those
who can do the task, to profit from that knowledge.
Rather than paying a worker directly for performing a
task, bug futures are about trading on the outcomes
of tasks. When participating, you're not trading
labor for money, you're trading on information you
hold about the likelihood of successful completion
of a task. As in conventional financial markets,
information must be present on the edges, with
the individual participants, in order for them to
participate. If a feature is worth $1000 to me,
and someone knows how to fix it in five minutes, bug
futures could facilitate a trade that's profitable
to both ends. If the market design is done right,
then most of that value gets captured by the
endpoints—the user and developer who know when
to make the right trade.
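To make the two-sided trade concrete, here is a toy settlement of a single contract, following the earlier description: the user pays a known amount if the issue gets fixed and is compensated if it doesn't, with the developer on the other side. The stake amounts are invented for illustration and are not any real Bugmark contract design:

```python
# Toy settlement of one bug futures contract. Amounts are invented.

USER_STAKE = 50.0   # what the user loses if the bug is fixed (their "price")
DEV_STAKE = 450.0   # what the developer loses if it goes unfixed

def settle(issue_fixed):
    """Return net payoffs (user, developer) at contract maturity."""
    pot = USER_STAKE + DEV_STAKE
    if issue_fixed:
        return (-USER_STAKE, pot - DEV_STAKE)  # developer earns the stake
    return (pot - USER_STAKE, -DEV_STAKE)      # user compensated if unfixed

print(settle(True))   # prints (-50.0, 50.0): user happily pays for the fix
print(settle(False))  # prints (450.0, -450.0): user compensated
```

If the fix is worth $1000 to the user and costs the developer five minutes, both sides come out ahead of not trading at all, which is the point of the market.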
The transaction costs of trading in information tend
to be lower than the transaction costs of trading
in labor, for a variety of reasons which you will
probably believe in to different extents depending on
your politics. What if we could replace some direct
trading in labor with trading in the outcomes of that
labor by trading information? Lower transaction costs,
more gains from trade, more value created.
As far as I can tell, there are three kinds of open source metrics.
Impact metrics cover how much value the software
creates. Possible good ones include count of projects
dependent on this one, mentions of this project in job
postings, books, papers, and conference talks, and,
of course, sales of products that bundle this project.
Contributor reward metrics cover how the software
is a positive experience for the people who contribute
to it. Job postings are a contributor reward metric
as well as an impact metric. Contributor retention
metrics and positive results on contributor experience
surveys are some other examples.
But impact metrics and contributor reward metrics
tend to be harder to collect, or slower-moving,
than other kinds of metrics, which I'll lump
together as activity metrics. Activity
metrics include most of the things you see on
open source project dashboards, such as pull
request counts, time to respond to bug reports,
and many others. Other activity metrics can
be the output of natural language processing on
project discussions. An example of that is FOSS Heartbeat,
which does sentiment analysis, but you could also do
other kinds of metrics based on text.
IMHO, the most interesting questions in the open
source metrics area are all about: how do you
predict impact metrics and contributor reward metrics
from activity metrics? Activity metrics are easy to
automate, and make a nice-looking dashboard, but there
are many activity metrics to choose from—so
which ones should you look at?
Which activity metrics are correlated to any impact metrics?
Which activity metrics are correlated to any
contributor reward metrics?
Those questions are key to deciding which of the activity
metrics to pay attention to. I'm optimistic that we'll be
seeing some interesting correlations soon.
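One way to start looking for those correlations is to compute, for each activity metric's time series, its correlation with an impact metric, and rank the activity metrics by correlation strength. Everything in this sketch, including the data, is fabricated for illustration:

```python
# Rank activity metrics by how well they correlate with a slower-moving
# impact metric. All numbers are made up for illustration.
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

activity = {   # monthly activity metrics (fabricated)
    "pull_requests":        [10, 12, 15, 20, 24, 30],
    "median_response_days": [9, 8, 8, 7, 5, 4],
}
dependent_projects = [100, 110, 125, 150, 170, 200]  # impact metric (fabricated)

ranked = sorted(activity,
                key=lambda m: abs(pearson(activity[m], dependent_projects)),
                reverse=True)
for m in ranked:
    print(m, round(pearson(activity[m], dependent_projects), 2))
```

With real project data, strong and stable correlations would be the candidates for a dashboard; weak ones are the activity metrics to stop staring at.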
40 trackers. Not bad, but not
especially good either. That purple streak
of data leakage—third-party trackers that
forced Linux Journal into an advertising race to the
bottom against low-value and fraud sites—is not
so deep as a well, nor so wide as a church door...but
it's there. A magazine that was a going concern in
print tried to make the move to the web and didn't
make it.
Right now, advertising on the site you're
writing to probably isn't saleable without the
creepy trackers. (User tracking as Chesterton's
Fence, again.)
So what can privacy people productively ask sites for?
Some good ones are:
Fix any "turn off your ad blocker" scripts to detect
ad blockers only, and not falsely alert on privacy tools.
Remove links to the confusing and broken
"YourAdChoices" site. Adtech opt-outs don't cover
all trackers, and are much less effective than real
privacy tools. (I have never had all the opt-outs
work on that site, even from a fresh, pristine
browser. Somehow I get the sense that the adtech
firms don't exactly put their best people on it.)
Link to the privacy pages for the third parties
the site uses. If the advertising on the site
is set up so that this is hard to do, and users
might see a tracker from an unknown domain, say so.
Fix up the privacy page to add links to appropriate
privacy tools based on the user's browser.
Better to have users on privacy tools
than get enrolled in a paid whitelisting
scheme.
If you maintain a privacy tool, offer to do a
campaign with the site. Privacy tool users are
high-quality human traffic. Free or discounted
privacy tools might work as a subscription
promotion. Where's the win-win?
Asking a site to walk away from money with no
credible alternative is probably not going to work.
Asking a site to consider next steps to get out of the
current web advertising mess? That might.
In the country where I live, kidnapping for ransom
is not a very common crime.
That's because picking up the ransom is too risky.
It's easy to kidnap someone, and easy to let the
person go when the ransom is paid, but picking up
the ransom exposes you. Wannabe kidnappers who are
motivated by money tend to choose other crimes.
As the [family relationship redacted] of a [family
member information redacted], I'm happy that
kidnapping is difficult here. High transaction costs
for some kinds of transaction are a good thing.
Now, here comes Bitcoin.
As we're already seeing with ransomware,
harder-to-trace ransom drops are now a thing.
So, even though I don't actually hold Bitcoin, someone
could grab my family member (low risk), demand that
I exchange some of my conventional assets for Bitcoin
(low risk) and send the Bitcoin as ransom (low risk).
The balance between risk and reward for the crime of
kidnapping for ransom has changed.
Make the Bitcoin business eat the costs of payments
made under duress.
New rule: If I ever trade any assets for Bitcoin in
order to comply with a threat, and then transfer
the Bitcoin under duress (kidnapping, ransomware,
whatever), then I can go back to whoever I gave the
assets to with a copy of the police report on the
incident and get my original assets (and any fees)
back.
Yes, that makes it harder for regular people to trade
assets for Bitcoin. Exchanges would have to hold the
money for a while, check that I'm not under duress,
and probably do all kinds of other pain-in-the-ass,
possibly costly, work. But I'd rather have that than
a world where kidnapping pays.
But what about sites that mistakenly detect Tracking
Protection as "an ad blocker" and give you grief
about it? Do you have to turn Tracking Protection off?
So far I have found that the
answer is usually no. I can usually use NJS.
(After all, if a web developer can't tell an ad
blocker from a tracking protection tool, I don't
want to run their JavaScript anyway.)
NJS will also deal with a lot of "growth hacking"
tricks such as newsletter signup forms that appear in
front of the main article. And it defaults to on,
until I decide that they're better off without it.
HTTPS Everywhere. This is pretty basic. Use the encrypted version of a site where available.
Link Cleaner. Get rid of crappy tracking parameters in URLs, and speed up
some navigation by skipping data collection redirects.
Ever notice how the sites that do the most tracking
also do the most annoying "growth
hacking" such as newsletter popups? This add-on keeps
that stuff in check.
Privacy Badger is not on here just because I'm using Firefox Tracking Protection. I like both.
Blogging, development and testing
blind-reviews. This is an experiment to help break your own habits of
bias when reviewing code contributions. It hides the contributor name and email when you first see the code, and you can reveal it later.
Right now it just does Bugzilla, but watch this space for an upcoming GitHub version. (more info)
Copy as Markdown. Not quite as full-featured as the old "Copy as HTML Link" but
still a time-saver for blogging. Copy both the page title and URL, formatted as Markdown, for pasting into a blog.
Firefox Pioneer. Participate in Firefox user research. Studies have extremely strict and detailed privacy policies.
Test Pilot. Try new Firefox features. Tracking Protection was on Test Pilot for a while. Right now there is a new speech
recognition one, an in-browser notepad, and more.
Advanced (for now) nerdery
Cookie AutoDelete. Similar to the old "Self-Destructing Cookies". Cleans up cookies after leaving a site.
Useful but requires me to whitelist the sites where I want to stay logged in. More time-consuming than other privacy tools.
Privacy Pass is new. It interacts with
supporting websites to introduce an anonymous
user-authentication mechanism. In particular,
Privacy Pass is suitable for cases where a user
is required to complete some proof-of-work
(e.g. solving an internet challenge) to
authenticate to a service. Right now I don't
use any sites that have it, but it could be a great
way to distribute "tickets" for reading articles
or leaving comments.
What to use instead? For most
people, either the built-in Firefox Tracking
Protection or EFF's Privacy Badger
will provide good protection. I would try one or
both of those before a conventional ad blocker.
If sites have a broken ad blocker detector that
falsely identifies a tracking protection tool
as an ad blocker, you can usually get around it.
If you still want to get rid of more ads
and join the blocker vs. anti-blocker
game (I don't), there's always uBlock Origin,
which does not do paid whitelisting.
(The project site has more
info.) But try
either the built-in tracking protection or Privacy
Badger first.
From the point of view of users,
web advertising has failed to hold
up its end of the signal-for-attention bargain
and substituted nasty attempts at manipulation. No
wonder people block it.
From the point of view of clients, web
advertising has failed to meet the basic honesty
standards that any third-rate print publication can.
And every web advertising company is calling fraud an
industry-wide problem, which is what business
people say when they really don't care about fixing it.
From the point of view of publishers, web
advertising has failed to show the proverbial money.
It's stuck at a fraction of the value per user minute
that print can pull in, which means that as print
goes away, so does the ad money.
Web advertising has failed the audience, the
advertisers, and the people who make ad-supported
news and cultural works. Maybe I should go
be a fan of something else, like securitizing bug
futures or something. Web advertising is just that
annoying, creepy thing that browsers are
competing to block in different, creative ways.
"[T]he online ad sector transitioned from a
creative-led industry to a data and algorithms-led
industry," wrote venture capitalist Adam,
who is understandably proud of not investing in it.
Some new companies, such as
Scroll, are all about
making it easier for readers to buy out of seeing
advertising. Advertising is to web sites as annoying
"UNREGISTERED SHAREWARE" banners and dialogs are to shareware.
At least search advertising is working. Bob Hoffman calls
it a "much better yellow pages." But any kind of
brand-building, signal-carrying advertising, where
most of the money is? Not there. Ever notice how
thin the evidence for "data-driven" advertising is?
Is anyone speaking up for web advertising?
Not really. Where advertising still has a policy
voice, it's a bunch of cut-and-paste anti-privacy
advocacy that sounds like what you might get from
eighth grade Libertarians, or from people who are so
bad at math they assume that it's humanly possible
to read and understand Terms of Service from 70
third-party trackers on one web page. The Interactive
Advertising Bureau has become the voice of schemes
that are a few pages of fine print away from
malware and spam. By expanding to include members
whose interests oppose those of legit publishers and advertisers,
and defending every creepy user privacy violation
scheme that the worst members come up with, an
organization that could have been a voice for
pro-advertising policy positions has made itself
meaningless. Right now the IAB is about as relevant
to web advertising policy as the Tetraethyl Lead
Industry Association is relevant to transportation policy.
Bad news all the way around, right? But some of us
have been somewhere like this before.
Remember the operating systems market in the late 1990s?
All the right-thinking people were going Windows NT.
Yes, even Tim O'Reilly, who built version 1.0 of
his company on Unix, had apparently written it off.
The spring 1998 O’Reilly catalog had all Windows
books on the cover, and the Unix stuff was in back.
O’Reilly and Associates was promoting the
company’s first and only shrink-wrap software,
a web server for Windows NT.
And why not? Bickering Unix vendors were doing
short-sighted stunts such as removing the compiler
from the basic version, and charging hard-to-justify
prices for workstations and servers that users could
beat with a properly-configured PC. Who needed it?
We know what happened shortly after that. The Unix
scene (did anyone ever make a
"Lumascape"-like chart of the Unix vendors?)
faded away and, with enough drama to make for good
IT news coverage but not enough to interfere with
successful efforts to fix the Year 2000 Problem,
the Linux scene replaced it.
The good news is that people employed in the Unix
scene were able to move, in most cases happily, to the
Linux scene. (Which is big enough that it has become
the OS for the "IoT", "SaaS", and "Cloud" businesses,
and a majority of "mobile" by units, but not, of
course, profits.) So maybe my experience living through
the end of Unix is why I'm still a web advertising
optimist. The economic niche for advertising hasn't
gone away. Just as software had to get some important
decisions right in order to make the Linux boom
happen, web advertising is so close to getting it
right, too. Now that we know the basics...
People have norms about data sharing. Browsers
must reflect those norms or get replaced.
Always run a shell script in the directory in which
it appears, and change back to the directory you were
in when you ran it even if it fails.
trap popd EXIT
pushd "$(dirname "$0")"
Works for me in bash. The pushd command does a cd
but saves the directory where you were on a stack, and
popd pops the saved directory from the stack. The
trap ... EXIT is a bash way to run something when
the script exits, no matter how, and dirname "$0"
is the directory name of the script.
(Taken from the deploy.sh script that rebuilds and
deploys this blog, so if you can read this, it works.)
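As a quick check of this pattern, here is a self-contained sketch (the `build.sh` name and temp directory are made up): it creates a throwaway script using the trap/pushd trick, runs it from a completely different directory, and confirms that a relative path inside the script resolves next to the script rather than in the caller's working directory.

```shell
# Sketch only; file names are hypothetical.
set -e
tmp="$(mktemp -d)"
cat > "$tmp/build.sh" <<'EOF'
#!/bin/bash
trap popd EXIT
pushd "$(dirname "$0")" > /dev/null
echo built > output.txt   # relative path: lands next to the script
EOF
chmod +x "$tmp/build.sh"
cd /                       # run it from somewhere else entirely
"$tmp/build.sh"
test -f "$tmp/output.txt" && echo "output landed next to the script"
```

The point of the trick is that any relative paths the script uses (templates, config, output files) keep working no matter where you invoke it from.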
What's this? It's basically the spawn of Git
and a NoSQL database. So why would anybody want
to make that? For Science, of course. A lot of
research produces huge data files, and people would
like to have a resilient way to collaborate on them,
using commands they already know, but have it
scale horizontally across large numbers of nodes.
Git has the advantage that a lot of people know it,
but it doesn't really handle huge files that well.
There are add-on solutions to make it work by
connecting to another system for handling large files,
but then you have to set up and trust two systems.
And one of my favorite properties of Git is that any
authorized user of a project can check the integrity
of the entire project back to the beginning.
So what Attaca does is to consistently split huge
files across a cluster, using cluster nodes that can
be cheap VPSs, low-end servers with spinning disks,
whatever. (In the test environment, nodes are just local processes.)
Next steps are to test it out with some scientific
data (genomes, medical imaging, and so on), implement
some more Git commands so that people can check files
out and not just in, and build a (Raspberry Pi?) demo cluster.
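The core idea, splitting big files into chunks and addressing each chunk by its hash, git-style, can be sketched in plain shell. This is only an illustration of content addressing, not Attaca's actual chunk format or replication protocol; the chunk size and file names here are made up.

```shell
# Illustration only: fixed-size chunks named by SHA-256, plus a
# manifest that lets anyone re-verify and reassemble the file.
set -e
work="$(mktemp -d)"
cd "$work"
head -c 3000000 /dev/urandom > bigfile.dat   # stand-in for a huge data file
mkdir chunks
split -b 1000000 bigfile.dat piece.          # three 1 MB pieces
for p in piece.*; do
    h="$(sha256sum "$p" | awk '{print $1}')"
    mv "$p" "chunks/$h"                      # chunk is addressed by content
    echo "$h" >> manifest                    # ordered list of chunk hashes
done
# Reassemble from the manifest and check integrity end to end.
while read -r h; do cat "chunks/$h"; done < manifest > rebuilt.dat
cmp -s bigfile.dat rebuilt.dat && echo "round trip OK"
```

Because a chunk's name is its hash, any node can verify a fetched chunk by re-hashing it, which is the property that lets every authorized user check the integrity of the whole project.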
Please stop by our demo of "Trading futures, fixing bugs: a live smart contracts installation."
What is it?
Bugmark is a market that connects people who want better software to the people
who can build it.
In order to make open collaboration more effective, we are using
simple market mechanisms to add incentives to do useful work.
Bugmark allows you to
Put financial value directly in the hands of the
people who can fix the software issues that are most
important to you.
Discover which issues really matter to your project's users.
Work with open source practices and not against them. Solve part of a problem and still get paid, instead of
contending to claim credit for a bounty payment.
Find an issue, fix it, and earn money
Visit Bugmark to find an open issue that matches
your skills and interests. Buy a futures contract
connected to that issue that will pay you when
the issue is fixed. Work on the issue, in the
open—then decide if you want to hold your
contract until maturity, or sell it at a profit.
Report an issue and pay to reward others to fix it
Create a new issue on the project bug tracker, or
select an existing one. Buy a futures contract on
that issue that will cost you a known amount when the
issue is fixed, or pay you to compensate you if the
issue goes unfixed. Reduce your exposure to software
risks by directly signaling the project participants
about what issues are important to you.
Invest in futures on an open source market
Development isn't the only task required to make a
software project a success. You can trade futures
to earn a profit from other vital tasks, such as
clarifying and translating bug reports, triaging bugs,
writing failing tests, or doing code reviews.
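To make the payoff structure concrete, here is a toy sketch with made-up numbers (this is my reading of the two-sided contract described above, not Bugmark's actual pricing): a contract with a face value of 100 pays the "fixed" side if the issue is closed by maturity, and the "unfixed" side otherwise.

```shell
# Toy numbers only, not Bugmark's real contract terms.
face=100          # paid out to the winning side at maturity
fixed_price=30    # a would-be fixer buys the "fixed" side for 30
unfixed_price=$((face - fixed_price))   # the reporter pays 70 for "unfixed"

# Outcome 1: the issue gets fixed. The fixer collects the face value.
fixer_profit=$((face - fixed_price))
# Outcome 2: it stays unfixed. The reporter is compensated.
reporter_profit=$((face - unfixed_price))

echo "fixer nets $fixer_profit if the issue is fixed"
echo "reporter nets $reporter_profit if it stays unfixed"
```

Either way, the losing side's stake funds the winner, so prices on the market signal which issues participants actually expect to get fixed.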
Looking for a way to get dedicated readers
to un-block some of the ads on your site?
One way could be to update and integrate the AdLeaks concept:
Our ads contain code that encrypts an empty message with the AdLeaks public key and sends the ciphertext back to AdLeaks. This happens on all users' web browsers. A whistleblower's browser substitutes the ciphertext with encrypted parts of a disclosure. The protocol ensures that an adversary who can eavesdrop on the network communication cannot distinguish between the transmissions of regular browsers and those of whistleblowers' browsers.
Naturally sites would want to encourage whistleblowers
(and others) to block the regular creepy ad
trackers—but building post-creepy ads and
hooking this up to them could be a way to encourage
the dedicated readers to treat the high-reputation
ads differently from the low-reputation ones.
(I work for Mozilla. None of this is secret. None of this is official Mozilla policy. Not speaking for Mozilla here.)
The following is an interesting business model, so I'm
going to tell it whether it's true or not. I once
talked with a guy from rural China about the tofu
business when he was there. Apparently, considering
the price of soybeans and the price you can get for
the tofu, you don't earn a profit just making and
selling tofu. So why do it? Because it leaves you
with a bunch of soybean waste, you feed that to pigs,
and you make your real money in the hog business.
Which is sort of related to the problem that (all
together now) hard news isn't brand-safe. It's
hard to sell travel agency ads on a plane crash story,
or real estate ads on a story about asbestos in the
local elementary schools, or any kind of ads on a
disturbing, but hard to look away from, political story.
In the old-school newspaper business, the profitable
ads can go in the lifestyle or travel sections, and
subsidize the hard news operation. The hard news is
the tofu and the brand-friendly sections are the hogs.
On the web, though, where you have a lot of readers
coming in from social sites, they might be getting
their brand-friendly content from somewhere else.
Sites that are popular for their hard news are stuck
with just the tofu.
The browser may have to adapt to
treat trustworthy and untrustworthy sites differently,
in order to come up with a good balance of keeping
sites working and implementing user norms on data
sharing. Will news sites that publish hard news
stories that are often visited, shared, and commented
on, get a user data advantage that translates into
ad saleability for their more brand-safe pages?
Does better user data control mean getting the hog business back?
(I work for Mozilla. None of this is secret. None of this is official Mozilla policy. Not speaking for Mozilla here.)
Browsers are going to have to change tracking
protection defaults, just because the settings
that help acquire and retain users are different
from the current defaults that leave users fully
trackable all the time. (Tracking protection
is also an opportunity for open web players to
differentiate themselves from mobile tracking platforms.)
Before switching defaults, there are a bunch of
opportunities to do collaboration and data collection
in order to make the right choices and increase user
satisfaction and trust (and retention). Interestingly
enough, these tend to give an advantage to any browser
that can attract a diverse, opinionated, values-driven user base.
Do innovation challenges and crowdsourcing for
tracking protection tools. Use the results to expand
the available APIs and built-in options.
Develop a variety of tracking protection
methods, and ship them in a turned-off
state so that motivated users can find
the configuration and experiment with
them, and to enable user research. Borrow
approaches from other browsers (such as Apple's
Intelligent Tracking Prevention) where possible, and test them.
For example: avoid blocklist politics, and increase
surveillance marketing uncertainty, by building
Privacy-Badger-like tracker detection. Enable
tracking protection without the policy implications
of a top-down list. This is an opportunity for a
crowdsourcing challenge: design better algorithms
to detect trackers, and block them or scramble the data they collect.
Ship alternate experimental builds of the browser,
with privacy settings turned on and/or add-ons pre-installed.
Communicate a lot about capabilities, values,
and research. Spend time discussing what the
browser can do if needed, and discussing the
results of research on how users prefer to share
their personal info.
Only communicate a little about future defaults.
When asked about specifics, just say, "we'll
let the user data help us make that decision."
(Do spam filters share their filtering rules with
spammers? Do search engines give their algorithms
to SEO consultants?)
Build functionality to "learn" from the user's
activity and suggest specific settings that
differ from the defaults (in either direction).
For example, suggest more protective settings
to users who have shown an interest in
privacy—especially users who have installed
any add-on whose maintainers misrepresent it as
a privacy tool.
Do research to help legit publishers and marketers learn more
about adfraud and how it is enabled by the same
kinds of cross-site tracking that users dislike.
As marketers better understand the risk levels of
different approaches to web advertising, make it
a better choice to rely less on highly intrusive
tracking and more on reputation-driven placements.
Provide documentation and tutorials to help web
developers develop and test sites that
will work in the presence of a variety of privacy
settings. "Does it pass Privacy Badger" is a good
start, but more QA tools are needed.
If you do it right, you can force up the risks of future
surveillance marketing just by increasing the uncertainty
of future user trackability, and drive more marketing
investment away from creepy projects and toward pro-web, pro-user ones.
This is OFF MESSAGE. No Mozilla policy here.
This is my personal blog.
(This is the text from my talk at the Reynolds
Journalism Institute's Revenue Models That Work
event, with some links added. Not exactly as delivered.)
Hi. I may be the token advertising optimist here.
Before we write off advertising, I just want
to try to figure out the answer to: why can't
Internet publishers make advertising work as well
as publishers used to be able to make it work when
they were breathing fumes from molten lead all day?
Has the Internet really made something that much harder?
I have bought online advertising, written and edited
for ad-supported sites, had root access to some
of the servers of an adtech firm that you probably
have cookies from right now, and have written an ad
blocker. Now I work for Mozilla. I don't have
any special knowledge on what exactly Mozilla intends
to do about third-party cookies, or fingerprinting,
or ad blocking, but I can share some of what I have
learned about users' values, and some facts about the
browser business that will inform those decisions for
Mozilla and other browsers.
First of all, I want to cover how new privacy
tools are breaking web advertising as we know it.
But that's fine. People don't like web advertising
as we know it.
They surveyed 300 publishers, adtech people, brands,
and various others, on whether users will consent
to tracking under the GDPR and the ePrivacy Regulation.
The survey asked if users would allow for tracking on
one site only, and for one brand only, in addition to
“analytics partners”. 79% of respondents said
they would click “No” to this limited consent request.
And what kind of tracking policy would people prefer
in the browser by default? The European Parliament
suggested that “Accept only first party tracking”
should be the default. But only 20% of respondents
said they would select this. Only 5% were willing
to “accept all tracking”. 56% said they would
select “Reject tracking unless strictly necessary
for services I request”. The very large majority
(81%) of respondents said they would not consent to
having their behaviour tracked by companies other
than the website they are visiting.
Users say that they really don't like being tracked.
So, right about now is where you should be pointing
out that what people say about what they want is
often different from what they do.
It's hard to see exactly what
people do about particular ads, but we can see some
indirect evidence that what people do about creepy
ads is consistent with what they say about privacy.
First, ad blockers didn't catch on until people started to see retargeting.
Second, companies indirectly reveal their user research in
policies and design decisions.
Back in 1998, when Google was still
"google.stanford.edu" I wrote an ad blocker.
And there were a bunch of other pretty good ones
in the late 1990s, too. WebWasher, AdSubtract,
Internet Junkbuster. But none of that stuff caught
on. That was back when most people were on dialup,
and downloading a 468x60 banner ad was a big deal.
That's before browsers came with pop-up blockers,
so a pop-up was a whole new browser window and those
could get annoying real fast.
But users didn't really get into
ad blocking. What changed between then and now?
Retargeting. People could see that the ad on one
site had "followed them" from a previous site.
That creeped them out.
Some Facebook research clearly led in the same direction.
As we should all know by now, Facebook enables an
extremely fine level of micro-targeting.
Yes, you can target 15 people in Florida.
But how do the users feel about this?
We can't see Facebook's research. But we
can see the result of it, in Facebook Advertising Policies.
If you buy an ad on Facebook, you can target people
based on all kinds of personal info, but you can't
reveal that you did it.
Ads must not contain content that asserts or
implies personal attributes. This includes direct
or indirect assertions or implications about a
person’s race, ethnic origin, religion, beliefs,
age, sexual orientation or practices, gender identity,
disability, medical condition (including physical
or mental health), financial status, membership in
a trade union, criminal record, or name.
So you can say "meet singles near you" but you
can't say "other singles". You can offer depression
counseling in an ad, but you can't say "treat your depression."
Facebook is constantly researching and tweaking
their site, and, of course, trying to sell ads.
If personalized targeting didn't creep people the
hell out, then the ad policy wouldn't make you hide
that you were doing it.
All right, so users don't want to be followed around.
Content Neutrality: Content blocking software
should focus on addressing potential user needs
(such as on performance, security, and privacy)
instead of blocking specific types of content (such as advertising).
Transparency & Control: The content blocking
software should provide users with transparency and
meaningful controls over the needs it is attempting to address.
Openness: Blocking should maintain a
level playing field and should block under the
same principles regardless of source of the
content. Publishers and other content providers
should be given ways to participate in an open Web
ecosystem, instead of being placed in a permanent
penalty box that closes off the Web to their products and services.
If we have all those great values though, why aren't
we doing more to protect users from tracking?
Here's the problem from the browser point of view.
Firefox has had a tracking protection feature in Private Browsing since 2015.
But the mainstream browsers have always been held
back by two things.
First, browser developers have been cautious about not breaking
sites. We know that users prefer not to be
tracked from site to site, but we know that they
get really mad when a site that used to work just
stops working. There is a lot of code in a lot of
browsers to handle stuff that no self-respecting
web designer has done for decades. Remember
the 1996 movie "Space Jam"? Check out the web site
some time. It's a point of pride to keep all that
1996 web design working. And seriously, one of
those old 1996 vintage pages might be the web-based
control panel for somebody's emergency generator, or
something. Yes, browsers consider the users' values
on tracking, but priority one is not breaking stuff.
And that includes third-party resources that are not
creepy ad trackers—stuff like shopping carts and
comment forms and who knows what.
Besides not breaking sites, the other thing that keeps
browsers from implementing users' values on tracking
is that we know people like free stuff. For a long
time, browsers didn't have enough good data,
so browsers deferred to the adtech business when they
talk about how sites make money. It looks obvious,
right? Sites that release free stuff make money from
ads, ads work a certain way, so if you interfere
with how the ads work, then sites make less money,
and users don't get the free stuff.
Mozilla backed down on third-party cookies in 2013,
and again on tracking protection in 2015.
Microsoft backed down on Tracking Protection Lists.
Both times, after the adtech industry made a big fuss about it.
So what changed? Why is now different?
Well, that's an easy answer, right? Apple put
Intelligent Tracking Prevention into their Safari
browser, and now everybody else has to catch up.
Apple so far is putting
their users ahead of the usual alarmed reactions
from the adtech people. Steven Sinofsky, former
president of the Windows Division at Microsoft,
Stand strong Apple [rhetorical]. Had these groups come after us trying to offer browsing safety. MS backed down. https://t.co/L4saQpdE7l
You're going to see other browsers make moves that
look like they're "following Safari" but really,
browsers are not so much following each other as
making similar decisions based on similar information.
When users share their values they say that they want
control over their information.
When users see advertising that seems "creepy"
we can see them take steps to avoid ads following them.
Some people say, well, if users really want privacy,
why don't they pay for privacy products? That's not
how humans work. Users don't pay for privacy, because
we don't pay other people to come into compliance with
basic social norms. We don't pay our loud neighbors
to quiet down.
Apple looks like a magic company that releases
magic things that they make up out of their own
heads. "Designed by Apple in California." This is
a great show. It's part of their brand. I have
a lot of respect for their ability to make things
But that doesn't mean that they just make stuff up.
Apple does a lot of user research. Every so often
we get a little peek behind the curtain when there
is discovery in a lawsuit. They do research on their
own users, on Samsung's users, everybody.
Mozilla has user research, too.
For a long time, browser people thought that there
was a conflict between giving the users something
that complies with their tracking norms and giving
them something that keeps them happy with the sites
they want to use.
But now it turns out that we have some ways that we
could act in accordance with user values that also
produce measurably more satisfied users.
How badly does privacy protection break sites?
Mozilla's testing team has built, deployed to users,
and tested nine different sets of cookie and tracking protection settings.
Lots of people thought there are going to be things
that break sites and protect users, or leave sites
working and leave users vulnerable.
It turns out that there is a configuration that gives
both better values alignment and less breakage.
Because a lot of that breakage is caused by the tracking scripts themselves.
We're learning that in a few important areas,
even though Apple Safari is in the lead, Apple's
Intelligent Tracking Prevention doesn't go far enough.
What users want
It turns out that when you do research with people
who are not current users of ad blockers, and
offer them choices of features, the popular choices
are tracking blockers, malvertising protection,
and blocking annoying ads such as auto-play videos.
Among those users who aren't already using an ad
blocker, the offer of an ad blocker wasn't as popular.
Yes, people want to see fewer annoying ads. And
nobody likes malware. But people are also interested
in protection from tracking. Some users even put
tracking protection ahead of malvertising protection.
If you only ask about annoying ad formats you get a
list of which ad formats are popular now but get on
people's nerves. This is where Google is now. I have
no doubt that they'll catch up. Everyone who’s ever
moderated a comment section knows what the terrible
ads are. And any publisher has the motivation to
moderate and impose standards on the ads on their
site. Finding which ads are the crappy ones is
not the problem. The problem is that legit sites
and crappy sites are in the same ad space market,
competing for the same eyeballs. As a legit site,
you have less market power to turn down an ad that
does not meet your policies.
We are coming to an understanding of where users
stand. In a lot of ways we're repeating the early
development of spam filters, but in slow motion.
Today, a spam filter seems like a must-have feature
for any email service. But MSN started talking about
its spam filtering back when Sanford Wallace, the
“Spam King,” was saying stuff like this.
I have to admit that some people hate me,
but I have to tell you something about hate. If
sending an electronic advertisement through email
warrants hate, then my answer to those people is
“Get a life. Don’t hate somebody for sending an
advertisement through email.” There are people out
there that also like us.
According to spammers, spam filtering was just
Internet nerds complaining about something that
regular users actually like. But the spam debate
ended when big online services, starting with MSN,
started talking about how they build for their
real users instead of for Wallace's hypothetical spam fans.
If you missed the email spam debate, don’t
worry. Wallace’s talking points about spam filters
constantly get recycled by the IAB and the DMA,
every time a browser makes a move toward tracking
protection. But now it’s not email spam that users
supposedly crave. Today, they tell us that users
really want those ads that follow them around.
So here's the problem. Users are clear about their
values and preferences. Browsers must reflect user values
and preferences. Browsers have enough of a critical mass
of users demanding better protection from tracking
that browsers are going to have to move or become irrelevant.
That's what the email providers did on spam.
There were not enough pro-spam users to support an
email service without a spam filter.
And there may not be enough pro-targeting users to
support a browser without privacy tools.
As I said, I do not know exactly how Mozilla is going
to handle this, but every browser is going to have to.
But I can make one safe prediction.
Browsers need users. Users prefer tracking
protection. I'm going to make a really stupid,
safe prediction here.
User adoption of tracking protection will not affect
the amount of user data available, or affect any
measurement of number of targeted ad impressions
available in any way.
Every missing trackable user will be replaced by an adfraud bot.
Every missing piece of user data will be replaced by
an "inferred" piece of data.
How much adfraud is there really?
There are people who will stand up and say that
we have 2 percent fraud, or 85 percent. Of course
it's different from campaign to campaign and some
advertisers get burned worse than others.
You can see "IAS safe traffic" on fraud boards.
Because video views are worth so much more,
the smartest bots go there. We do know that when
you look for adfraud seriously, you can find it.
Just recently the Financial Times found a bunch.
The publisher has
found display ads against inventory masquerading as
FT.com on 10 separate ad exchanges and video ads on
15 exchanges, even though the FT doesn’t even sell
video ads programmatically, with 300 accounts selling
inventory purporting to be the FT’s. The scale of
the fraud uncovered is vast — the equivalent of one
month’s supply of bona fide FT.com video inventory
was fraudulently appearing in a single day.
[W]e were seated next to the head of this advertising
company, who said to me something like, "Well, I
really always liked AllThingsD and in your first week
I think Recode’s produced some really interesting
stuff." And I said, "Great, so you’re going to
advertise there, right? Or place ads there." And he
said, "Well, let me just tell you the truth. We’re
going to place ads there for a little bit, we’re
going to drop cookies, we’re going to figure out
who your readers are, we’re going to find out what
other websites they go to that are way cheaper than
your website and then we’re gonna pull our ads from
your website and move them there."
The current web advertising system is based
on paying publishers less and charging brands more.
Revenue share for legit publishers is at 30 to 40 percent,
according to the Association of National Advertisers.
But all revenue split numbers are wrong because
undetected fraud ends up in the ‘publisher’ share.
When your model is based on data leakage, on catching
valuable eyeballs on cheap sites, the inevitable
overspray is fraud.
People aren't even paying attention to what could be the biggest form of adfraud.
Part of the conventional wisdom on adfraud is that
you can beat it by tracking users all the way to a
sale, and filter the bots out that way. After all,
if they made a bot good enough to actually buy stuff
it wouldn't be a problem for the client.
But the attribution models that connect impressions to
sales are, well, they're hard enough to understand
that most of the people who understand them are
probably fraud hackers.
The dispute between Steelhouse and Criteo
settled last year, so we didn't get to see how two
real adtech companies might or might not have been
hacking each other's attribution numbers.
But today we have another chance.
I used to work for Linux Journal, and we followed
the SCO case pretty intently. There was even
a dedicated news site just about the case, called
Groklaw. If there's a case that needs a Groklaw for
web advertising, it's Uber v. Fetch.
This is the closest we have to a tool to help us
understand attribution fraud. When the bad guys
have the ability to make bogus ads claim credit for
real sales, that's a much more powerful motivation
for fraud than just making a bot that looks like
a real user watching a video.
Legit publishers have a real incentive to find and
control adfraud. Adtech intermediaries, not so much.
That's because the core value of ad tech is to find
the big money user at the cheapest possible site. If
you create that kind of industry, you create the
incentive for fraud bots who appear to be members
of a valuable audience. You create incentives to
produce fraudulent sites because all of a sudden,
those kinds of sites have market value that they
would not otherwise have had because of data leakage.
As browsers and sites implement user norms on
tracking, they get fraud protection for free.
So where is the outrage on adfraud?
I thought I could write a script for a heist movie about adfraud.
At first I thought, this is awesome! Computer
hacking, big corporations losing billions of
dollars—should be a formula for an awesome heist
Every heist movie has a bunch of scenes that
introduce the characters, you know, getting the crew
together. Forget it. All the parts of adfraud can be
done independently and connected on the free market.
It's all on a bunch of dumb-looking PHP web boards.
There go a whole bunch of great scenes.
Hard-boiled detectives trying to catch the gang? More
like over easy. The adtech industry "committed
$1.5 million in funding" (and set up a 24-member
committee!) to fight an eleven billion dollar
problem. Adfraud isn't taking candy from a baby,
it's taking candy from a dude whose job is giving
away candy. More fraud means more money for adtech intermediaries.
Dramatic risk of getting caught? Not a chance of going
to prison—the worst that happens is that some of
the characters get their accounts or domains banned,
and they have to make new ones. The adfraud movie's
production designer is going to have to work awful
hard to make that "Access denied" screen look cool
enough to keep the audience awake.
So the movie idea is a no-go, but as people learn
that today's web ads don't just leave the publisher
with 30 percent but also feed fraud, we should see
a flight to quality effect.
The technical decisions that enabled the Lumascape
to rip off Walt Mossberg are the same decisions that
facilitate fraud, are the same decisions that make
users come looking for tracking protection.
I said I was an advertising optimist and here's why.
The tracking protection trend is splitting web advertising.
We have the existing high-tracking, high-fraud market
and a new low-tracking opportunity.
Some users are getting better protected from tracking.
The bad news is that it will be harder to serve
those users a lucrative ad enabled by third-party tracking data.
The good news is that those users can't be tracked
from high-value to low-value sites. Those users
become easier to tell apart from fraudbots.
For that subset of users, web advertising starts
to shift from a hacking game to a reputation game.
In order to sell advertising you need to give
the advertiser some credible information on who
the audience is. Most browsers have been bad at
protecting personal information about the user,
so web advertising has become a game where a whole
bunch of companies compete to covertly capture as
much user info as they can.
But some browsers are getting better at implementing
people’s preferences about sharing their
information. The result, for those users, is a
change in the rules of the game. Investment in taking
people’s personal info is becoming less rewarding,
as browsers compete to reflect people’s preferences.
And investments in building sites and brands that are
trustworthy enough for people to want to share their
information will tend to become more rewarding. This
shift naturally leads to complaints from people who
are used to winning the old game, but will probably
be better for customers who want to use trustworthy
brands and for people who want to earn money by making
ad-supported news and cultural works.
There are people building a new web advertising
system around user-permissioned information, and
they've been doing it for a long time. But until now,
nobody has really wanted to deal with them, because adtech
keeps selling that information, taken from the user
without permission. Tracking protection will be the
motivation for forward-thinking brand people to catch
the flight to quality and shift web ad spending from
the hacking game to the reputation game.
Now that we have better understanding of how user
norms are aligned with the interests of independent
browsers and with the interests of high-reputation
sites, what's next?
Measure the tracking-protected audience
Legit sites are in a strong position to gather
some important data that will shift web ads from a
hacking game to a reputation game. Let's measure
the tracking-protected audience.
Tracking protection is a powerful sign of a
human audience. A legit site can report a tracking
protection percentage for its audience, and any adtech
intermediary who claims to offer advertisers the same
audience, but delivers a suspiciously low tracking
protection number, is clearly pushing a mismatched
or bot-heavy audience and is going to have a harder
time getting away with it.
Showing prospective advertisers your
tracking protection data lets you reveal the tarnish
on the adtech "Holy Grail"—the promise of high-value
eyeballs on crappy sites.
You can't sell advertising without data on who the
audience is. Much of that data will have to come from
the tracking-protected audience. When quality sites
share tracking protection data with advertisers,
that helps expose the adfraud that intermediaries
have no incentive to track down.
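As a sketch of how that comparison might work (the function name, threshold, and all numbers here are hypothetical, for illustration only):

```python
def audience_mismatch(site_protected_pct, claimed_protected_pct, tolerance=10.0):
    """Flag an intermediary whose claimed copy of a site's audience
    shows a suspiciously low tracking-protection rate compared with
    the publisher's own measurement of the same audience."""
    return site_protected_pct - claimed_protected_pct > tolerance

# A legit site measures 25% of its readers as tracking-protected;
# an intermediary claiming "the same audience" reports only 3%.
print(audience_mismatch(25.0, 3.0))  # True: likely mismatched or bot-heavy
```

A real tolerance would have to come from measurement across comparable sites, not a guess like the one above.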
This is an opportunity for service journalism.
Users are already concerned and confused about web
ads. That's an opportunity that some legit sites
such as the Wall Street Journal and The New York
Times are already taking advantage of. The more that
someone learns about how web advertising works, the
more that he or she is motivated to get protected.
But if you don't talk to your readers about tracking protection, who will?
A lot of people are getting caught up today in
publisher-hostile schemes such as adblockers with paid
whitelisting, or adblockers that come with malware.
If you don't recommend a publisher-friendly protection
tool or setting, they'll get a bad one from somewhere else.
Advertising done right can be a growth spiral of
economic growth, reputation building,
and creation of cultural works. It’s one of the most
powerful forces to produce news, entertainment goods,
fiction. Let's fix it.
One of the problems with a bug futures market
is: where do you get the initial investment, or
"stake", for a developer who plans to take on a bug?
In order to buy the FIXED side of a contract and
make a profit when it matures, the developer needs to
invest some cryptocurrency. In a bug futures market,
it takes money to make money.
One possible solution is to use personal tokens,
such as the new Evancoin.
Evancoin is backed by hours of work
performed by an individual (yes, his name is Evan).
If I believe that n hours of work from Evan are
likely to increase the probability of a Bugmark-traded
bug getting fixed, and my expected gain is greater
than _n * (current price of Evancoin)_, then I can
buy the FIXED side of the Bugmark contract,
redeem n Evancoin for work from Evan on the bug, and
sell my Bugmark position at a profit, or wait for it to mature.
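The profitability condition above can be sketched in a couple of lines (the function name and the numbers are hypothetical):

```python
def worth_redeeming(expected_gain, n_hours, evancoin_price):
    """The condition described above: buying the FIXED side and
    redeeming n Evancoin for work is worthwhile only if the expected
    gain exceeds n * (current price of Evancoin)."""
    return expected_gain > n_hours * evancoin_price

# Hypothetical numbers: the FIXED position is expected to gain
# 50 units, and 4 hours of work trade at 10 units per Evancoin.
print(worth_redeeming(expected_gain=50, n_hours=4, evancoin_price=10))  # True: 50 > 40
```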
Evan is not required to accept cryptocurrency exchange
rate risk, and does not have to provide the "stake"
himself. It's the opposite—he has already
sold the Evancoin on an exchange. Of course, he has
an incentive to make as much progress on the bug
as possible, in order to support the future price
of Evancoin. If Evan is working on the bug I selected, he would
also know that he's doing work that is likely to
move the price of the Bugmark contract. So he can
use some of the proceeds from his Evancoin sale to
buy additional FIXED on Bugmark, and take a profit
when I do.
Evan's skills tend to improve, and my understanding
of which tasks would be a profitable use of Evan's
time will tend to increase the more Evancoin I redeem.
So the value of Evancoin to me is likely to continue
rising. Therefore I am probably going to do best if
I accumulate Evancoin in advance of identifying good
bugs for Evan to work on.
Brainfood and Mozilla’s Open Innovation Team Kick Off Text Classification Open Source Experiment
Mozilla’s Open Innovation team is beginning a
new effort to understand more about motivations and
rewards for open source collaboration. Our goal is
to expand the number of people for whom open source
collaboration is a rewarding activity.
An interesting question is: While the server side
benefits from opportunities to work collaboratively,
can we explore them further on the client side, beyond
browser features and their add-on ecosystems? User
interest in “filter bubbles” gives us an
opportunity to find out. The new FilterBubbler project
provides a platform that helps users experiment with
and explore what kind of text they’re seeing on the
web. FilterBubbler lets you collaboratively “tag”
pages with descriptive labels and then analyze any
page you visit to see how similar it is to pages you
have already classified.
You could classify content by age or reading-level
rating, category like “current events” or
“fishing”, or even how much you trust the source
like “trustworthy” or “urban legend”. The
system doesn’t have any bias and it doesn’t
limit the number of tags you apply. Once you build
up a set of classifications you can visit any page
and the system will show you which classification
has the closest statistical match. Just as a web site
maintainer develops a general view of the technologies
and communities of practice required to make a web
site, we will use filter bubble building and sharing
to help build client-side understanding.
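A minimal sketch of that kind of closest-classification check, using bag-of-words vectors and cosine similarity (an illustration of the idea, not FilterBubbler's actual code; the labels and texts are made up):

```python
from collections import Counter
import math

def vectorize(text):
    # Bag-of-words term counts; a real system would tokenize better.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two term-count vectors.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def closest_label(page_text, labeled_pages):
    """Return the user-applied label whose tagged pages are the
    closest statistical match to the page being visited."""
    page_vec = vectorize(page_text)
    return max(labeled_pages,
               key=lambda label: cosine(page_vec,
                                        vectorize(" ".join(labeled_pages[label]))))

corpus = {
    "fishing": ["trout bait rod reel river trout"],
    "current events": ["election vote parliament policy debate"],
}
print(closest_label("new rod and reel for river trout", corpus))  # fishing
```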
The project aims to reach users who are motivated
to understand and maybe change their information
environment: people who want to transform their own
“bubble” space and participate in collaborative
work, but do not have add-on development skills.
Can the browser help users develop better
understanding and control of their media
environments? Can we emulate the path to contribution
that server-side web development has? Please visit
the project and help us find out. FilterBubbler
can serve as a jumping off point for all kinds of
specific applications that can be built on top of
its techniques. Ratings systems, content suggestion,
fact checking and many other areas of interest can all
use the classifiers and corpora that the FilterBubbler
users will be able to generate. We’ll measure our
success by looking at user participation in filter
bubble data sharing, and by how our work gets adapted
and built on by other software projects.
When you release open source software, you have this egalitarian idea that you’re making it available to people who can really use it, who can then build on it to make amazing things....While this is a fine position to take, consider who has the most resources to build on top of a project that requires development. With most licenses, you’re issuing a free pass to corporations and other wealthy organizations, while providing no resources to those needy users. OpenSSL, which every major internet company depends on, was until recently receiving just $2,000 a year in donations, with the principal author in financial difficulty.
This is a good example of one of the really
interesting problems of working in an immature
industry. We have a similar problem in web
advertising. We're over-rewarding the ability to
collect numbers that show the effectiveness of a
marketing project, while under-rewarding the ability
to build brand reputation. Web ads also have an
opportunity to fix incentives. We don't have
our incentives hooked up right yet.
Why does open source have some bugs that stay open longer than careers do?
Why do people have the "I've been coding to create lots of value for big companies for years and I'm still broke" problem?
Why is the meritocracy of open source even more biased than other technical and collaborative fields? (Are we at the bottom of the standings?) Why are we walking away from that many potential contributors?
It is to the benefit of software companies and programmers to claim that software as we know it is the state of nature. They can do stupid things, things we know will result in software vulnerabilities, and they suffer no consequences because people don’t know that software could be well-written. Often this ignorance includes developers themselves. We’ve also been conditioned to believe that software rots as fast as fruit. That if we waited for something, and paid more, it would still stop working in six months and we’d have to buy something new. The cruel irony of this is that despite being pushed to run out and buy the latest piece of software and the latest hardware to run it, our infrastructure is often running on horribly configured systems with crap code that can’t or won’t ever be updated or made secure.
We have two possible futures.
People finally get tired of software's
lethal irresponsibility, and impose a
regulatory regime. Rent-seekers rejoice. Software
innovation as we know it ceases, and we
get something like the pre-breakup Bell
System—you have to be an insider to build
and deploy anything that reaches real people.
The software scene outgrows the "disclaimer of
implied warranty" level of quality, on its own.
How do we get to the second one? One approach is
to use market mechanisms to help quantify software
risk, then enable users with a preference for high
quality and developers with a preference for high
quality to interact directly, not through the filter
of software companies that win by releasing early at
a low quality level.
There is an opportunity here for the kinds of
companies that are now doing open source license
analysis. Right now they're analyzing relatively few
files in a project—the licenses and copyrights.
A tool goes through your software stack, and,
hooray, you don't have anything that depends on
something with an incompatible license, or on a license
that would look bad to the people you want to sell
your company to.
What if that same tool would give you a better
quality number for your stack, based on walking your
dependency tree and looking for weak points?
One important reason is that black or gray hat
security researchers are likely to have extreme
confidentiality requirements, especially when trading
on knowledge from a co-conspirator who may not be
aware of the trade. (A possible positive externality
win from bug futures markets is the potential
to reduce the trustworthiness of underground
vulnerability markets, driving marginal vuln
transactions to the legit market.)
What to do about different kinds of user data interchange:
Data collected without permission: build tools and norms to reduce the amount of reliable data that is available without permission, and report on and show errors in low-quality data that was collected without permission.
Data collected with permission: develop and test new tools and norms that enable people to share data that they choose to share, and offer users incentives and tools that help them choose to share accurate data and correct errors in voluntarily shared data.
Most people who want data about other people still
prefer data that's collected without permission, and
collaboration is something that they'll settle for.
So most voluntary user data sharing efforts will need
a defense side as well. Freedom-loving technologists
have to help people reduce the amount of data that
they allow to be taken from them without permission
in order to get data collectors to listen to people about sharing data.
(I work for Mozilla. None of this is secret. None of this is official Mozilla policy. Not speaking for Mozilla here.)
Setting tracking protection defaults for a browser is
hard. Some activities that the browser might detect
as third-party tracking are actually third-party
services such as single sign-on—so when the
browser sets too high of a level of protection it
can break something that the user expects to work.
Meanwhile, new research from PageFair finds
that "The very large majority (81%) of respondents
said they would not consent to having their behaviour
tracked by companies other than the website they are
visiting." A tracking protection policy that
leans too far in the other direction will also fail to
meet the user's expectations.
So you have to balance two kinds of complaints.
"your dumbass browser broke a site that was working before"
"your dumbass browser let that stupid site do stupid shit"
Maybe, though, if the browser can figure out which
sites the user trusts, you can keep the user happy
by taking a moderate tracking protection approach on
the trusted sites, and a more cautious approach on less trusted sites.
If the user has not interacted with example.com in
the last 30 days, example.com website data and
cookies are immediately purged and continue to be
purged if new data is added. However, if the user
interacts with example.com as the top domain, often
referred to as a first-party domain, Intelligent
Tracking Prevention considers it a signal that the
user is interested in the website and temporarily
adjusts its behavior.
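The quoted rule reduces to a date comparison, sketched here (an illustration of the described policy, not Apple's implementation):

```python
from datetime import date, timedelta

PURGE_AFTER = timedelta(days=30)

def should_purge(last_first_party_interaction, today):
    """Purge a domain's cookies and site data if the user has not
    interacted with it as a first-party site in the last 30 days."""
    return today - last_first_party_interaction > PURGE_AFTER

print(should_purge(date(2017, 8, 1), date(2017, 10, 1)))   # True: more than 30 days ago
print(should_purge(date(2017, 9, 20), date(2017, 10, 1)))  # False: recent first-party visit
```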
It might make more sense to set the trust level,
and the browser's tracking protection defaults,
based on which site the user is on. Will users
want a working "Tweet® this story" button on
a news site they like, and a "Log in with Google"
feature on a SaaS site they use, but prefer to have
third-party stuff blocked on random sites that they
happen to click through to?
How should the browser calculate user trust level?
Sites with bookmarks would look trusted, or sites
where the user submits forms (especially something
that looks like an email address). More testing is
needed, and setting protection policies is still a hard problem.
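As a rough illustration only, a heuristic along those lines might look like this; the signals, weights, and threshold are made-up assumptions, not a shipping browser feature:

```python
def trust_score(bookmarked, form_submissions, typed_email_address):
    """Combine the candidate trust signals mentioned above: bookmarks,
    submitted forms, and something that looks like an email address."""
    score = 0
    if bookmarked:
        score += 2
    score += min(form_submissions, 3)  # cap credit for repeat submissions
    if typed_email_address:
        score += 2
    return score

def protection_mode(score, threshold=3):
    # Moderate tracking protection on trusted sites, cautious elsewhere.
    return "moderate" if score >= threshold else "cautious"

print(protection_mode(trust_score(True, 1, True)))    # moderate
print(protection_mode(trust_score(False, 0, False)))  # cautious
```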
Because typical Facebook ads are shown only to finely
targeted subsets of users, the best way to understand
them is to have a variety of users cooperate to run a
client-side research tool. ProPublica developer Jeff
has written a WebExtension that runs on Mozilla
Firefox and Google Chrome to do just that. I asked
him how the development went.
Q: Who was involved in developing your WebExtension?
A: Just me. But I can't take credit for the
idea. I was at a conference in Germany
a few months ago with my colleague Julia
and we were talking with people who worked
at Spiegel about our work on the Machine Bias
series. We all thought it would be a good idea to look at
political ads on Facebook during the German election
cycle, given what little we knew about what happened
in the U.S. election last year.
Q: What documentation did you use, and what would
you recommend that people read to get started with WebExtensions?
A: I think both Mozilla and Google's documentation
sites are great. I would say that the tooling for
Firefox is much better due to the web-ext tool. I'd
definitely start there (Getting started with
web-ext) the next time around.
Basically, web-ext takes care of a great deal of the
fiddly bits of writing an extension—everything
from packaging to auto reloading the extension when
you edit the source code. It makes the development
process a lot more smooth.
Q: Did you develop in one browser first and then test in the other, or test in both as you went along?
A: I started out in Chrome, because most of the
users of our site use Chrome. But I started
using Firefox about halfway through because
of web-ext. After that, I sort of ping ponged
back and forth because I was using source maps and
each browser handles those a bit differently. Mostly the extension
worked pretty seamlessly across both browsers. I had
to make a couple of changes but I think it took me a
few minutes to get it working in Firefox, which was
a pleasant surprise.
Q: What are you running as a back end service to collect ads submitted by the WebExtension?
A: We're running a Rust server that collects the ads and
uploads images to an S3 bucket. It is my first Rust
project, and it has some rough edges, but I'm pretty
much in love with Rust. It is pretty wonderful to know
that the server won't go down because of all the built
in type and memory safety in the language. We've open
sourced the project, and I could use help if anyone wants
to contribute: Facebook Political Ad Collector on GitHub.
Q: Can you see that the same user got a certain
set of ads, or are they all anonymized?
A: We strive to clean the ads of all identifying
information. So, we only collect the id of the ad,
and the targeting information that the advertiser
used. For example, people 18 to 44 who live in
Q: What are your next steps?
A: Well, I'm planning on publishing the ads we've
received on a web site, as well as a clean dataset
that researchers might be interested in. We also plan
to monitor the Austrian elections, and next year is
pretty big for the U.S. politically, so I've got my
work cut out for me.
Q: Facebook has refused to release some "dark"
political ads from the 2016 election in the USA. Will
your project make "dark" ads in Germany visible?
A: We've been running for about four days, and so far
we've collected 300 political ads in Germany. My hope
is we'll start seeing some of the more interesting
ones from fly by night groups. Political advertising
on sites like Facebook isn't regulated in either the
United States or Germany, so on some level just having
a repository of these ads is a public service.
Q: Your project reveals the "dark" possibly deceptive
ads in Chrome and Firefox but not on mobile
platforms. Will it drive deceptive advertising away
from desktop and toward mobile?
A: I'm not sure, that's a possibility. I can say that
Firefox on Android allows WebExtensions and I plan
on making sure this extension works there as well,
but we'll never be able to see what happens in the
native Facebook applications in any sort of large
scale and systematic way.
Q: Has anyone from Facebook offered to help with the project?
A: Nope, but if anyone wants to reach out, I would love the help.
What's the difference between a futures market
on software bugs and an open source bounty system
connected to the issue tracker? In many simple cases
a bug futures market will function in a similar way,
but we predict that some qualities of the futures
market will make it work differently.
Open source bounty systems have extra transaction costs of assigning credit for a fix.
Open source bounty systems can incentivize contention over who can submit a complete fix, when we want to be able to incentivize partial work and meta work.
Incentivizing partial work and meta work (such as bug
triage) would be prohibitively expensive to manage
using bounties claimed by individuals, where each
claim must be accepted or rejected. The bug futures
concept addresses this with radical simplicity:
the owners of each side of the contract are tracked
completely separately from the reporter and assignee
of a bug in the bug tracker.
And bug futures contracts can be traded in advance of
expiration. Any work that you do that meaningfully
changes the probability of the bug getting fixed by
the contract closing date can move the price.
You might choose to buy the "fixed" side of the
contract, do some work that makes it look more
fixable, and sell at a higher price. Bugmark might make
it practical to do "day trading" of small steps,
such as translating a bug report originally posted
in a language that the developers don't know, helping
a user submit a log file, or writing a failing test.
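A toy example of such a trade, with prices in hypothetical cents per contract unit:

```python
def day_trade_profit_cents(buy_cents, sell_cents, quantity):
    """Profit from buying the "fixed" side of a bug futures contract,
    doing work that raises the market's estimate that the bug gets
    fixed, and selling before the contract matures."""
    return (sell_cents - buy_cents) * quantity

# Buy "fixed" at 30 cents, translate the bug report and write a
# failing test, then sell at 55 cents after the price moves.
print(day_trade_profit_cents(30, 55, 100))  # 2500
```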
With the right market design, participants in
a bug futures market have the incentive to talk their
book by sharing partial work and metadata.
Yes, user tracking is creepy, and yes, collecting user
information without permission is wrong. But read
on for what could be a better approach for sites that
can make a bigger difference.
First of all, Twitter is so far behind in their attempts
to do surveillance marketing that they're more funny
and heartening than ominous. If getting targeted by
one of the big players is like getting tracked down
by a pack of hunting dogs, then Twitter targeting is
like watching a puppy chew on your sock. Twitter has
me in their database as...
Owner of eight luxury cars and a motorcycle.
Medical doctor advising patients about eating High Fructose Corn Syrup.
Owner of prime urban real estate looking for financing to build a hotel.
Decision-maker for a city water system, looking to read up on the pros and cons of cast iron and concrete pipes.
Active in-market car shopper, making all decisions
based on superficial shit like whether the car has
Beats® brand speakers in the doors. (Hey, where am
I supposed to park car number 9?)
Advice from "me" as I appear on Twitter: As
your doctor, I advise you to cut out HFCS entirely
unless you're at a family thing where you should just
eat a little and not be an ass about it. When you're
in town, stay at my hotel, where the TV is a 4k
monitor on an arm that moves to make it usable from
the sit-stand desk, and the WiFi is fast and free.
No idea on the city water pipe thing though.
Funny wrong Twitter ad targeting
is one of my reliable Internet amusements for the day.
But that's not why I'm not especially concerned
with tagging quoted Tweets. Just doing that doesn't
protect this site's visitors from retargeting schemes
on other sites.
And every time someone clicks on a retargeted
ad from a local business on a social site
(probably Facebook, since more people spend more
time there) then that's 65 cents or whatever of
marketing money that could have gone to local news,
bus benches, Little League, or some other sustainable
marketing project. (That's not
even counting the medium to heavy treason
that makes me really uncomfortable about seeing money
move in Facebook's direction.)
So, instead of messing with quoted Tweet tagging, I set up this script:
(If you are viewing this site from an unprotected
browser and still not seeing the warning, it means
that your browser has not yet visited enough
domains with the Aloodo script to detect that
you're trackable. Take a tracking protection
test to expose your
browser to more fake tracking, then try again.)
If the other side wants it hidden, then reveal it
Surveillance marketers want tracking to happen behind
the scenes, so make it obvious. If you have a browser
or privacy tool that you want to recommend, it's easy
to put in the link. Every retargeted ad impression
that's prevented from happening is more marketing
money to pay for ad-sponsored resources that users
really want. I know I can't get all the users of
this site perfectly protected from all surveillance
marketing everywhere, but hey, 65 cents is 65 cents.
Bob Hoffman's new book is out! Go click on this quoted Tweet, and do what it says.
What's the difference between a futures market on
software bugs and a prediction market? We don't know
how much a bug futures market will tend to act like
a prediction market, but here are a few guesses about
how it may turn out differently.
Prediction markets tend to have a relatively small
number of tradeable questions, with a large number
of market participants on each side of each question.
Each individual bug future is likely to have a small
number of participants, at least on the "fixed" side.
Prediction markets typically have participants
who are not in a position to influence
the outcome. For example, The Good Judgment
Project recruited regular people to trade on worldwide events.
Bug futures are designed to attract participants
who have special knowledge and ability to change the outcome.
Prediction markets are designed for gathering
knowledge. Bug futures are for incentivizing
tasks. A well-designed bug futures market will
monetize haters by turning a "bet" that a
project will fail into a payment that makes it more
likely to succeed. If successful in this, the market will have this feature in
common with Alex Tabarrok's Dominant Assurance Contract.
Prediction markets often implement conditional
trading. Bug markets rely on the underlying bug
tracker to maintain the dependency relationships
among bugs, and trades on the market can reflect the
strength of the connections among bugs as seen by market participants.
What's the difference between spam and real advertising?
Advertising is a signal-for-attention bargain. People
pay attention to advertising that carries some
hard-to-fake information about the seller's intentions
in the market.
says, "What seems undoubtedly true is that humans,
like peahens, attach significance to a piece of
communication in some way proportionally to the cost
of generating or transmitting it."
If I get spam email, that's clearly signal-free
because it costs practically nothing. If I see a
magazine ad, it carries signal because I know that
it cost money to place.
Today's web ads are more like spam, because they
can be finely targeted enough that no significant
advertiser resources stand behind the message I'm
looking at. (A bot might have even written the copy.)
People don't have to be experts in media buying
to gauge the relative costs of different ads, and
filter out the ones that are clearly micro-targeted.
Our data, and data about us, is the crude that Facebook and Google extract, refine and sell to advertisers. This by itself would not be a Bad Thing if it were done with our clearly expressed (rather than merely implied) permission, and if we had our own valves to control personal data flows with scale across all the companies we deal with, rather than countless different valves, many worthless, buried in the settings pages of the Web’s personal data extraction systems, as well as in all the extractive mobile apps of the world.
Today's web advertising business is a hacking
contest. Whoever can build the best system
to take personal information from the user
wins, whether or not the user knows about it.
(And if you challenge adfraud and adtech hackers
to a hacking contest, you can expect to come in third.)
As users get the tools to control who they share their
information with (and they don't want to leak it to
everyone) then the web advertising business has to
transform into a reputation contest. Whoever can
build the most trustworthy place for users to choose
to share their information wins.
This is why the IAB is freaking out about privacy
regulation, by the way. IAB member companies are winning
at hacking and failing at building reputation.
(I want to do a user focus group where we show
people a random IAB company's webinar, then
count how many participants ask for tracking
protection support afterward.) But regulations
are a sideshow. In the long run regulators
will support the activities that legit businesses need.
So Doc has an important point. We have a big
opportunity to rebuild important parts of the web
advertising stack, this time based on the assumption
that you only get user data if you can convince the
user, or at least convince the maintainers of the
user's trusted tools, that you will use the data in
a way that complies with that user's norms.
One good place to check is: how many of a site's
readers are set up with protection tools that make
them "invisible" to Google Analytics and Chartbeat?
And how many of the "users" who sites are making
decisions for are just bots? If you don't have good
answers for those, you get dumbassery like "pivot to
video," which is a polite expression for "make videos for
bots because video ad impressions are worth enough
money to get the best bot developers interested."
Yes, "pivot to video" is still a thing.
The good news here is that legit publishers, trying
to transform web advertising from a hacking game
into a reputation game, don't have to do a perfect
job right away. Incrementally make reputation-based,
user-permissioned advertising into a better and better
investment, while adfraud keeps making unpermissioned
tracking into a worse and worse investment. Then wait
for some ambitious marketer (and marketers are always
looking for a new angle to reinvent Marketing) to
discover the opportunity and take credit for it.
User privacy is at risk from both hackers and lawyers.
Right now, lawyers are better at attacking lists,
and hackers are better at modifying tracker behavior
to get around protections.
The more I think about it, the more that I think it's
counterproductive to try to come up with one grand
unified set of protection rules or cookie policies.
Spam filters don't submit their scoring rules to
ANSI—spammers would just work around them.
Search engines don't standardize and publish their
algorithms, because gray hat SEOs would just use
the standard to make useless word salad pages that score well.
And different people have different needs.
If you're a customer service rep at an HERBAL
ENERGY SUPPLEMENTS company, you need a spam filter
that can adjust for your real mail. And any user
of a site that has problems with list-based tracking
will need to have the browser adjust, and rely more
on cleaning up third-party state after a session
instead of blocking outright.
Does your company intranet become unusable if
you fail to accept third-party tracking that
comes from an internal domain that your employer
acquired and still has some services running on?
Browser developers can't decide up front, so the
browser will need to adjust. Every change breaks something for someone.
That means the browser has to work to help the user
pick a working set of protection methods and rules.
This will need to include both list-based protection
and monitoring tracking behavior, like Privacy
Badger does, because hackers and lawyers are good at getting around any fixed list.
Limit data sent to third-party sites
Apple Safari does this, so it's
likely to get easier to do cookie double-keying
without breaking sites.
Scramble or delete unsafe data
If a tracking cookie or other identifier
does get through, delete or scramble it on
leaving the site or later, as the Self-Destructing
Cookies extension does. This could be a good backup for
when the browser "learns" that a user needs some
third-party state to do something like a shopping
cart or comment form, but then doesn't want the info
to be used for "ads that follow me around" later.
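A minimal sketch of that cleanup policy. The origin names here are made up, and a real browser would learn the "needed" set from user behavior rather than use a hard-coded list.

```python
import secrets

# Hypothetical set of third-party origins the user has shown they
# need (shopping cart, comment form); everything else gets dropped.
NEEDED_THIRD_PARTIES = {"cart.example-shop.com"}

def end_of_session(origin, cookies):
    """Return the post-session fate of each third-party cookie:
    None means delete; a new random value means scramble."""
    if origin not in NEEDED_THIRD_PARTIES:
        # Not needed for anything the user does: delete outright.
        return {name: None for name in cookies}
    # Needed during the session, but scrambled afterward so the
    # identifier can't be used for "ads that follow me around."
    return {name: secrets.token_hex(16) for name in cookies}

kept = end_of_session("cart.example-shop.com", {"sessionid": "abc123"})
dropped = end_of_session("tracker.example-ads.com", {"uid": "xyz"})
```

Scrambling rather than deleting lets the site keep working on the next visit while breaking the link to the old tracking profile.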
All of the conversations on the newspaper side have been focused on how can we join the advertising technology ecosystem. For example, how can a daily newspaper site in Bismarck, North Dakota deliver targeted advertising to a higher-value soccer mom? And none of the newspapers them have considered the fact that when they join that ecosystem they are enabling spam sites, fraudulent sites – enabling those sites to get a higher CPM rate by parasitically riding on the data collected from the higher-value newspaper sites.
The field of Search Engine Optimization has white
hat SEO, black hat SEO, and gray hat SEO.
White hat SEO helps a user get a better
search result, and complies with search engine
policies. Examples include accurately using the
same words that users search on, and getting honest
links.
Black hat SEO is clearly against search engine
policies. Link farming, keyword stuffing, cloaking,
and a zillion other schemes. If they see you doing it,
your site gets penalized in search results.
Gray hat SEO is everything that doesn't help the user
get a better search result, but technically doesn't
violate a search engine policy.
Most SEO experts advise you not to put a lot of time
and effort into gray hat, because eventually the
search engines will notice your gray hat scheme and
start penalizing sites that do it. Gray hat is just
stuff that's going to be black hat when the search
engines figure it out.
This scheme seems to be intended to get around
existing third-party cookie protection, which is
turned on by default in Apple Safari and available
in other browsers.
But how long will it work?
Maybe the browser of the future won't run a "kangaroo
cookie court" but will ship with a built-in "kangaroo
law school" so that each copy of the browser will
develop its own local "courts" and its own local
"case law" based on the user's choices. It will
become harder to predict how long any single gray
hat adtech scheme will continue working.
In the big picture: in order to sell advertising you
need to give the advertiser some credible information
on who the audience is. Since the "browser wars"
of the 1990s, most browsers have been bad at
protecting personal information about the user,
so web advertising has become a game where a whole
bunch of companies compete to covertly capture as
much user info as they can.
Today, browsers are getting better at implementing
people's preferences about sharing their
information. The result is a change in the rules of
the game. Investment in taking people's personal
info is becoming less rewarding, as browsers compete
to reflect people's preferences. (That patent
will be irrelevant thanks to browser updates
long before it expires.)
Adfraud is the other half of this story. Fraudbots
are getting smarter at creating human-looking
ad impressions just as humans are getting better
protected. If you think that a web publisher's
response to harder-to-detect bots, viewing more
high-CPM video ads, should be "pivot to video!!1!!" I
don't know if I can help you.
And investments in
building sites and brands that are trustworthy
enough for people to want to share their
information will tend to become more rewarding.
(This shift naturally leads to complaints
from people who are used to winning the old game,
but will probably be better for customers who want to
use trustworthy brands and for people who want to earn
money by making ad-supported news and cultural works.)
As far as I know, there are three ways to match an
ad to a user.
User intent: Show an ad based on what the user is
searching for. Old-school version: the Yellow Pages.
Context: Show an ad based on where the user is, or
what the user is interested in. Old-school versions:
highway billboards (geographic context), specialized
magazines (interest context).
User identity: Show an ad based on who the
user is. Old-school version: direct mail.
Most online advertising is matched to the user
based on a mix of all three. And different
players have different pieces of the
action for each one. For user intent,
search engines are the gatekeepers. The other
winners from matching ads to users by intent are browsers and mobile platforms, who get
to set their default search engine. Advertising based
on context rewards the owners of reputations
for producing high-quality news, information,
and cultural works. Finally, user identity
now has a whole Lumascape
of vendors in a variety of
categories, all offering to help identify users in some
way. (the Lumascape is rapidly consolidating, but
that's another story.)
Few of the web ads that you
might see today are matched to you purely based on
one of the three methods. Investments in all three
tend to shift as the available technology, and the
prevailing norms and laws, change.
The basic functionality of the internet, which is built on data exchanges between a user’s computer and publishers’ servers, can no longer be used for the delivery of advertising unless the consumer agrees to receive the ads – but the publisher must deliver content to that consumer regardless.
This doesn't look accurate. I don't know of any
proposal that would require publishers to serve
users who block ads entirely. What Rothenberg
is really complaining about is that the proposed
regulation would limit the ability of sites and
ad intermediaries to match ads to users based on
user identity, forcing them to rely on user
intent and context. If users choose to block
ads delivered from ad servers that use their personal
data without permission, then sites won't be able to
refuse to serve them the content, but will be able to
run ads that are relevant to the content of the site.
As far as I can tell, sites would still be able to
pop a "turn off your ad blocker" message in place of
a news story if the user was blocking an ad placed
purely by context, magazine style.
Privacy regulation is not so much an attack on the
basic functionality of the Internet, as it
is a shift that lowers the return on investment on
knowing who the user is, and drives up the return on
investment on providing search results and content.
That's a big change in who gets paid: more money
for search and for trustworthy content brands,
and less for adtech intermediaries that depend on
user tracking.
Advertising: a fair deal for the user?
That depends. Search advertising is clearly the result
of a user choice. The user chooses to view ads that
come with search results, as part of choosing to
do a search. As long as the ads are marked as ads,
it's pretty obvious what is happening.
The same goes for ads placed in context.
The advertiser trades economic signal, in
the form of costly support of an ad-supported
resource, for the user's attention. This is
common in magazine and broadcast advertising,
and when you use a site with one of the (rare)
pure in-context ad platforms such as Project
Wonderful, it works about the same way.
The place where things start to get
problematic is ads based on user identity,
placed by tracking users from site to site.
The more that users learn how their data is
used, the less tracking they tend to want. In one
survey, 66% of adult Americans said they do not want
marketers to tailor advertisements to their
interests, and when the researchers explained
how ad targeting works, the percentage went up.
If users, on average, dislike tracking enough that
sites choose to conceal it, then that's pretty good
evidence that sites should probably ask for permission
to do it. Whether this opt-in should be enforced by
law, technology, or both is left as an exercise for
the reader.
So what happens if, thanks to new regulations,
technical improvements in browsers, or both,
cross-site tracking becomes harder? Rothenberg
insists that this transformation would end
ad-supported sites, but the real effects
would be more complex. Ad-supported sites are
already getting a remarkably lousy share of
ad budgets. “The supply chain’s complexity
and opacity net digital advertisers as little
as 30 cents to 40 cents of working media for
every dollar spent,” ANA CEO Bob Liodice said.
Advertising on high-reputation sites tends to be a
better investment than using highly intermediated,
fraud-prone stacks
of user tracking to try to chase good users to cheap
sites. But crap ad inventory, including fraudulent
and brand-unsafe stuff, persists. The crap only has
market value because of user tracking, and it drives
down the value of legit ads. If browser improvements
or regulation make knowledge of user identity
rarer, the crap tends to leave the market and the
value of user intent and context goes up.
Rothenberg speaks for today's adtech,
which despite all its acronyms and Big
Data jive, is based on a pretty boring business
model: find a user on a legit site, covertly follow the
user to a crappy site where the ads are cheaper,
sell an ad impression there, profit. Of course he's
entitled to make the case for enabling IAB members to
continue to collect their "adtech tax." But moving ad
budgets from one set of players to another doesn't end
ad-supported sites, because marketers adjust. That's
what they do. There's always something new in
marketing, and budgets move around. What happens
when privacy regulations shift the incentives,
and make more of advertising have to depend on
trustworthy content? That's the real question here.
Moral values in society are
collapsing? Really? Elizabeth Stoker Bruenig writes:
The baseline moral values of poor people do not,
in fact, differ that much from those of the rich.
(read the whole thing).
Unfortunately, if you read the fine print,
it's more complicated than that. Any market
economy depends on establishing trust between
people who trade with each other. Tim Harford writes:
Being able to trust people might seem like a pleasant luxury, but economists are starting to believe that it’s rather more important than that. Trust is about more than whether you can leave your house unlocked; it is responsible for the difference between the richest countries and the poorest.
Somehow, over thousands of years, business
people have built up a set of norms about
high-status and low-status business activities.
Craftsmanship, consistent supply of high-quality
staple goods, and construction of noteworthy
projects are high-status activities. Usury and
gambling are examples of low-status activities. (You make
your money in quarters, gambling with retired people?
You lend people $100 until Friday at a 300% interest
rate? No club invitation for you.)
Somehow, though, that is now changing in the USA.
Those who earn money through deception now have seats
at the same table as legitimate business. Maybe
it started with the shift into "consumer credit"
by respectable banks. But why were high-status
bankers willing to play loan shark to begin
with? Something had to have been building, culturally.
(It started too early to blame the Baby Boomers.)
We tend to blame information technology companies
for complex, one-sided Terms of Service and EULAs,
but it's not so much a tech trend as it is a general
business culture trend. It shows up in tech fast,
because rapid technology change provides cover and
concealment for simultaneous changes in business
terms. US business was rapidly losing its connection
to basic norms when it was still moving at the speed
of FedEx and fax. (You can't say, all of a sudden,
"car crashes in existing fast-food drive-thrus are
subject to arbitration in Unfreedonia" but you can
stick that kind of term into a new service's ToS.)
There's some kind of relativistic effect going on.
Tech bros just seem like bigger douchebags because
they're moving faster.
Regulation isn't the answer. We have a system in
which business people can hire lobbyists to buy the
laws and regulations we want. The question is whether
we're going to use our regulatory capture powers
in a shortsighted, society-eroding hustler way,
or in a conservative way. Economic conservatism
means not just limiting centralized state control
of capital, but preserving the balance among all
the long-standing stewards of capital, including
households, municipalities, and religious and
educational institutions. Economic conservatism and
radical free-marketism are fundamentally different.
People blame trashy media for the erosion of norms
among the poor, so let's borrow that explanation for
the erosion of norms among the rich as well. Maybe
our problem with business norms results from the
globalization and sensationalism of business
media. Joe CEO isn't just the most important corporate
leader of Mt. Rose, MN, any more—on a global
scale he's just another broke-ass hustler.
One of the common oversimplifications in discussing
open-source software licenses is that copyleft
licenses are "idealistic" while non-copyleft licenses
are "pragmatic." But that's not all there is to it.
Instead of treating the downstream
developer's employer as a hive mind,
it can be more productive to assume good
faith on the part of the individual who
intends to contribute to the software,
and think about the license from the
point of view of a real person.
Releasing source for a derivative work costs time
and money. The well-intentioned "downstream"
contributor wants his or her organization to make
those investments, but he or she has to make a case
for them. The presence of copyleft helps steer
the decision in the right direction. Jane Hacker
at an organization planning to release a derivative
work can say, matter-of-factly, "we need to comply
with the upstream license" if copyleft is involved.
The organization is then more likely to do the right
thing. There are always violations, but the license
is a nudge in the right direction.
(The extreme case is university licensing offices.
University-owned software patents can exclude a
graduate student from his or her own project when
the student leaves the university, unless he or she
had the foresight to build it as a derivative work
of something under copyleft.)
Copyleft isn't a magic commons-building tool, and it
isn't right for every situation. But it can be enough
to push an organization over the line. (One place
where I worked had to do a source release for one
dependency licensed under GPLv2, and it turned out to
be easiest to just build one big source code release
with all the dependencies in it, and offer that.)
Least surprising news story ever:
The Campaign Against Facebook And
Google's Ad "Duopoly" Is Going Nowhere.
Independent online publishers can't beat the
big surveillance marketing companies at surveillance
marketing? How about they try to beat Amazon and
Microsoft at cloud services, or Apple and Lenovo
at laptop computers? There are possible winning
strategies for web publishers, but doing the same as
the incumbents with less money and less data is not
one of them.
Meanwhile, from an investor point of view: It’s the
Biggest Scandal in Tech (and no one’s talking about it).
Missing the best investment advice: get out of any
B-list adtech company that is at risk of getting
forced into a low-value acquisition by a sustained
fraud story. Or short it and research the fraud
yourself.
Apple’s Upcoming Safari Changes Will Shake Up Ad Tech.
Not surprisingly, Facebook and Amazon are the big
winners in this change. Most of their users come every
day or at least every week. And even the mobile users
click on links often, which, on Facebook, takes them
to a browser. These companies will also be able to
buy ad inventory on Safari at lower prices because
many of the high-dollar bidders will go away.
A good start by Apple, but other browsers can do
better. (Every click on a Facebook ad from a local
business is $0.65 of marketing money that's not going
to local news, Little League sponsorships, and other
local causes.)
A lot of privacy people these days sound like a little
kid arguing with a sibling. You're going to be in
big trouble when Dad gets home!
Dad, here, is the European Union,
who's going to put the General Data Protection
Regulation foot down, and then, oh, boy, those naughty
surveillance marketers are going to catch it, and
wish that they had been listening to us about privacy
all along.
The problem is that perfectly normal businesses are using
GDPR-violating sneaky tracking pixels
and other surveillance marketing as part of their daily
operations.
As the GDPR deadline approaches, surveillance
marketers in Europe are going to sigh and
painstakingly explain to European politicians
that of course this GDPR thing isn't going
to work. "You see, politicians, it's an example
of political overreach that completely conflicts
with technical reality." European surveillance
marketers will use the same kind of language
about GDPR that the freedom-loving
side used when we talked about the proposed
SOPA law.
It's just going to Break the Internet! People will
lose their jobs!
The result is predictable. GDPR will be delayed,
festooned with exceptions, or both, and the hoped-for
top-down solution to privacy problems will not come.
There's no shortcut. We'll only get a replacement for
surveillance marketing when we build the tools, the
networks, the business processes, the customer/voter
norms, and then the political power.
Procter & Gamble makes products that help you comply
with widely held cleanliness norms.
Digital ads are micro-targeted to you as an individual.
That's the worst possible brand/medium fit. If you
don't know that the people who expect you to keep
your house or body clean are going to be aware of
the same product, how do you know whether to buy it?
Just thinking about approaches to incentivizing
production of information goods, and where futures
markets might fit in.
Article 1, Section 8, of the US Constitution still covers this one best.
To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries;
We know about the problems with this one. It
encourages all kinds of rent-seeking and
freedom-menacing behavior by the holders of
property interests in information. And the
transaction costs are too high to incentivize the
production of some useful kinds of information.
Commoditize the complement
Joel Spolsky explained it best, in Strategy Letter V:
Smart companies try to commoditize their
products’ complements. (See also: the list
of business models in the Some Easily Rebutted
Objections to GNU's Goals section of the GNU
Manifesto.)
This one has been shown to work for some categories
of information goods but not others. (We have Free
world-class browsers and OS kernels because search
engines and hardware are complements. We don't have
free world-class software in categories such as CAD.)
Signaling
Release a free information good as a way to signal
competence in performing a service, or at least a
large investment by the author in persuading others
that the author is competent. Works at the level
of the individual labor market and in consulting.
Don't know if this works in other areas.
Game and market mechanisms
With "gamified crowdsourcing" you can earn play
rewards for very low transaction costs, and contribute
very small tasks.
Higher transaction costs are associated with "crowdfunding" which
sounds similar but requires more collaboration and administration.
In the middle, between crowdsourcing and crowdfunding,
is a niche for a mechanism with lower transaction
costs than crowdfunding but more rewards than
crowdsourcing.
By using the existing bug tracker to
resolve contracts, a bug futures market keeps
transaction costs low. By connecting to an existing
cryptocurrency, a bug futures market enables a kind
of reward that is more liquid, and transferable.
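An illustrative sketch of that idea, with made-up names and numbers: a futures contract on a bug outcome, resolved directly from the bug tracker's status field. Because the tracker everyone already uses acts as the oracle, no separate arbitration step is needed, which is what keeps transaction costs low.

```python
from dataclasses import dataclass

@dataclass
class BugFuture:
    bug_id: int
    deadline: int        # resolution time, e.g. a Unix timestamp
    stake_fixed: float   # paid in by the side betting "fixed by deadline"
    stake_open: float    # paid in by the side betting "still open"

    def resolve(self, tracker_status: str, now: int) -> str:
        """Decide which side collects the whole pot, based only on
        the tracker's status field and the clock."""
        if tracker_status == "FIXED" and now <= self.deadline:
            return "fixed-side"
        if now > self.deadline:
            return "open-side"
        return "unresolved"  # deadline not reached, bug not fixed yet

contract = BugFuture(bug_id=1234, deadline=1_700_000_000,
                     stake_fixed=3.0, stake_open=7.0)
```

Whichever side wins collects both stakes, so the 3.0/7.0 split above pays 10.0 to the holder of the winning side.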
We don't know how wide the bug futures niche is. Is it a tiny space between
increasingly complex tasks that can be resolved by crowdsourcing and
increasingly finer-grained crowdfunding campaigns?
Or are bug futures capable of achieving low enough
transaction costs to be an attractive incentivization
mechanism for a lot of tasks that go into a variety of
projects?
I thought it would be fun to try Twitter ads, and, not
surprisingly, I started getting fake followers pretty
quickly after I started a Twitter follower campaign.
Since I'm paying nine cents a head for these
followers, I don't want to get ripped off.
So naturally I put in a support ticket to Twitter,
and just heard back.
Thanks for writing in about the quality of followers and engagements. One of the advantages of the Twitter Ads platform is that any RTs of your promoted ads are sent to the retweeting account's followers as an organic tweet. Any engagements that result are not charged, however followers gained may not align with the original campaign's targeting criteria. These earned followers or engagements do show in the campaign dashboard and are used to calculate cost per engagement, however you are not charged for them directly.
Twitter also passes all promoted engagements through a filtering mechanism to avoid charging advertisers for any low-quality or invalid engagements. These filters run on a set schedule so the engagements may show in the campaign dashboard, but will be deducted from the amount outstanding and will not be charged to your credit card.
If you have any further questions, please don't hesitate to reply.
That's pretty dense San Francisco speak, so let
me see if I can translate it to the equivalent for a
breakfast cereal company.
Hey, what are these rat turds doing in my raisin bran?
Thanks for writing in about the quality of your raisin
bran eating experience. One of the advantages of the
raisin bran platform is that during the production
process, your raisin bran is made available to our
rodent partners as an organic asset.
I paid for raisin bran, so why are you selling me raisin-plus-rat-turds bran?
Any ingredients that result from rodent engagement
are not charged, however ingredients gained may not
align with your original raisin-eating criteria.
Can I have my money back?
We pass all raisin bran sales through a filtering
mechanism to avoid charging you for invalid
ingredients. The total weight of the product, as printed on the
box, includes these ingredients, but the weight of
invalid ingredients will be deducted from the amount
charged to your credit card.
So how can I tell which rat turds are "organic" so
I'm not paying for them, and which are the ones that
you just didn't catch and are charging me for?
Buying Twitter followers: Fiverr or Twitter?
On Fiverr, Twitter followers are about half a cent
each ($5/1000). On Twitter, I'm getting followers
for about 9 cents each. The Twitter price is about
18x the Fiverr price.
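The arithmetic, worked out:

```python
# Fiverr followers at $5 per 1000 versus Twitter Ads followers
# at roughly $0.09 each.
fiverr_price = 5.00 / 1000        # $0.005 per follower
twitter_price = 0.09              # $0.09 per follower
ratio = twitter_price / fiverr_price
print(round(ratio))               # 18
```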
But every follower that someone else buys on Fiverr
has to be "aged" and disguised in order to look
realistic enough not to get banned. The bot-herders
have to follow legit follower campaigns such as
mine and not just their paying customers.
(I call them "sleepers." They do all sorts of natural things, like following suggested accounts and tweeting quotes, while aging into the "trusted" zone.)
If Twitter is selling those "follow" actions to
me for nine cents each, and the bot-herder is only
making half a cent, how is Twitter not making more
from bogus Twitter followers than the bot-herders are?
If you're verified on Twitter, you may not be
seeing how much of a shitshow their ad business
is. Maybe they're going to have to sell Twitter
sooner than I thought.
Bryan Alexander has a good description
of an "open web" reading pipeline in
I defy the world and go back to RSS.
I'm all for the open web, but 40 separate folders for
400 feeds? That would drive me nuts. I'm a lumper,
not a splitter. I have one folder for 12,387 feeds.
My chosen way to use RSS (and one of the
great things about RSS is you can choose
UX independently of information sources) is a
"scored river". Something like Dave Winer's River of
concept, that you can navigate by just scrolling,
but not exactly a river of news.
with full text if available, but without
images. I can click through if I want the images.
items grouped by score, not feed. (Scores assigned and
managed by a dirt-simple algorithm where a feed
"invests" a percentage of its points in every link,
and the investments pay out in a higher score for
that feed if the user likes a link.)
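The "dirt-simple" point-investment scoring can be sketched like this; the invest rate and payout numbers are illustrative assumptions, not values from the original.

```python
INVEST_RATE = 0.01   # each feed stakes 1% of its points on every link
PAYOUT = 2.0         # a liked link returns double the stake

feed_points = {"feed-a": 100.0, "feed-b": 100.0}
link_stakes = {}     # link URL -> {feed name: staked points}

def publish(feed, url):
    """A feed 'invests' a fraction of its points in a link it carries."""
    stake = feed_points[feed] * INVEST_RATE
    feed_points[feed] -= stake
    link_stakes.setdefault(url, {})[feed] = stake

def like(url):
    """User likes a link: pay out every feed that invested in it."""
    for feed, stake in link_stakes.pop(url, {}).items():
        feed_points[feed] += stake * PAYOUT

publish("feed-a", "https://example.com/story")
publish("feed-b", "https://example.com/story")
like("https://example.com/story")
```

Feeds whose links the user likes accumulate points and so rank their future items higher; feeds the user ignores bleed points away.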
I also put the byline at the bottom of each item.
Anyway, one thing I have found out about manipulating
my own filter bubble is that linklog feeds and
blogrolls are great inputs. So here's a linklog
(from the live site, which annoys everyone except me).
The Al Capone theory of sexual harassment: Initially, the connection eluded us: why would the same person who made unwanted sexual advances also fake expense reports, plagiarize, or take credit for other people’s work?
Jon Tennant - The Cost of Knowledge: But there’s something much more sinister to consider; recently a group of researchers saw fit to publish Ebola research in a ‘glamour magazine’ behind a paywall; they cared more about brand association than the content. This could be life-saving research, why did they not at least educate themselves on the preprint procedure....
2014: When there's no other dude in the car, the cost of taking an Uber anywhere becomes cheaper than owning a vehicle. So the magic there is, you basically bring the cost below the cost of ownership for everybody, and then car ownership goes away.
2018 (?): When there's no other dude in the fund, the cost of financing innovation anywhere becomes cheaper than owning a portfolio of public company stock. So the magic there is, you basically bring the transaction costs of venture capital below the cost of public company ownership for everybody, and then public companies go away.
Could be a thing for software/service companies faster than we might think. Futures contracts on bugs→equity crowdfunding and pre-sales of tokens→bot-managed follow-on fund for large investors.
Here's a probably stupid idea: give bots the right
to accept proposed changes to a software project.
Can automation encourage less burnout-provoking
ways of maintaining a project?
A set of bots could interact in interesting ways.
Regression-test-bot: If a change only adds a
test, applies cleanly to both the current version
and to a previous version, and the previous version
passes the test, accept it, even if the test fails
for the current version.
Harmless-change-bot: If a change is below a
certain size, does not modify existing tests, and
all tests (including any new ones) pass, accept it.
Revert-bot: If any tests are failing on the
current version, and have been failing for more than
a certain amount of time, revert back to a version
where they passed.
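The three bot policies above can be sketched as decision functions. The `Change` fields and thresholds here are assumptions, not any real CI system's API.

```python
from dataclasses import dataclass

@dataclass
class Change:
    size: int                   # lines changed
    only_adds_test: bool
    modifies_existing_tests: bool
    applies_to_current: bool    # patch applies cleanly to current version
    applies_to_previous: bool   # patch applies cleanly to a previous version
    passes_on_previous: bool    # the new test passes on the previous version
    all_tests_pass: bool        # full suite passes on current + change

def regression_test_bot(c: Change) -> bool:
    """Accept a pure regression test that an older version passed,
    even if it fails now: it documents a real regression."""
    return (c.only_adds_test and c.applies_to_current
            and c.applies_to_previous and c.passes_on_previous)

def harmless_change_bot(c: Change, max_size: int = 20) -> bool:
    """Accept small changes that leave existing tests alone and
    keep the whole suite (including any new tests) green."""
    return (c.size <= max_size
            and not c.modifies_existing_tests
            and c.all_tests_pass)

def revert_bot(hours_failing: float, limit: float = 24.0) -> bool:
    """Revert to the last good version if tests have been failing
    for longer than the limit."""
    return hours_failing > limit
```

Note how the first two bots compose: a bad change that slips past harmless-change-bot is answered with a regression test, and revert-bot does the rest.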
Would more people write regression tests for their
issues if they knew that a bot would accept them?
Or say that someone makes a bad change but gets
it past harmless-change-bot because no existing
test covers it. No lengthy argument needed. Write
a regression test and let regression-test-bot and
revert-bot team up to take care of the problem. In
general, move contributor energy away from arguing
with people and toward test writing, and reduce the
size of the maintainer's to-do list.
This is not new. Right-wing shitlords, at least the
best of them, are the masters of database marketing.
They absolutely _kill it_, and they have been
ever since Marketing as we know it became a thing.
All the creepy surveillance marketing stuff they're
doing today is just another set of tools in an
expanding core competency.
Every once in a while you get an exception. The
environmental movement became a direct mail
operation in response to Interior Secretary James
Watt, who alarmed environmentalists enough that
organizations could reliably fundraise with direct
mail copy quoting from Watt's latest speech. And the
Democrats tried that "Organizing for America" thing
for a little while, but, man, their heart just wasn't
in it. They dropped it like a Moodle site during
summer vacation. Somehow, the creepier the marketing,
the more it skews "red". The more creativity involved,
the more it skews "blue" (using the USA meanings of
those colors.) When we make decisions about how much
user surveillance we're going to allow on a platform,
we're making a political decision.
News sites want to go to Congress, to get permission
to play for third place in their own business?
You want permission to bring fewer resources and
less experience to a surveillance marketing game
that the Internet companies are already losing?
We know the qualities of a medium that you win by
being creepier, and we know the qualities of a medium
that you can win with reputation and creativity.
Why waste time and money asking Congress for the
opportunity to lose, when you could change the game?
Maybe achieving balance in political views depends
on achieving balance in business model. Instead of
buying in to the surveillance marketing model 100%,
and handing an advantage to one side, maybe news sites
should help users control what data they share
in order to balance competing political interests.
Given this, it appears that an open source venture (a company that can scale to millions of worker/owners creating a new economic ecosystem) that builds massive human curated databases and decentralizes the processing load of training these AIs could become extremely competitive.
But what if the economic ecosystem could exist
without the venture? Instead of trying to build a
virtual company with millions of workers/owners,
build a market economy with millions of participants
in tens of thousands of projects and tasks? All of
this stuff scales technically much better than it
scales organizationally—you could still be
part of a large organization or movement while only
participating directly on a small set of issues at
any one time. Instead of holding equity in a large
organization with all its political risk, you could
hold a portfolio of positions in areas where you have
enough knowledge to be comfortable.
Robb's opportunity is in training AIs, not in writing
code. The "oracle" for resolving AI-training or
dataset-building contracts would have to be different,
but the futures market could be the same.
The cheating project problem
Why would you invest in a futures contract on bug
outcomes when the project maintainer controls the
bug tracker?
And what about employees who are incentivized from
both sides: paid to fix a bug but able to buy futures
contracts (anonymously) that will let them make more
on the market by leaving it open?
In order for the market to function, the total
reputation of the project and contributors
must be high enough that outside participants
believe that developers are more motivated
to maintain that reputation than to "take a
dive" on a bug.
That implies that there is some kind of relationship
between the total "reputation capital" of a project
and the maximum market value of all the futures
contracts on it.
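Purely illustrative, one way to write that relationship down; the coefficient `k` is a made-up assumption.

```python
def max_open_interest(reputation_capital: float, k: float = 0.1) -> float:
    """Cap on the total value of open futures contracts a project
    can support: if developers would lose reputation worth more
    than they could win by throwing a bug's outcome, taking a
    dive doesn't pay."""
    return k * reputation_capital

cap = max_open_interest(1000.0)   # 100.0 with the assumed k
```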
Open source metrics
To put that another way, there must be some
relationship between the market value of futures
contracts on a project and the maximum reputation
value of the project. So that could be a proxy for
a difficult-to-measure concept such as "open source
health."
Open source journalism
Hey, tickers to put into stories! Sparklines! All
the charts and stuff that finance and sports reporters
can build stories around!
This paper presents the largest study to date on gender bias, where we compare acceptance rates of contributions from men versus women in an open source software community. Surprisingly, our results show that women's contributions tend to be accepted more often than men's. However, women's acceptance rates are higher only when they are not identifiable as women.
For outsiders, women coders who use gender-neutral profiles get their changes accepted 2.8% more of the time than men with gender-neutral profiles, but when their gender is obvious, they get their changes accepted 0.8% less of the time.
The experiment, launching this month,
will help reviewers who want to try breaking
habits of unconscious bias (whether by gender or
insider/outsider status) by concealing the name and
email address of a code author during a review on
Bugzilla. You'll be able to un-hide the information
before submitting a review, if you want, in order
to add a personal touch, such as welcoming a new contributor.
The extension will "cc" one of two special accounts
on a bug, to indicate if the review was done partly
or fully blind. This lets us measure its impact without
having to make back-end changes to Bugzilla.
(Yes, browser add-ons let you experiment
with changing a user's experience of
a site without changing production web
applications or content sites.)
Adfraud is a big problem, and we keep seeing two
basic approaches to it.
Flight to quality: Run ads only on trustworthy
sites. Brands are now playing the fraud game
with the "reputation coprocessors" of the
audience's brains on the brand's side. (Flight
to quality doesn't mean just advertise on the
same major media sites as everyone else—it
can scale downward with, for example, models that
let you choose sites that are "brand safe.")
Increased surveillance: Try to fight adfraud by
continuing to play the game of trying to get big-money
impressions from the cheapest possible site, but throw
more tracking at the problem. Biggest example of this
is to move ad money to locked-down mobile platforms
and away from the web.
Anyway, I'm interested in and optimistic about
the results of the recent Mozilla/Caribou Digital report.
It turns out that USA-style adtech is harder to do
in countries where users are (1) less accurately
tracked and (2) equipped with blockers to avoid
bandwidth-sucking third-party ads. That's likely
to mean better prospects for ad-supported news and
cultural works, not worse. This report points
out the good news that the so-called adtech
tax is lower in developing countries—so
what kind of ad-supported businesses will be
enabled by lower "taxes" and a "reinvention"
of more magazine-like advertising?
Of course, working in those markets is going to be
hard for big US or European ad agencies that are
now used to solving problems by throwing creepy
tracking at them. But the low rate of adtech
taxation sounds like an opportunity for creative
local agencies and brands. Maybe the report should
have been called something like "The Global South
is Shitty-Adtech-Proof, so Brands Built Online There
Are Going to Come Eat Your Lunch."
Why would you want the added complexity of a
market where anyone can take either side of a
futures contract on the status of a software bug,
and not just offer to pay people to fix bugs like a
sensible person? IMHO it's worth trying not just
because of the promise of lower transaction costs
and more market liquidity (handwave) but because it
enables other kinds of transactions. A few more examples:
Partial work I want a feature, and buy the
"unfixed" side of a contract that I expect to lose.
A developer decides to fix it, does the work, and
posts a pull request that would close the bug.
But the maintainer is on vacation, leaving the
developer's pull request hanging with a long comment
thread. Another developer is willing to take on the
political risk of merging the work, and buys out the
original developer's position.
Prediction/incentivization With the right market
design, a prediction that something won't happen
is the same as an incentive to make it happen.
If we make an attractive enough way for users
to hedge their exposure to lack of innovation,
we create a pool of wealth that can be captured
by innovators. (Related: dominant assurance contracts.)
Bug triage Much valuable work on bugs is in
the form of modifying metadata: assigning a bug
to the correct subsystem, identifying dependency
relationships, cleaning up spam, and moving invalid
bugs into a support ticket tracker or forum.
This work is hard to reward, and infamously hard to
find volunteers for. An active futures market could
include both bots that trade bugs probabilistically
based on status and activity, and active bug triagers
who make small market gains from modifying metadata
in a way that makes them more likely to be resolved.
More on the third connection in Benkler's tripod,
which was discussed in general terms earlier. This is just some notes
on more concrete examples of how new kinds of direct
connections between markets and peer production might
work in the future.
Smart contracts should make it possible to enable
these in a trustworthy, mostly decentralized, way.
Feature request I want emoji support on my blog,
so I file, or find, a wishlist bug on the open source
blog package I use: "Add emoji support." I then offer
to enter into a smart contract that will be worthless
to me if the bug is fixed on September 1, or give me
my money back if the bug is unfixed at that date.
A developer realizes that fixing the bug would be
easy, and wants to do it, so takes the other side
of the contract. The developer's side will expire
worthless if the bug is unfixed, and pay out if the
bug is fixed.
"Unfixed" results will probably include bugs that
are open, wontfix, invalid, or closed as duplicate
of a bug that is still open.
"Fixed" results will include bugs closed as fixed,
or any bug closed as a duplicate of a bug that is
closed as fixed.
If the developer fixes the bug, and its status changes
to fixed, then I lose money on the smart contract but
get the feature I want. If the bug status is still
unfixed, then I get my money back.
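The resolution rules and payoffs above can be sketched in a few lines. This is a toy model: the status names are Bugzilla-style conventions, and the payout scheme (the developer collects both stakes on "fixed"; the user gets their money back on "unfixed") is one reading of the post, not a real smart-contract API.

```python
FIXED = {"fixed"}
UNFIXED = {"open", "wontfix", "invalid"}

def resolve(status, duplicate_of=None):
    """Map a bug's final status to a contract outcome; a duplicate
    resolves the same way as the bug it duplicates."""
    if status == "duplicate":
        dup_status, dup_target = duplicate_of
        return resolve(dup_status, dup_target)
    if status in FIXED:
        return "fixed"
    if status in UNFIXED:
        return "unfixed"
    raise ValueError(f"unknown status: {status!r}")

def payout(outcome, user_stake, dev_stake):
    """Fixed: the developer collects both stakes. Unfixed: the user
    gets their money back; the developer's side expires worthless
    (the post leaves open where that stake goes)."""
    if outcome == "fixed":
        return {"user": 0.0, "developer": user_stake + dev_stake}
    return {"user": user_stake, "developer": 0.0}

# The emoji bug is closed as a duplicate of a bug that was fixed:
outcome = resolve("duplicate", duplicate_of=("fixed", None))
print(outcome)                      # → fixed
print(payout(outcome, 50.0, 50.0))  # → {'user': 0.0, 'developer': 100.0}
```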
So far this is just one user paying one developer to
write a feature. Not especially exciting. There is
some interesting market design work to be done here,
though. How can the developer signal serious interest
in working on the bug, and get enough upside to be
meaningful, without taking too much risk in the event
the fix is not accepted on time?
Arbitrage I post the same offer, but another user
realizes that the blog project can only support emoji
if the template package that it depends on supports
them. That user becomes an arbitrageur: takes the
"fixed" side of my offer, and the "unfixed" side of
the "Add emoji support" bug in the template project.
As an end user, I don't have to know the dependency
relationship, and the market gives the arbitrageur
an incentive to collect information about multiple
dependent bugs into the best place to fix them.
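The arbitrageur's exposure can be checked with a quick payoff table. The stake sizes and the winner-takes-both-stakes contract model are assumptions for illustration:

```python
def side_pnl(side, outcome, stake):
    """P&L of one side of a winner-takes-both-stakes contract."""
    return stake if side == outcome else -stake

def arbitrageur_pnl(blog_outcome, template_outcome,
                    blog_stake=100.0, template_stake=40.0):
    # Holds the "fixed" side of the blog bug (collects if it is fixed)
    # and the "unfixed" side of the template bug (pays the template
    # developer if that bug gets fixed).
    return (side_pnl("fixed", blog_outcome, blog_stake)
            + side_pnl("unfixed", template_outcome, template_stake))

for blog, template in [("fixed", "fixed"),
                       ("unfixed", "fixed"),
                       ("unfixed", "unfixed")]:
    print(blog, template, arbitrageur_pnl(blog, template))
# → fixed fixed 60.0       (both fixed: the spread is the profit)
# → unfixed fixed -140.0   (template fixed, but the blog fix never landed)
# → unfixed unfixed -60.0  (nothing happens; the blog-side stake is lost)
```

The arbitrageur profits only if paying for the template fix actually unblocks the blog fix, which is exactly the dependency information the market is extracting.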
Front-running Dudley Do-Right's open source
project has a bug in it, users are offering to buy the
"unfixed" side of the contract in order to incentivize
a fix, and a trader realizes that Dudley would be
unlikely to let the bug go unfixed. The trader takes
the "fixed" side of the contract before Dudley wakes
up. The deal means that the market gets information
on the likelihood of the bug being fixed, but the
developer doing the work does not profit from it.
This is a "picking up nickels in front of a
steamroller" trading strategy. The front-runner is
accepting the risk of Dudley burning out, writing a
long Medium piece on how open source is full of FAIL,
and never fixing a bug again.
Front-running game theory could be interesting. If
developers get sufficiently annoyed by front-running,
they could delay fixing certain bugs until after the
end of the relevant contracts. A credible threat to
do this might make front-runners get out of their
positions at a loss.
CVE prediction A user of a static analysis tool
finds a suspicious pattern in a section of a codebase,
but cannot identify a specific vulnerability. The
user offers to take one side of a smart contract
that will pay off if a vulnerability matching a
certain pattern is found. A software maintainer or
key user can take the other side of these contracts,
to encourage researchers to disclose information and
focus attention on specific areas of the codebase.
Security information leakage Ernie and Bert
discover a software vulnerability. Bert sells it to
foreign spies. Ernie wants to get a piece of the
action, too, but doesn't want Bert to know, so he
trades on a relevant CVE prediction. Neither Bert nor
the foreign spies know who is making the prediction,
but the market movement gives white-hat researchers
a clue on where the vulnerability can be found.
Open source metrics: Prices and volumes on bug
futures could turn out to be a more credible signal
of interest in a project than raw activity numbers.
It may be worth using a bot to trade on a project you
depend on, just to watch the market move. Likewise,
new open source metrics could provide useful trading
strategies. If sentiment analysis shows that a
project is melting down, offer to take the "unfixed"
side of the project's long-running bugs? (Of course,
this is the same market action that incentivizes
fixes, so betting that a project will fail is the
same thing as paying them not to. My brain hurts.)
What's an "oracle"?
The "oracle" is the software component that moves
information from the bug tracker to the smart
contracts system. Every smart contract has to be tied
to a given oracle that both sides trust to resolve it.
For CVE prediction, the oracle is responsible
for pattern matching on new CVEs, and feeding the
info into the smart contract system. As with all
of these, CVE prediction contracts are tied to a specific oracle.
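A minimal sketch of such an oracle: poll the tracker, and when a tracked bug reaches a terminal status, report the outcome to the contract system. The tracker and contract interfaces here are hypothetical stand-ins, not a real Bugzilla or smart-contract API.

```python
# Terminal bug statuses and the contract outcome each one implies.
TERMINAL = {"fixed": "fixed", "wontfix": "unfixed", "invalid": "unfixed"}

class Oracle:
    def __init__(self, tracker, contracts):
        self.tracker = tracker      # e.g. a bug tracker REST client
        self.contracts = contracts  # open contracts keyed by bug id

    def poll(self):
        # Settle and retire contracts whose bugs reached a terminal status.
        for bug_id, contract in list(self.contracts.items()):
            status = self.tracker.get_status(bug_id)
            if status in TERMINAL:
                contract.settle(TERMINAL[status])
                del self.contracts[bug_id]

# Hypothetical stand-ins for the tracker and the contract system:
class FakeTracker:
    def __init__(self, statuses):
        self.statuses = statuses
    def get_status(self, bug_id):
        return self.statuses[bug_id]

class FakeContract:
    outcome = None
    def settle(self, outcome):
        self.outcome = outcome

contract = FakeContract()
oracle = Oracle(FakeTracker({42: "fixed", 43: "open"}),
                {42: contract, 43: FakeContract()})
oracle.poll()
print(contract.outcome)          # → fixed
print(sorted(oracle.contracts))  # → [43]  (still open)
```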
Bots might have several roles.
Move investments out of duplicate bugs. (Take a "fixed"
position in the original and an "unfixed" position in the
duplicate, or vice versa.)
Make small investments in bugs that appear valid
based on project history and interactions by
established contributors.
Track activity across projects and social sites
to identify qualified bug fixers who are unlikely
to fix a bug within the time frame of a contract,
and take "unfixed" positions on bugs relevant to them.
For companies: when a bug is mentioned in an
internal customer support ticketing system, buy
"unfixed" on that bug. Map confidential customer
needs to possible fixers.
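The first bot role above, moving investments out of duplicate bugs, might look something like this. The market API (`price`, `buy`) is hypothetical; prices stand in for the market's probability that a bug resolves "fixed":

```python
def rebalance_duplicate(market, dup_id, original_id, size=10.0):
    """A bug marked as a duplicate should resolve exactly like its
    original, so a price gap between the two is close to risk-free:
    buy "fixed" on the cheaper bug and "unfixed" on the other."""
    p_dup = market.price(dup_id)
    p_orig = market.price(original_id)
    if p_dup < p_orig:
        market.buy(dup_id, side="fixed", size=size)
        market.buy(original_id, side="unfixed", size=size)
    elif p_dup > p_orig:
        market.buy(dup_id, side="unfixed", size=size)
        market.buy(original_id, side="fixed", size=size)

# Hypothetical stand-in for the market, recording orders:
class FakeMarket:
    def __init__(self, prices):
        self.prices = prices
        self.orders = []
    def price(self, bug_id):
        return self.prices[bug_id]
    def buy(self, bug_id, side, size):
        self.orders.append((bug_id, side, size))

# Bug 1 was just marked a duplicate of bug 2, but trades cheaper.
market = FakeMarket({1: 0.30, 2: 0.55})
rebalance_duplicate(market, dup_id=1, original_id=2)
print(market.orders)  # → [(1, 'fixed', 10.0), (2, 'unfixed', 10.0)]
```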
Content Neutrality: Content blocking software
should focus on addressing potential user needs
(such as on performance, security, and privacy)
instead of blocking specific types of content
(such as advertising).
Transparency & Control: The content blocking
software should provide users with transparency and
meaningful controls over the needs it is attempting
to address.
Openness: Blocking should maintain a
level playing field and should block under the
same principles regardless of source of the
content. Publishers and other content providers
should be given ways to participate in an open Web
ecosystem, instead of being placed in a permanent
penalty box that closes off the Web to their
products and services.
[T]he police are the public and that the public are the police, the police being only members of the public who are paid to give full-time attention to duties which are incumbent on every citizen in the interests of community welfare and existence.
Web browser developers have similar responsibilities
to those of Peel's ideal police: to build a browser
to carry out the user's intent, or, when setting
defaults, to understand widely held user norms and
implement those, while giving users the affordances
to change the defaults if they choose.
The question now is how to apply content blocking
principles to today's web environment. Some qualities
of today's situation are:
Tracking protection often doesn't have to be
perfect, because adfraud. The browser can provide
some protection, and influence the market in a
positive direction, just by getting legit users
below the noise floor of fraudbots.
Tracking protection has the potential to intensify
a fingerprinting arms race that's already going on,
by forcing more adtech to rely on fingerprinting
in place of third-party cookies.
Fraud is bad, but not all anti-fraud is good.
Anti-fraud technologies that track users can create
the same security risks as other tracking—and
enable adtech to keep promising real eyeballs on
crappy sites. The "flight to quality" approach
to anti-fraud does not share these problems.
Adtech and adfraud can peek at Mozilla's homework,
but Mozilla can't see theirs. Open source projects
must rely on unpredictable users, not unpredictable
platform decisions, to create uncertainty.
Which suggests a few tactics—low-risk ways to
apply content blocking principles to address today's environment:
Empower WebExtensions developers and users. Much
of the tracking protection and anti-fingerprinting
magic in Firefox is hidden behind preferences. This
makes a lot of sense because it enables developers
to integrate their work into the browser in parallel
with user testing, and enables Tor Browser to do
less patching. IMHO this work is also important
to enable users to choose their own balance between
privacy/security and breaking legacy sites.
Inform and nudge users who express an interest
in privacy. Some users care about privacy, but
don't have enough information about how protection
choices match up with their expectations. If
a user cares enough to turn on Do Not Track,
change cookie settings, or install an ad blocker,
then try suggesting a tracking protection setting
or tool. Don't assume that just because a user
has installed an ad blocker with a deceptive privacy
policy that the user would not choose privacy if asked.
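The nudge logic above amounts to a simple heuristic: treat choices the user has already made as privacy signals. A sketch, with illustrative signal names rather than any real browser API:

```python
# Signals that a user has already expressed an interest in privacy.
# The names are illustrative, not a real browser settings API.
PRIVACY_SIGNALS = {"do_not_track_enabled",
                   "changed_cookie_settings",
                   "installed_ad_blocker"}

def should_suggest_protection(user_choices):
    """Nudge only users whose existing choices already express an
    interest in privacy; leave everyone else alone."""
    return bool(PRIVACY_SIGNALS & set(user_choices))

print(should_suggest_protection({"do_not_track_enabled"}))  # → True
print(should_suggest_protection({"changed_theme"}))         # → False
```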
Understand and report on adfraud. Adfraud
is more than just fake impressions and clicks.
New techniques include attribution fraud: taking
advantage of tracking to connect a bogus ad impression
to a real sale. The complexity of attribution models
makes this hard to track down. (Criteo and Steelhouse
settled a lawsuit about this before discovery could
reveal more.)
A multi-billion-dollar industry is devoted to
spreading a story that minimizes adfraud, while
independent research hints at a complex and lucrative
adfraud scene. Remember how there were two Methbot stories?
Methbot got a bogus block of IP addresses, and Methbot
circumvented some widely used anti-fraud scripts.
The ad networks dealt with the first one pretty
quickly, but the second is still a work in progress.
The more that Internet freedom lovers can help
marketers understand adfraud, and related problems
such as brand-unsafe ad placements, the more that
the content blocking story can be about users, legit
sites, and brands dealing with problem tracking,
and not just privacy nerds against all web business.
Since most software is sold with an “as is” license, meaning the company is not legally liable for any issues with it even on day one, it has not made much sense to spend the extra money and time required to make software more secure quickly.
The software business is still stuck on the kind of
licensing that might have made sense in the 8-bit
micro days, when "personal computer productivity"
was more aspirational than a real thing, and software
licenses were printed on the backs of floppy sleeves.
Today, software is part of products that do real
stuff, and it makes zero sense to ship a real product,
that people's safety or security depends on, with the
fine print "WE RESERVE THE RIGHT TO TOTALLY HALF-ASS
OUR JOBS" or in business-speak, "SELLER DISCLAIMS
THE IMPLIED WARRANTY OF MERCHANTABILITY."
But what about open source and
collaboration and science, and all that
stuff? Software can be both "product" and "speech."
Should there be a warranty on speech? If I dig up my
old script for re-running the make command when a
source file changes, and put it on the Internet,
should I be putting a warranty on it?
It seems that there are two kinds of software: some is
more product-like, and should have a grown-up warranty
on it like a real business. And some software is more
speech-like, and should have ethical requirements like
a scientific paper, but not a product-like warranty.
What's the dividing line? Some ideas.
"productware is shipped as executables, freespeechware
is shipped as source code" Not going to work for
elevator_controller.php or a home router security fix.
"productware is preinstalled, freespeechware is downloaded
separately" That doesn't make sense when even
implanted defibrillators can update over the net.
"productware is proprietary, freespeechware is open
source" Companies could put all the fragile stuff
in open source components, then use the DMCA and
CFAA to enable them to treat the whole compilation
as proprietary.
Software companies are built to be good at getting
around rules. If a company can earn all its money in
faraway Dutch Sandwich Land and be conveniently too
broke to pay the IRS in the USA, then it's going to
be hard to make it grow up licensing-wise without
hurting other people first.
How about splitting out the legal advantages that the
government offers to software and extending some to
productware, others to freespeechware?
license may disclaim implied warranty
no anti-reverse-engineering clause in a freespeechware license is enforceable
freespeechware is not a "technological protection measure" under section 1201 of Title 17 of the United States Code (DMCA anticircumvention)
exploiting a flaw in freespeechware is never a violation of the Computer Fraud and Abuse Act
If the license allows it, a vendor may sell freespeechware, or a
derivative work of it, as productware. (This
could be as simple as following the You may
charge any price or no price for each copy that
you convey, and you may offer support or warranty
protection for a fee. term of the GPL.)
license may not disclaim implied warranty
licensor and licensee may agree to limit reverse engineering rights
DMCA and CFAA apply (reformed of course, but that's another story)
It seems to me that there needs to be some kind of
quid pro quo here. If a company that sells software
wants to use government-granted legal powers to
control its work, that has to be conditioned on not
using those powers just to protect irresponsible behavior.
Check it out—I'm "on Facebook"
again. Just fixed my gateway through
dlvr.it. If you're reading
this on Facebook, that's why.
Dlvr.it is a nifty service that will post to
social sites from an RSS feed. If you don't
run your own linklog feed, the good news is that
Pocket will generate RSS feeds from the articles you save,
so if you want to share links with people still on
Facebook, the combination of Pocket and dlvr.it
makes that easy to do without actually spending
human eyeball time there. My linklog comes from my
own weird feedreader, which works for me but isn't
really ready for other users.
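The core of what a dlvr.it-style gateway does with a feed can be sketched with the standard library: pull titles and links out of RSS items, leaving the actual posting step out. The sample feed is made up for illustration:

```python
import xml.etree.ElementTree as ET

def feed_items(rss_text):
    """Yield (title, link) pairs from an RSS 2.0 document."""
    root = ET.fromstring(rss_text)
    for item in root.iter("item"):
        yield item.findtext("title"), item.findtext("link")

# A made-up RSS feed standing in for a Pocket-generated one:
sample = """<rss version="2.0"><channel>
  <item>
    <title>Bug futures</title>
    <link>https://example.com/bug-futures</link>
  </item>
</channel></rss>"""

for title, link in feed_items(sample):
    print(title, "->", link)  # → Bug futures -> https://example.com/bug-futures
```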
There's a story about Thomas Nelson,
leader of the Virginia Militia in the Revolutionary War:
During the siege and battle Nelson led the Virginia Militia whom he had personally organized and supplied with his own funds. Legend had it that Nelson ordered his artillery to direct their fire on his own house which was occupied by Cornwallis, offering five guineas to the first man who hit the house.
Would Facebook's owners do the same, now that we
know that foreign interests use Facebook to subvert
America? Probably not. The Nelson story is just an
unconfirmed patriotic anecdote, and we can't expect
that kind of thing from today's post-patriotic
investor class. Anyway, just seeing if I can
move Facebook's bots/eyeballs ratio up a little.
I'm thankful that the sewing machine
was invented a long time ago, not today. If the
sewing machine were invented today, most sewing
tutorials would be twice as long, because all the
thread would come in proprietary cartridges, and you
would usually have to hack the cartridge to get the
type of thread you need in a cartridge that works
with your machine.
Tracking protection is still hard. You have to
provide good protection from third-party tracking,
which users generally don't want, without breaking
legit third-party services such as content delivery
networks, single sign-on systems, and shopping carts.
Protection is a balance, similar to the problem of
filtering spam while delivering legit mail. Just as
spam filtering helps enable legit email marketing,
tracking protection tends to enable legit advertising
that supports journalism and cultural works.
In the long run, just as we have seen with spam
filters, it will be more important to make protection
hard to predict than to run the perfect protection
out of the box. Do
not repeat the tactics which have gained you
one victory, but let your methods be regulated
by the infinite variety of circumstances.
— Sun Tzu
A spam filter, or browser, that always does the same
thing will be analyzed and worked around. A mail
service that changes policies to respond to current
spam runs, or an unpredictable ecosystem of tracking
protection add-ons that browser users can install in
unpredictable combinations, is likely to be harder.
But most users aren't in the habit of installing
add-ons, so browsers will probably have to give them
a nudge, like Microsoft Windows does when it nags
the user to pick an antivirus package (or did last
time I checked.) So the decentralized way to catch
up to Apple could end up being something like:
When new tracking protection methods show up
in the privacy literature, quietly build the needed
browser add-on APIs to make it possible for
new add-ons to implement them.
Do user research to
guide the content and timing of nudges. (Some users
prefer to be tracked, and should be offered a
chance to silence the warnings by affirmatively
choosing a do-nothing protection option.)
Help users share information about the pros and
cons of different tools. If a tool saves lots of
bandwidth and battery life but breaks some site's
comment form, help the user make the right choice.
Sponsor innovation challenges to incentivize
development, testing, and promotion of diverse
tracking protection tools.
Any surveillance marketer can install and test a
copy of Safari, but working around an explosion of
tracking protection tools would be harder. How to
set priorities when they don't know which tools will catch on?
What about adfraud?
Tracking protection strategies have to take adfraud
into account. Marketers have two choices for how to
deal with adfraud:
flight to quality
increased surveillance
Flight to quality is better in the long run. But
it's a problem from the point of view of adtech
intermediaries because it moves more ad money to
high-reputation sites, and the whole point of adtech
is to reach big-money eyeballs on cheap sites.
Adtech firms would rather see surveillance-heavy
responses to adfraud. One way to help shift marketing
budgets away from surveillance, and toward flight
to quality, is to make the returns on surveillance
investments less predictable.
This is possible to do without making value
judgments about certain kinds of sites. If you like
a site enough to let it see your personal info,
you should be able to do it, even if in my humble
opinion it's a crappy site. But you can have this
option without extending to all crappy sites the
confidence that they'll be able to live on leaked
data from unaware users.
I have to admit that some people hate me, but I have to tell you something about hate. If sending an electronic advertisement through email warrants hate, then my answer to those people is "Get a life. Don't hate somebody for sending an advertisement through email." There are people out there that also like us.
According to spammers, spam filtering was just Internet
nerds complaining about something that regular users
actually like. But the spam debate ended when big
online services, starting with MSN, started talking
about how they build for their real users instead of
for Wallace's hypothetical spam-loving users.
If you missed the email spam debate,
don't worry. Wallace's talking
points about spam filters constantly get recycled by
surveillance marketers talking about tracking protection.
But now it's not email spam that users supposedly
crave. Today, the Interactive Advertising Bureau
tells us that users want ads that "follow them around"
from site to site.
Enough background. Just as the email spam debate
ended with MSN's campaign, the third-party
web tracking debate ended on June 5, 2017:
With Intelligent Tracking Prevention, WebKit strikes a balance between user privacy and websites’ need for on-device storage. That said, we are aware that this feature may create challenges for legitimate website storage, i.e. storage not intended for cross-site tracking.
Surveillance marketers come up with all kinds of
hypothetical reasons why users might prefer targeted
ads. But in the real world, Apple invests time
and effort to understand user experience. When Apple
communicates about a feature, it's because that
feature is likely to keep a user satisfied
enough to buy more Apple devices. We can't read their
confidential user research, but we can see what the
company learned from it based on how they communicate.
(Imagine for a minute that Apple's user research
had found that real live users are more like the
Interactive Advertising Bureau's idea of a user.
We might see announcements more like "Safari
automatically shares your health and financial
information with brands you love!" Anybody got one
of those to share?)
Saving an out-of-touch ad industry
Advertising supports journalism and cultural
works that would not otherwise exist.
It's too important not to save. Bob Hoffman asks:
[H]ow can we encourage an acceptable version of online advertising that will allow us to enjoy the things we like about the web without the insufferable annoyance of the current online ad model?
The browser has to be part of the answer. If the
browser does its job, as Safari is doing, it can
play a vital role in re-connecting users with legit
advertising—just as users have come to trust
legit email newsletters now that they have effective
spam filters.
Safari's Intelligent Tracking Prevention is not the
final answer any more than Paul Graham's "A plan
for spam" was
the final spam filter. Adtech will evade protection
tools just as spammers did, and protection will have
to keep getting better. But at least now we can
finally say debate over, game on.
Looks like the spawn of Privacy Badger and cookie
double-keying, designed to balance user protection
from surveillance marketing with minimal breakage of
sites that depend on third-party resources.
(Now all the webmasters will fix stuff to make it
work with Intelligent Tracking Prevention, which
makes it easier for other browsers and privacy tools
to justify their own features to protect users.)
Of course, now the surveillance marketers will rely
more on passive fingerprinting, and Apple has an
advantage there because there are fewer different
Safari-capable devices. But browsers need to fix
fingerprinting anyway.
Apple does massive amounts of user research and
it's fun to watch the results leak through when
they communicate about features. Looks like they
have found that users care about being "followed"
from site to site by ads, and that users are still
pretty good at applied behavioral economics. The side
effect of tracking protection, of course, is that
it takes high-reputation sites out of competition
with the bottom-feeders to reach their own audiences,
so Intelligent Tracking Prevention is great news for
high-reputation sites.
Meanwhile, I don't get Google's weak "filter" approach.
Looks like a transparently publisher-hostile move
(since it blocks some potentially big-money
ads without addressing the problem of site
commodification), unless I'm missing something.
Benkler builds on the work of Ronald Coase, whose
The Nature of the Firm explains how transaction
costs affect when firms can be a more efficient
way to organize work than markets. Benkler adds
a third organizational model, peer production.
Peer production, commonly seen in open source
projects, is good at matching creative people to problems.
As peer production relies on opening up access to resources for a relatively unbounded set of agents, freeing them to define and pursue an unbounded set of projects that are the best outcome of combining a particular individual or set of individuals with a particular set of resources, this open set of agents is likely to be more productive than the same set could have been if divided into bounded sets in firms.
Firms, markets, and peer production all have their
advantages, and in the real world, most productive
activity is mixed.
Managers in firms manage some production directly
and trade in markets for other production. This
connection in the firms/markets/peer production
tripod is as old as firms.
The open source software business is the second
connection. Managers in firms both manage software
production directly and sponsor peer production
projects, or manage employees who participate in them.
But what about the third possible connection between legs of the tripod?
Is it possible to make a direct connection between
peer production and markets, one that doesn't go
through firms? And why would you want to connect peer
production directly to markets in the first place?
Not because that's where the money is, but because markets
are a good tool for getting information out of people,
and projects need information.
Save the whole Kooths et al. paper to read
later. Best case against open source that I know
of—all the points that a serious open source
proponent needs to be able to address. Stefan Kooths,
Markus Langenfurth, and Nadine Kalwey wrote, in
"Open-Source Software: An Economic Assessment"
Developers lack key information due to the absence of pricing in open-source software. They do not have information concerning customers’ willingness to pay (= actual preferences), based on which production decisions would be made in the market process. Because of the absence of this information, supply does not automatically develop in line with the needs of the users, which may manifest itself as oversupply (excessive supply) or undersupply (excessive demand). Furthermore, the functional deficits in the software market also work their way up to the upstream factor markets (in particular, the labor market for developers) and–depending on the financing model of the open-source software development–to the downstream or parallel complementary markets (e.g., service markets) as well.
Because the open-source model at its core deliberately rejects the use of the market as a coordination mechanism and prevents the formation of price information, the above market functions cannot be satisfied by the open-source model. This results in a systematic disadvantage in the provision of software in the open-source model as compared to the proprietary production process.
The workaround is to connect peer production
to markets by way of firms. But the more that
connections between markets and peer production
projects have to go through firms, the more chances
to lose information. That's not because firms
are necessarily dysfunctional (although most are,
in different ways). A firm might rationally choose
to pay for the implementation of a feature that they
predict will get 100 new users, paying $5000 each,
instead of a feature that adds $1000 of value for
1000 existing users, but whose absence won't stop
them from renewing.
Some ways to connect peer production to markets
are already working. Crowdfunding for software and
bounty programs are furthest along, both offering support for
developers who have already built a reputation.
A decentralized form of connection is the token,
which Balaji S. Srinivasan describes as a tradeable
version of API keys. If I believe that your network
service will be useful to me in the future, I can
pre-buy access to it. If I think your service will
really catch on, I can buy a bunch of extra tokens
and sell them later, without needing to involve you.
(and if your service needs network effects, now I
have an incentive to promote it, so that there will
be a seller's market for the tokens I hold.)
Dominant assurance contracts, proposed by Alexander
Tabarrok, build on the crowdfunding
model, with the extra twist that the person proposing
the project has to put up some seed money that
is divided among backers if the project fails to
secure funding. This is supposed to bring in extra
investment early on, before a project looks likely
to meet its goal.
What happens when the software industry is forced to grow up?
I'm starting to think that finishing the tripod,
with better links from markets to peer production,
is going to matter a lot more soon, because of the
software quality problem.
Today's software, both proprietary and open
source, is distributed under ¯\_(ツ)_/¯ terms.
"Disclaimer of implied warranty of merchantability" is
lawyer-speak for "we reserve the right to half-ass our
jobs lol." As Zeynep Tufekci wrote in the New York
"The World Is Getting Hacked. Why Don’t We Do More
to Stop It?" At some point the users are going to
get fed up, and we're going to have to. An industry
as large and wealthy as software, still sticking to
Homebrew Computer Club-era disclaimers, is like a
40-something-year-old startup bro doing crimes and
claiming that they're just boyish hijinks. This whole
disclaimer of implied warranty thing is making us
look stupid, people. (No, I'm not for warranties
on software that counts as a scientific or technical
communication, or on bona fide collaborative development,
but on a product product? Come on.)
Grown-up software liability policy is coming,
but we're not ready for it. Quality software
is not just a technically hard problem. Today,
we're set up to move fast,
break things, and ship dancing pigs—with incentives
more powerful than incentives to build secure
software. Yes, you get the occasional DARPA challenge or tool to facilitate incremental improvement, but most software is incentivized through too many layers of principal-agent problems. Everything is broken.
If governments try to fix software liability before
the software scene can fix the incentives problem,
then we will end up with a stifled, slowed-down
software scene, a few incumbent software companies
living on regulatory capture, and probably not much
real security benefit for users. But what if users
(directly or through their insurance companies) are
willing to pay to avoid the costs of broken software,
in markets, and open source developers are willing
to participate in peer production to make quality
software, but software firms are not set up to connect the two?
What if there is another way to connect the "I would
rather pay a little more and not get h@x0r3d!" demand
to the "I would code that right and release it in open
source, if someone would pay for it" supply?
In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”
Bob Hoffman makes a good case for getting rid of user tracking in web advertising. But in order to take the next
steps, and not just talk among ourselves about things
that would be really great in the future, we first
need to think about the needs that tracking seems to
satisfy for legit marketers.
What I'm not going to do is pull out the
argument that's in every first comment on
every blog post that criticizes tracking: that tracking is just technology and is somehow value-neutral.
Tracking, like all technologies, enables some kinds
of activity better than others. When tracking offers
marketers the opportunity to reach users based on
who the user is rather than on what they're reading,
watching, or listening to, that means ad money can follow the audience to the cheapest available site, instead of supporting the content that attracted them.
But if tracking is so bad, then why, when you go to
any message board or Q&A site that discusses marketing
for small businesses, is everyone discussing those
nasty, potentially civilization-extinguishing targeted
ads? Why is nobody popping up with a question on how
to make the next They Laughed When I Sat Down At the Piano?
Targeted ads are self-serve and easy to
get started with. If you have never bought
a Twitter or Facebook ad, get out your credit
card and start a stopwatch. These ads might not perform especially well, but they have the lowest time investment of any legit
marketing project, so probably the only marketing
project that time-crunched startups can do.
Targeted ads keep your OODA loop tight. Yes,
running targeted ads can be addictive—if you thought the attention slot machine
on social sites was bad, try the advertiser
dashboard. But you're able to use them to
learn information that can help with the rest
of marketing. If you have the budget to exhibit
at one conference, compare Twitter ads targeted
to attendees of conference A with ads targeted to
attendees of conference B, and you're closer to knowing which conference to pick.
Marketing has two jobs: sell stuff to customers
and sell Marketing to management. Targeting is
great for the second one, since it comes with the
numbers that will help you take credit for results.
We're not going to be able to get rid of risky
tracking until we can understand the needs that it
fills, not just for big advertisers who can afford the
time and money to show up in Cannes every year, but
for the company founder who still has $1.99 business cards
and is doing all of Marketing themselves.
(The party line among web privacy people
can't just be that GDPR is going to
save us because the French powers that be
are all emmerdés ever since the surveillance/shitlord crowd tried to run a US-style game on their political
system. That might sound nice, but put not your trust
in princes, man. Even the most arrogant Eurocrats in
the world will not be able to regulate indefinitely
against all the legit business people in their
countries complaining that they can't do something
they see as essential. GDPR will be temporary air
for building an alternative, not a fix in itself.)
Post-creepy web advertising is still missing some key features.
Quick, low-risk service. With the exception of the Project Wonderful model, targeted ads are quick and low-risk,
while signal-carrying ads are the opposite.
A high-overhead direct ad sales process is not a
drop-in replacement for an easy web form.
I don't think that's all of them. But I don't
think that the move to post-creepy web advertising
is going to be a rush, all at once, either.
Brands that have fly-by-night low-reputation
competitors, brands that already have many
tracking-protected customers, and brands with
solid email lists are going to be able to move
faster than marketers who are still making tracking
work. More: Work together to fix web ads? Let's.
I'm still two steps behind in devops
coolness for my network stuff. I don't even
have proper configuration management, and that's fine because Configuration Management is an antipattern now. Anyway, I still log in and actually run shell
commands on the server, and the LWN review of
mosh was helpful
to me. Now using mosh for connections that persist
across suspending the laptop and moving it from
network to network. More info: Mosh: the mobile shell
An end date for IP Maximalism
When did serious "Intellectual Property Maximalism"
end? I'm going to put it at September 18,
which is the date that the Gates Foundation announced
funding for the Public Library of Science's
journal PLoS Neglected Tropical Diseases.
When it's a serious matter of people's health,
open access matters, even to the author of "Open
Letter to Hobbyists". Since then, IP Maximalism
stories have been mostly about rent-seeking
behavior, which had been a big part of the
freedom lovers' point all along. (Nobody quoted in this story is pearl-clutching about "innovation", for example: Supreme Court ruling threatens to shut down cottage industry for small East Texas towns.)
Is it just me, or does it look to anyone else like the
man in the photo is checking the list of third-party
web trackers on the site to see who he can send a
National Security Letter to?
Could a US president who is untrustworthy
enough to be removed from office possibly be trustworthy
enough to comply with his side of a "Privacy Shield" deal?
If it's necessary for the rest of the world to
free itself of its dependence on the U.S.,
does that apply to US-based Internet companies that
have become a bottleneck for news site ad revenue,
and how is that going to work?
If you're "verified" on
Twitter, you probably miss these, so I'll just
use my Fair Use rights to share that one with you.
Twitter is a uniquely influential medium, one that
shows up on the TV news every night and on news
sites all day. But somehow, the plan to make money
from Twitter is to run the same kind of targeted ads
that anyone with a WordPress site can. And the latest
Twitter news is a privacy update that includes, among
other things, more tracking of users from one site to another.
Yes, the same kind of thing that Facebook already
does, and better, with more users. And the same kind
of thing that any web site can already get from an entire Lumascape of companies. Boring.
If you want to stick this kind of ad on your
WordPress site, you just have to cut and paste some
ad network HTML—not build out a deluxe office
space on Market Street in San Francisco the way
Twitter has. But the result is about the same.
What makes Twitter even more facepalm-worthy is that
they make a point of not showing the ads to the
influential people who draw attention to Twitter to
start with. It's like they're posting a big sign
that says STUPID AD ZONE: UNIMPORTANT PEOPLE ONLY.
Twitter is building something unique, but they're
selling generic impressions that advertisers can get anywhere.
So as far as I can tell, the Twitter business model
is something like:
Money out: build something unique and expensive.
Money in: sell the most generic and shitty
thing in the world.
Facebook can make this work because they have insane numbers of eyeball-minutes. Chump change per minute on Facebook still adds
up to real money. But Facebook is an outlier
on raw eyeball-minutes, and there aren't enough
minutes in the day for another. So Twitter
is on track to get sold for $500,000, like Digg was. Which is good news for me, because I know enough Twitter users that I can get that kind of money together.
So why should you help me buy Twitter when you
could just get the $500,000 yourself? Because I have
a secret plan, of course. Twitter is the site that
everyone is talking about, right? So run the ads
that people will talk about. Here's the plan.
Sell one ad per day. And everybody sees the same one.
Sort of like the back cover of the magazine that
everybody in the world reads (but there is no such
magazine, so that's why this is an opportunity.)
No more need to excuse the verified users from
the ads. Yes, an advertiser will have to provide a
variety of sizes and localizations for each ad (and
yes, Twitter will have to check that the translations
match). But it's the same essential ad, shown to
every Twitter user in the world for 24 hours.
No point trying to out-Facebook Facebook
or out-Lumascape the Lumascape.
Targeted ads are weak on signal, and a bunch of other companies are doing them more
cost-effectively and at higher volume, anyway.
Of course, this is not for everybody. It's for
brands that want to use a memorable, creative ad to
try for the same kind of global signal boost that
a good Tweet® can get. But if you want generic
targeted ads you can get those everywhere else on the
Internet. Where else can you get signal? In order
to beat current Twitter revenue, the One Twitter
Ad needs to go for about the same price as a Super
Bowl commercial. But if Twitter stays influential,
that's reasonable, and I make back the 500 grand and a lot more.
Internet users have been asking what they can do to protect their own data from this creepy, non-consensual tracking by Internet providers—for example, directing their Internet traffic through a VPN or Tor. One idea to combat this that’s recently gotten a lot of traction among privacy-conscious users is data pollution tools: software that fills your browsing history with visits to random websites in order to add “noise” to the browsing data that your Internet provider is collecting.
[T]here are currently too many limitations and too many unknowns to be able to confirm that data pollution is an effective strategy at protecting one’s privacy. We’d love to eventually be proven wrong, but for now, we simply cannot recommend these tools as an effective method for protecting your privacy.
This is one of those "two problems, one solution" situations.
The problem for makers and users of "data
pollution" or spoofing tools is QA. How do you
know that your tool is working? Or are surveillance
marketers just filtering out the impressions
created by the tool, on the server side?
The problem for companies using so-called Non-Human
Traffic (NHT) is that when users discover
NHT software (bots), the users tend to remove
it. What would make users choose to participate
in NHT schemes so that the NHT software can run for
longer and build up more valuable profiles?
So what if the makers of spoofing tools could get a
live QA metric, and NHT software maintainers could
give users an incentive to install and use their software?
NHT market as a tool for discovering information
Imagine a spoofing tool that offers an easy way
to buy bot pageviews, I mean buy Perfectly
Legitimate Data on how fast a site loads from various
home Internet connections. When the tool connects
to its server for an update, it gets a list of URLs
to visit—a mix of random sites, popular sites,
and paying customers.
Now the spoofing tool maintainer will be able to tell right away if the tool is really generating
realistic traffic, by looking at the market price
of pageviews. The maintainer will even be able to
tell whose tracking the tool can beat, by looking
at which third-party resources are included on the
pages getting paid-for traffic.
The money probably won't be significant, since real
web ad money is moving to whitelisted, legit sites
and away from fraud-susceptible schemes anyway, but
in the meantime it's a way to measure effectiveness.
Setting up a couple of Linux systems to work on FilterBubbler, which is one of the things that I'm up to at work. FilterBubbler is a WebExtension, and the setup instructions use the web-ext tool, so I need NPM. In order to keep all the NPM stuff
under my own home directory, but still put the
web-ext tool on my $PATH, I need to make one-line
edits to three files.
One line in ~/.npmrc
prefix = ~/.npm
One line in ~/.gitignore
One line in ~/.bashrc
(My ~/.bashrc has a bunch of export PATH= lines
so that when I add or remove one it's more likely to
get a clean merge. Because home directory in git.) I
think that's it. Now I can do
npm install --global web-ext
with no sudo or mess. And when I clone my home directory on another system it will just work.
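Since only the ~/.npmrc line appears above, here's a sketch of what all three one-line edits probably look like — the ~/.gitignore and ~/.bashrc lines are my assumptions about what makes web-ext land on $PATH, not the originals:

```
# ~/.npmrc -- send "global" npm installs to a directory I own
prefix = ~/.npm

# ~/.gitignore -- keep the npm tree out of the home-directory git repo
.npm/

# ~/.bashrc -- put npm-installed commands such as web-ext on $PATH
export PATH="$HOME/.npm/bin:$PATH"
```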
(This is an answer to a question on Twitter.
Twitter is the new blog comments (for now) and I'm
more likely to see comments there than to have time
to set up and moderate comments here.)
Adfraud is an easy way to make mad cash, adtech is
happily supporting it, and it all works because the
system has enough layers between CMO and fraud hacker
that everybody can stay as clean as they need to.
Users bear the privacy risks of adfraud, legit publishers pay for it,
and adtech makes more money from adfraud than fraud
hackers do. Adtech doesn't have to communicate or
coordinate with adfraud, just set up a fraud-friendly
system and let the actual fraud hackers go to work.
Bad for users, people who make legit sites, and civilization in general.
But one piece of good news is that adfraud can change
quickly. Adfraud hackers don't have time to get stuck
in conventional ways of doing things, because adfraud
is so lucrative that the high-skill players don't have
to stay in it for very long. The adfraud hackers
who were most active last fall have retired to run their
resorts or recording studios or wineries or whatever.
So how can privacy tools get a piece of the action?
One random idea is for an obfuscation tool
to participate in the market for so-called sourced traffic.
Fraud hackers need real-looking traffic and are
willing to pay for it. Supplying that traffic is
sketchy but legal. Which is perfect, because put
one more layer on top of it and it's not even sketchy.
And who needs to know if they're doing a good job
at generating real-looking traffic? Obfuscation tool
maintainers. Even if you write a great obfuscation
tool, you never really know if your tricks for helping
users beat surveillance are actually working, or if
your tool's traffic is getting quietly identified on
the server side.
In proposed new privacy tool model, outsourced QA pays YOU!
Set up a market where a Perfectly Legitimate Site
that is looking for sourced traffic can go to buy
pageviews, I mean buy Perfectly Legitimate Data on
how fast a site loads from various home Internet
connections. When the obfuscation tool connects to
its server for an update, it gets a list of URLs
to visit—a mix of random sites, popular sites, and paying customers.
Set a minimum price for pageviews that's high enough
to make it cost-ineffective for DDoS. Don't allow it
to be used on random sites, only those that the buyer
controls. Make them put a secret in an unlinked-to
URL or something. And if an obfuscation tool isn't
well enough sandboxed to visit a site that's doing
traffic sourcing, it isn't well enough sandboxed to
surf the web unsupervised at all.
Now the obfuscation tool maintainer will be able to tell right away if the tool is really generating
realistic traffic, by looking at the market price.
The maintainer will even be able to tell whose
tracking the tool can beat, by looking at which
third-party resources are included on the pages
getting paid-for traffic. And the whole thing can be
done by stringing together stuff that IAB members are
already doing, so they would look foolish complaining about it.
If you want people on the Internet to argue with you, say that you're making a statement about values.
If you want people to negotiate with you, say that you're making a statement about business.
If you want people to accept that something is inevitable, say that you're making a statement about technology.
The mixup between values arguments, business
arguments, and technology arguments might be
why people are confused about Brands need to fire adtech, by Doc Searls.
The set of trends that people call adtech is a
values-driven business transformation that is trying
to label itself as a technological transformation.
Some of the implementation involves technological
changes (NoSQL databases! Nifty!) but fundamentally
adtech is about changing how media business is
done. Adtech does have a set of values, none
of which are really commonly held even among people in
the marketing or advertising field, but let's not make
the mistake of turning this into either an argument
about values (that never accomplishes anything)
or a set of statements about technology (that puts
those with an inside POV on current technology at an
unnecessary advantage). Instead, let's look at the
business positions that adtech is taking.
Adtech stands for profitable
platforms, with commodity producers
of news and cultural works. Michael
Tiffany, CEO of advertising security firm White Ops, said: "The fundamental value proposition of these ad tech companies who are de-anonymizing the Internet is, 'Why spend big CPMs on branded sites when I can get them on no-name sites?'" This is not a
healthy situation, but it's a chosen path, not a
technologically inevitable one.
Adtech stands for making advertisers
support criminal and politically heinous
activity. I'll just let Bob Hoffman explain that one.
Fraudulent and brand-unsafe content is just the
overspray of the high value platforms/commoditized
content system, and advertisers have to accept
it in order to power that system. Or do they?
People have a lot of interesting decisions to make:
policy, contractual, infrastructural, and client-side.
When we treat the adtech movement as simply
technology, we take the risk of missing great
opportunities to negotiate for the benefit of brands,
publishers, and the audience.
This is a brand new blog, so I'm setting up
the basics. I just realized that I got the
whole thing working without a single script,
image, or HTML table. (These kids today
have it easy, with their media queries and CSS frameworks.)
One big question that I'm wondering about is: how many of the people
who visit here are using some kind of protection
from third-party tracking? Third-party tracking
has been an unfixed vulnerability in web browsers
for a long time. Check out the Unofficial Cookie
FAQ from 1997.
Third-party cookies are in there...and we're
still dealing with the third-party tracking problem?
In order to see how bad the problem is on this site,
I'm going to set up a little bit of _first_-party
data collection to measure people's vulnerability to
_third_-party data collection.
The three parts of that big question are:
Can a third-party tracker see state from other sites?
The test script is simple: all it does is swap out the tracking image source three times.
When the Aloodo tracking script runs, to check if this browser is blocking the script from loading.
When the Aloodo script confirms that tracking is possible.
The work is done in the setupAloodo function,
which runs after the page loads. First, it sets the
src for the tracking pixel to js.png, then sets
up two callbacks: one to run after the Aloodo script
is loaded, and switch the image to ld.png, and
one to run if the script can track the user,
and switch the image to td.png.
Step three: check the logs
Now I can use the regular server logs to compare the number of clients that load the original image with the number that load the two tracking images.
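The step-three comparison can be a one-liner per image. Here's a sketch as a small shell function; the image names are the ones described above, but the log path in the usage comment is an assumption — point it at your own server's access log:

```shell
# Tally how many clients reached each stage of the tracking test by
# counting requests for the three images in the access log.
count_images () {
    log="$1"   # path to the web server access log
    for img in js.png ld.png td.png; do
        printf '%s\t%s\n' "$img" "$(grep -c "GET /$img" "$log")"
    done
}

# Example (path is an assumption):
#   count_images /var/log/nginx/access.log
```

The gap between the js.png count and the td.png count is roughly the number of visitors whose browsers blocked the tracking.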
Metalsmith is pretty fun. The basic pipeline from
the article seems to work pretty well, but I ran
into a couple of issues. I might have solved these
in ways that are completely wrong, but here's what
works for me.
First, I needed to figure out how to get text from
an earlier stage of the pipeline. My Metalsmith
build is pretty basic:
turn Markdown into HTML (plus article metadata)
apply a template to turn the HTML version into
a complete page.
That's great, but the problem seems to be with getting
a copy of just the HTML from step 1 for building the
index page and the RSS feed. I don't
want the entire HTML page from step 2, just the inner
HTML from step 1.
The solution seems to be to take a snapshot in mid-pipeline.
This doesn't actually strip off the template, just
lets you capture an extra copy of the HTML before
templatization. This goes into the pipeline after "markdown"
but before the "layouts" step.
Select Browse, then Browse Local, then select the .qcow2 file.
That's it. Now looking at a virtual MS-Windows guest
that I can use for those troublesome web conferences
(and for testing web sites under MSIE. If you try
the tracking test,
it should take you to a protection page that prompts
you to turn on the EasyPrivacy Tracking Protection
List. That's a quick and easy way to speed up your
web browsing experience on MSIE.)
I have come to believe that advertising is the
original sin of the web. The fallen state of our
Internet is a direct, if unintentional, consequence of
choosing advertising as the default model to support
online content and services.
[T]he advertising industry has become the web's lapdog
– irresponsibly exaggerating the effectiveness of
online advertising and social media, ignoring the
abominable results of display advertising, glossing
over the fraud and corruption, and becoming a de
facto sales arm for the online ad industry.
Advertising can be a good thing.
Some of my favorite cultural goods are
paid for by advertising at its best. There should
be a way to make advertising work for the web, the
way it has worked for print magazines.
But Hoffman and Zuckerman are both right.
Web advertising has failed. We're throwing
away most of the potential value of the web as an ad medium by failing to fix privacy bugs. Web ads today
work more like email spam than like
magazine ads. The quest for "relevance" not only
makes targeted ads less valuable than untargeted
ones, but also wastes most of what advertisers
spend. Buy an ad on the web, and more of your money goes to intermediaries than to the content that helps your ad carry a signal.
From Zuckerman's point of view, advertising
is a problem, because advertising is full
of creepy stuff. From Hoffman's point
of view, the web is a problem, because
the web is full of creepy stuff. (Bonus
link: Big Brother Has Arrived, and He's…)
So let's re-introduce the web to advertising,
only this time, let's try it without the creepy stuff.
Brand advertisers and web content people have a lot
more in common than either one has with database
marketing. There are a lot of great opportunities
on the post-creepy web, but the first step is to get
the right people talking.
Andrew Cowie has written something similar. The main thing that this one does differently is to ask make which files matter to it, instead of doing an inotifywatch on the whole directory. Comments and suggestions are welcome.
The process is going to be a little different from
what you might be used to with another OS. If you
shop carefully (and reading blogs is a good first
step) then the drivers you will need are already
available through your Linux distribution's printer support packages.
HP has done a good job with enabling this.
The company has already released the necessary
printer software as open source, and your Linux
distribution has already installed it. So, go to the list of printers fully supported with the HPLIP software, pick a
printer you like, and you're done.
If you want a recommendation from me, the
HP LaserJet 3055,
a black and white all-in-one device,
has worked fine for me with various Linux setups
for years. It's also a scanner/copier/fax machine,
and you get the extra functionality for not much more
than the price of a regular printer. It also comes
with a good-sized toner cartridge, so your cost per
page is probably going to be pretty reasonable.
Other printer brands have given me more grief, but
fortunately the HP LaserJets are widely available
and don't jam much.
It's important not to show a smug expression on your
face while printing if users of non-Linux OSs are
still dealing with driver CDs or vendor downloads.
When you give travel directions, you include
landmarks, and "gone too far" points. Turn left after
you cross the bridge. Then look for my street and
make a right. If you go past the water tower you've
gone too far.
System administration instructions are much easier to follow if they include those kinds of check-ins, too. For example, if you explain how to set
up server software you can put in quick "landmark"
tests, such as, "at this point, you can run nmap and
see the port in the results." You can also include
"gone too far" information by pointing out problems
you can troubleshoot on the way.
A full-scale troubleshooting guide is a good idea,
but quick warning signs as you go along are helpful.
Much better than finding yourself lost at the end of
a long set of setup instructions.
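As a sketch of that kind of landmark check — the port number, the `ss -ltn` source, and the wording are all placeholders of mine, not from any particular setup guide:

```shell
# "Landmark" check for setup instructions: read listener output on stdin
# (for example, from `ss -ltn`) and report whether the expected port is there.
landmark () {
    port="$1"
    if grep -q ":$port "; then
        echo "landmark reached: something is listening on port $port"
    else
        echo "gone too far: nothing on port $port -- recheck the last step"
    fi
}

# Usage in a setup guide:  ss -ltn | landmark 8080
```

One line like this after each major step tells the reader immediately whether to continue or to back up.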
doesn't accept dotted quads for ranges, but
fortunately most of the commands that accept an IP
address will also take it in the form of a regular
decimal. (Spammers used to use this to hide their
naughty domains from scanners that only looked for
the dotted quad, while the browser would happily go to the decimal form.)
So here's an ugly-ass shell function to convert an
IP address to a decimal. If you have a better one,
please let me know and I'll update this page. (Yes,
I know this would be one line in Perl.)
dq2int () {
    if echo "$1" | grep -q '\.'; then
        dq2int $(echo "$1" | tr '.' ' ')      # split the dotted quad, recurse
    elif [ $# -eq 1 ]; then
        echo "$1"                             # fully combined: print the result
    else
        total=$1; next=$2; shift 2
        dq2int $(( total * 256 + next )) "$@"
    fi
}
It says "Personal and Confidential" or "IMPORTANT
CORRESPONDENCE REGARDING YOUR OVERPAYMENT" on the
envelope—can you really discard it without
opening it? You sure can. Some junk mailers disguise
their mail pieces as important correspondence from
companies you actually do business with, and the
USPS helped them out a lot by renaming "Bulk Mail"
to "Standard Mail". But you can look at the postage
to discard "stealth" junk mail without opening it.
The rule is that any bills or mail containing specific
information about your business relationship with
the company must be mailed First Class.
So, if "Standard Mail" or "STD" appears in the upper
right corner, it's not a bill, it's not your new
credit card, and it's not a check. It's just sneaky advertising.
All that is really needed on computers
is a "Calculate" button or omnipresent menu command
that allows you to take an arithmetic expression,
like 248.93 / 375, select it, and do the calculation
whether in the word processor, communications package,
drawing or presentation application, or just at the desktop.
Fortunately, there's a blue "Access
IBM" button on this keyboard that doesn't do much.
So, I configured tpb to make "Access IBM" do this:
If you want to do this, besides tpb, you'll need xsel and xte, which is part of xautomation. If you don't have an unused button, you could also set up a binding in your window manager or build a big red outboard USB "eval" button or something.
If you make a new ssh key and try to use it with ssh -i while running ssh-agent, ssh tries
the agent first. You could end up using a key provided by the agent
instead of the one you specify. You can fix this without killing the agent. Use:
ssh -o IdentitiesOnly=yes -i ~/.ssh/your_new_key user@host
The most important part of picking a distribution
is thinking about where you will go for help, and
what distribution that source of help understands.
That's true if your source of help is a vendor,
a consultant, or a users group.
If you're getting into uses for Linux that are
different from those of your local user group,
it's more important to use a list of people like
you than just the geographically closest user
group. For example, if you're planning to set
up a Linux-based recording studio and your local
LUG is all about running web sites and playing Crimson Fields, you might want to get on the
Planet CCRMA mailing list, and get your Linux
distribution recommendations there.
If you have a script that uses ssh, here's something
to put at the beginning of the script to make sure
the necessary passphrase has already been entered, and the
remote host is reachable, before starting a time-consuming
operation such as an rsync.
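A sketch of that preamble, written as a function — the BatchMode check, the host name, and the messages are my choices for implementing the description above, not the original snippet:

```shell
# Preflight for a long ssh/rsync job: fail fast if no key is loaded in
# ssh-agent, or if the remote host can't be reached non-interactively.
ssh_preflight () {
    host="$1"
    ssh-add -l >/dev/null 2>&1 || {
        echo "no keys loaded in ssh-agent -- run ssh-add first" >&2
        return 1
    }
    ssh -o BatchMode=yes "$host" true || {
        echo "cannot reach $host without a prompt" >&2
        return 1
    }
}

# At the top of the backup script (host is a placeholder):
#   ssh_preflight backup.example.com || exit 1
#   rsync -az /data/ backup.example.com:/backups/data/
```

BatchMode=yes makes ssh fail instead of stopping to ask for a passphrase, which is exactly what you want before a long unattended transfer.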