I'm thankful that the sewing machine
was invented a long time ago, not today. If the
sewing machine were invented today, most sewing
tutorials would be twice as long, because all the
thread would come in proprietary cartridges, and you
would usually have to hack the cartridge to get the
type of thread you need in a cartridge that works
with your machine.
Tracking protection is still hard. You have to
provide good protection from third-party tracking,
which users generally don't want, without breaking
legit third-party services such as content delivery
networks, single sign-on systems, and shopping carts.
Protection is a balance, similar to the problem of
filtering spam while delivering legit mail. Just as
spam filtering helps enable legit email marketing,
tracking protection tends to enable legit advertising
that supports journalism and cultural works.
In the long run, just as we have seen with spam
filters, it will be more important to make protection
hard to predict than to run the perfect protection
out of the box. A spam filter, or browser, that
always does the same thing will be analyzed and
worked around. A mail service that changes policies
to respond to current spam runs, or an unpredictable
ecosystem of tracking protection add-ons
that browser users can install in unpredictable
combinations, is likely to be harder.
But most users aren't in the habit of installing
add-ons, so browsers will probably have to give them
a nudge, like Microsoft Windows does when it nags
the user to pick an antivirus package (or did last
time I checked.) So the decentralized way to catch
up to Apple could end up being something like:
When new tracking protection methods show up
in the privacy literature, quietly build the needed
browser add-on APIs to make it possible for
new add-ons to implement them.
Do user research to
guide the content and timing of nudges. (Some
prefer to be tracked, and should be offered a
chance to silence the warnings by affirmatively
choosing a do-nothing protection option.)
Help users share information about the pros and
cons of different tools. If a tool saves lots of
bandwidth and battery life but breaks some site's
comment form, help the user make the right choice.
Sponsor innovation challenges to incentivize
development, testing, and promotion of diverse
tracking protection tools.
Any surveillance marketer can install and test a
copy of Safari, but working around an explosion of
tracking protection tools would be harder. How to
set priorities when they don't know which tools will catch on?
What about adfraud?
Tracking protection strategies have to take adfraud
into account. Marketers have two choices for how to
deal with adfraud: flight to quality, or more
surveillance.
Flight to quality is better in the long run. But
it's a problem from the point of view of adtech
intermediaries because it moves more ad money to
high-reputation sites, and the whole point of adtech
is to reach big-money eyeballs on cheap sites.
Adtech firms would rather see surveillance-heavy
responses to adfraud. One way to help shift marketing
budgets away from surveillance, and toward flight
to quality, is to make the returns on surveillance
investments less predictable.
This is possible to do without making value
judgments about certain kinds of sites. If you like
a site enough to let it see your personal info,
you should be able to do it, even if in my humble
opinion it's a crappy site. But you can have this
option without extending to all crappy sites the
confidence that they'll be able to live on leaked
data from unaware users.
I have to admit that some people hate me, but I have to tell you something about hate. If sending an electronic advertisement through email warrants hate, then my answer to those people is "Get a life. Don't hate somebody for sending an advertisement through email." There are people out there that also like us.
According to spammers, spam filtering was just Internet
nerds complaining about something that regular users
actually like. But the spam debate ended when big
online services, starting with MSN, started talking
about how they build for their real users instead of
for Wallace's hypothetical spam-loving users.
If you missed the email spam debate,
don't worry. Wallace's talking
points about spam filters constantly get recycled by
surveillance marketers talking about tracking protection.
But now it's not email spam that users supposedly
crave. Today, the Interactive Advertising Bureau
tells us that users want ads that "follow them around"
from site to site.
Enough background. Just as the email spam debate
ended with MSN's campaign, the third-party
web tracking debate ended on June 5, 2017:
With Intelligent Tracking Prevention, WebKit strikes a balance between user privacy and websites’ need for on-device storage. That said, we are aware that this feature may create challenges for legitimate website storage, i.e. storage not intended for cross-site tracking.
Surveillance marketers come up with all kinds of
hypothetical reasons why users might prefer targeted
ads. But in the real world, Apple invests time
and effort to understand user experience. When Apple
communicates about a feature, it's because that
feature is likely to keep a user satisfied
enough to buy more Apple devices. We can't read their
confidential user research, but we can see what the
company learned from it based on how they communicate.
(Imagine for a minute that Apple's user research
had found that real live users are more like the
Interactive Advertising Bureau's idea of a user.
We might see announcements more like "Safari
automatically shares your health and financial
information with brands you love!" Anybody got one
of those to share?)
Saving an out-of-touch ad industry
Advertising supports journalism and cultural
works that would not otherwise exist.
It's too important not to save. Bob Hoffman asks:
[H]ow can we encourage an acceptable version of online advertising that will allow us to enjoy the things we like about the web without the insufferable annoyance of the current online ad model?
The browser has to be part of the answer. If the
browser does its job, as Safari is doing, it can
play a vital role in re-connecting users with legit
advertising—just as users have come to trust
legit email newsletters now that they have effective spam filtering.
Safari's Intelligent Tracking Prevention is not the
final answer any more than Paul Graham's "A plan
for spam" was
the final spam filter. Adtech will evade protection
tools just as spammers did, and protection will have
to keep getting better. But at least now we can
finally say debate over, game on.
Looks like the spawn of Privacy Badger and cookie
double-keying, designed to balance user protection
from surveillance marketing with minimal breakage of
sites that depend on third-party resources.
(Now all the webmasters will fix stuff to make it
work with Intelligent Tracking Prevention, which
makes it easier for other browsers and privacy tools
to justify their own features to protect users.)
Of course, now the surveillance marketers will rely
more on passive fingerprinting, and Apple has an
advantage there because there are fewer different
Safari-capable devices. But browsers need to fix fingerprinting too.
Apple does massive amounts of user research and
it's fun to watch the results leak through when
they communicate about features. Looks like they
have found that users care about being "followed"
from site to site by ads, and that users are still
pretty good at applied behavioral economics. The side
effect of tracking protection, of course, is that
it takes high-reputation sites out of competition
with the bottom-feeders to reach their own audiences,
so Intelligent Tracking Prevention is great news for
high-reputation sites.
Meanwhile, I don't get Google's weak "filter" plan.
Looks like a transparently publisher-hostile move
(since it blocks some potentially big-money
ads without addressing the problem of site
commodification), unless I'm missing something.
Benkler builds on the work of Ronald Coase, whose
The Nature of the Firm explains how transaction
costs determine when firms can be a more efficient
way to organize work than markets. Benkler adds
a third organizational model, peer production.
Peer production, commonly seen in open source
projects, is good at matching creative people to the work that fits them. As Benkler puts it:
As peer production relies on opening up access to resources for a relatively unbounded set of agents, freeing them to define and pursue an unbounded set of projects that are the best outcome of combining a particular individual or set of individuals with a particular set of resources, this open set of agents is likely to be more productive than the same set could have been if divided into bounded sets in firms.
Firms, markets, and peer production all have their
advantages, and in the real world, most productive
activity is mixed.
Managers in firms manage some production directly
and trade in markets for other production. This
connection in the firms/markets/peer production
tripod is as old as firms.
The open source software business is the second
connection. Managers in firms both manage software
production directly and sponsor peer production
projects, or manage employees who participate in them.
But what about the third possible connection between legs of the tripod?
Is it possible to make a direct connection between
peer production and markets, one that doesn't go
through firms? And why would you want to connect peer
production directly to markets in the first place? Not
because that's where the money is, but because markets
are a good tool for getting information out of people,
and projects need information. Stefan Kooths,
Markus Langenfurth, and Nadine Kalwey wrote, in
"Open-Source Software: An Economic Assessment":
Developers lack key information due to the absence of pricing in open-source software. They do not have information concerning customers’ willingness to pay (= actual preferences), based on which production decisions would be made in the market process. Because of the absence of this information, supply does not automatically develop in line with the needs of the users, which may manifest itself as oversupply (excessive supply) or undersupply (excessive demand). Furthermore, the functional deficits in the software market also work their way up to the upstream factor markets (in particular, the labor market for developers) and–depending on the financing model of the open-source software development–to the downstream or parallel complementary markets (e.g., service markets) as well.
Because the open-source model at its core deliberately rejects the use of the market as a coordination mechanism and prevents the formation of price information, the above market functions cannot be satisfied by the open-source model. This results in a systematic disadvantage in the provision of software in the open-source model as compared to the proprietary production process.
The workaround is to connect peer production
to markets by way of firms. But the more that
connections between markets and peer production
projects have to go through firms, the more chances
to lose information. That's not because firms
are necessarily dysfunctional (although most are,
in different ways). A firm might rationally choose
to pay for the implementation of a feature that they
predict will get 100 new users, paying $5000 each,
instead of a feature that adds $1000 of value for
1000 existing users, but whose absence won't stop
them from renewing.
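The trade-off in that example can be made concrete with a quick calculation. The dollar figures come from the paragraph above; the function names are mine, for illustration only:

```javascript
// Feature A: attracts 100 new users paying $5000 each -> new revenue to the firm.
// Feature B: adds $1000 of value for 1000 existing users who would renew
// anyway -> twice the total value created, but $0 of new revenue.

function firmRevenue(newUsers, pricePerUser) {
  return newUsers * pricePerUser;
}

function userValue(existingUsers, valuePerUser) {
  return existingUsers * valuePerUser;
}

const revenueA = firmRevenue(100, 5000); // 500000 -- what the firm sees
const valueB = userValue(1000, 1000);    // 1000000 -- value the market never prices

console.log(revenueA, valueB); // 500000 1000000
```

The rational-for-the-firm choice leaves the larger pool of user value invisible, which is exactly the information loss the paragraph describes.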
Some ways to connect peer production to markets
are already working. Crowdfunding services for software
are furthest along, mostly offering support for
developers who have already built a reputation.
A decentralized form of connection is the token,
which Balaji S. Srinivasan describes as a tradeable
version of API keys. If I believe that your network
service will be useful to me in the future, I can
pre-buy access to it. If I think your service will
really catch on, I can buy a bunch of extra tokens
and sell them later, without needing to involve you.
(and if your service needs network effects, now I
have an incentive to promote it, so that there will
be a seller's market for the tokens I hold.)
Dominant assurance contracts, proposed by Alexander Tabarrok, build on the crowdfunding
model, with the extra twist that the person proposing
the project has to put up some seed money that
is divided among backers if the project fails to
secure funding. This is supposed to bring in extra
investment early on, before a project looks likely
to meet its goal.
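The payout mechanics can be sketched in a few lines. This is a toy model of the mechanism described above, with made-up numbers; the function and field names are my own:

```javascript
// Dominant assurance contract sketch: the proposer escrows seed money.
// If the project fails to reach its goal, pledges are returned and the
// seed is divided among the backers, so early backers win either way.

function settle(goal, pledges, seed) {
  const total = pledges.reduce((a, b) => a + b, 0);
  if (total >= goal) {
    return { funded: true, refundPerBacker: 0 };
  }
  // Failure case: each backer gets their pledge back plus a share of the seed.
  return { funded: false, refundPerBacker: seed / pledges.length };
}

// Example: $1000 goal, three $100 pledges, $90 seed.
console.log(settle(1000, [100, 100, 100], 90));
// -> { funded: false, refundPerBacker: 30 }
```

The point of the design is visible in the failure branch: backing a doomed project still pays, which is what is supposed to bring in investment before a project looks likely to succeed.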
What happens when the software industry is forced to grow up?
I'm starting to think that finishing the tripod,
with better links from markets to peer production,
is going to matter a lot more soon, because of the
software quality problem.
Today's software, both proprietary and open
source, is distributed under ¯\_(ツ)_/¯ terms.
"Disclaimer of implied warranty of merchantability" is
lawyer-speak for "we reserve the right to half-ass our
jobs lol." As Zeynep Tufekci wrote in the New York
Times, "The World Is Getting Hacked. Why Don’t We Do More
to Stop It?" At some point the users are going to
get fed up, and we're going to have to do better. An industry
as large and wealthy as software, still sticking to
Homebrew Computer Club-era disclaimers, is like a
40-something-year-old startup bro doing crimes and
claiming that they're just boyish hijinks. This whole
disclaimer of implied warranty thing is making us
look stupid, people. (No, I'm not for warranties
on software that counts as a scientific or technical
communication, or on bona fide collaborative development,
but on a product product? Come on.)
Grown-up software liability policy is coming,
but we're not ready for it. Quality software
is not just a technically hard problem. Today,
we're set up to move fast,
break things, and ship dancing pigs—with incentives
more powerful than incentives to build secure
software. Yes, you get the occasional DARPA program
or tool to facilitate incremental improvement,
but most software is incentivized through too many
layers of principal-agent problems. Everything is broken.
If governments try to fix software liability before
the software scene can fix the incentives problem,
then we will end up with a stifled, slowed-down
software scene, a few incumbent software companies
living on regulatory capture, and probably not much
real security benefit for users. But what if users
(directly or through their insurance companies) are
willing to pay to avoid the costs of broken software,
in markets, and open source developers are willing
to participate in peer production to make quality
software, but software firms are not set up to connect the two?
What if there is another way to connect the "I would
rather pay a little more and not get h@x0r3d!" demand
to the "I would code that right and release it in open
source, if someone would pay for it" supply?
In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”
Bob Hoffman makes a good case for getting rid of user
tracking in web advertising. But in order to take the next
steps, and not just talk among ourselves about things
that would be really great in the future, we first
need to think about the needs that tracking seems to
satisfy for legit marketers.
What I'm not going to do is pull out the
argument that's in every first comment on
every blog post that criticizes tracking: that tracking is
just technology and is somehow value-neutral.
Tracking, like all technologies, enables some kinds
of activity better than others. When tracking offers
marketers the opportunity to reach users based on
who the user is rather than on what they're reading,
watching, or listening to, then ad money can follow the audience away from the sites that attracted it.
But if tracking is so bad, then why, when you go to
any message board or Q&A site that discusses marketing
for small businesses, is everyone discussing those
nasty, potentially civilization-extinguishing targeted
ads? Why is nobody popping up with a question on how
to make the next They Laughed When I Sat Down At the Piano ad?
Targeted ads are self-serve and easy to
get started with. If you have never bought
a Twitter or Facebook ad, get out your credit
card and start a stopwatch. These ads might be
weak on signal, but they have the lowest time investment of any legit
marketing project, so probably the only marketing
project that time-crunched startups can do.
Targeted ads keep your OODA loop tight. Yes,
running targeted ads can be addictive—if
you thought the attention slot machine
on social sites was bad, try the advertiser
dashboard. But you're able to use them to
learn information that can help with the rest
of marketing. If you have the budget to exhibit
at one conference, compare Twitter ads targeted
to attendees of conference A with ads targeted to
attendees of conference B, and you're closer to knowing which one is worth it.
Marketing has two jobs: sell stuff to customers
and sell Marketing to management. Targeting is
great for the second one, since it comes with the
numbers that will help you take credit for results.
We're not going to be able to get rid of risky
tracking until we can understand the needs that it
fills, not just for big advertisers who can afford the
time and money to show up in Cannes every year, but
for the company founder who still has $1.99 business
cards and is doing all of Marketing themselves.
(The party line among web privacy people
can't just be that GDPR is going to
save us because the French powers that be
are all emmerdés ever since the surveillance/shitlord complex
tried to run a US-style game on their political
system. That might sound nice, but put not your trust
in princes, man. Even the most arrogant Eurocrats in
the world will not be able to regulate indefinitely
against all the legit business people in their
countries complaining that they can't do something
they see as essential. GDPR will be temporary air cover
for building an alternative, not a fix in itself.)
Post-creepy web advertising is still missing some key features.
Quick, low-risk service. With the exception
of the Project Wonderful model,
targeted ads are quick and low-risk,
while signal-carrying ads are the opposite.
A high-overhead direct ad sales process is not a
drop-in replacement for an easy web form.
I don't think that's all of them. But I don't
think that the move to post-creepy web advertising
is going to be a rush, all at once, either.
Brands that have fly-by-night low-reputation
competitors, brands that already have many
tracking-protected customers, and brands with
solid email lists are going to be able to move
faster than marketers who are still making tracking
work. More: Work together to fix web ads?
I'm still two steps behind in devops
coolness for my network stuff. I don't even
have proper configuration management, and
that's fine because Configuration Management is an
antipattern now. Anyway, I still log in and actually run shell
commands on the server, and the LWN review of
mosh was helpful
to me. Now using mosh for connections that persist
across suspending the laptop and moving it from
network to network. More info: Mosh: the mobile shell.
An end date for IP Maximalism
When did serious "Intellectual Property Maximalism"
end? I'm going to put it at September 18,
which is the date that the Gates Foundation announced
funding for the Public Library of Science's
journal PLoS Neglected Tropical Diseases.
When it's a serious matter of people's health,
open access matters, even to the author of "Open
Letter to Hobbyists". Since then, IP Maximalism
stories have been mostly about rent-seeking
behavior, which had been a big part of the
freedom lovers's point all along. (Nobody quoted in this
story is pearl-clutching about "innovation",
for example: Supreme Court ruling threatens to
shut down cottage industry for small East Texas towns.)
Is it just me, or does it look to anyone else like the
man in the photo is checking the list of third-party
web trackers on the site to see who he can send a
National Security Letter to?
Could a US president who is untrustworthy
enough to be removed from office possibly be trustworthy
enough to comply with his side of a "Privacy Shield" deal?
If it's necessary for the rest of the world to
free itself of its dependence on the U.S.,
does that apply to US-based Internet companies that
have become a bottleneck for news site ad revenue,
and how is that going to work?
If you're "verified" on
Twitter, you probably miss these, so I'll just
use my Fair Use rights to share that one with you.
Twitter is a uniquely influential medium, one that
shows up on the TV news every night and on news
sites all day. But somehow, the plan to make money
from Twitter is to run the same kind of targeted ads
that anyone with a WordPress site can. And the latest
Twitter news is a privacy update that includes, among
other things, more tracking of users from one site to another.
Yes, the same kind of thing that Facebook already
does, and better, with more users. And the same kind
of thing that any web site can already get from an
entire Lumascape of companies. Boring.
If you want to stick this kind of ad on your
WordPress site, you just have to cut and paste some
ad network HTML—not build out a deluxe office
space on Market Street in San Francisco the way
Twitter has. But the result is about the same.
What makes Twitter even more facepalm-worthy is that
they make a point of not showing the ads to the
influential people who draw attention to Twitter to
start with. It's like they're posting a big sign
that says STUPID AD ZONE: UNIMPORTANT PEOPLE ONLY.
Twitter is building something unique, but they're
selling generic impressions that advertisers can get anywhere.
So as far as I can tell, the Twitter business model
is something like:
Money out: build something unique and expensive.
Money in: sell the most generic and shitty
thing in the world.
Facebook can make this work
because they have insane numbers of
eyeball-minutes. Chump change per minute on Facebook still adds
up to real money. But Facebook is an outlier
on raw eyeball-minutes, and there aren't enough
minutes in the day for another. So Twitter
is on track to get sold for $500,000, like Digg was.
Which is good news for me because I know enough
Twitter users that I can get that kind of money together.
So why should you help me buy Twitter when you
could just get the $500,000 yourself? Because I have
a secret plan, of course. Twitter is the site that
everyone is talking about, right? So run the ads
that people will talk about. Here's the plan.
Sell one ad per day. And everybody sees the same one.
Sort of like the back cover of the magazine that
everybody in the world reads (but there is no such
magazine, so that's why this is an opportunity.)
No more need to excuse the verified users from
the ads. Yes, an advertiser will have to provide a
variety of sizes and localizations for each ad (and
yes, Twitter will have to check that the translations
match). But it's the same essential ad, shown to
every Twitter user in the world for 24 hours.
No point trying to out-Facebook Facebook
or out-Lumascape the Lumascape.
Targeted ads are weak on signal,
and a bunch of other companies are doing them more
cost-effectively and at higher volume, anyway.
Of course, this is not for everybody. It's for
brands that want to use a memorable, creative ad to
try for the same kind of global signal boost that
a good Tweet® can get. But if you want generic
targeted ads you can get those everywhere else on the
Internet. Where else can you get signal? In order
to beat current Twitter revenue, the One Twitter
Ad needs to go for about the same price as a Super
Bowl commercial. But if Twitter stays influential,
that's reasonable, and I make back the 500 grand and a lot more.
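A rough back-of-the-envelope check on that claim. The revenue and Super Bowl figures below are my own round-number assumptions, not from the post:

```javascript
// Assumptions (rough, for illustration): roughly $2B/year in Twitter ad
// revenue, and roughly $5M for a 30-second Super Bowl spot.

const annualAdRevenue = 2e9; // assumed
const superBowlSpot = 5e6;   // assumed

// One ad per day, every day, has to carry the whole ad business.
const pricePerDailyAd = annualAdRevenue / 365;

console.log(Math.round(pricePerDailyAd));     // about 5.5 million per day
console.log(pricePerDailyAd > superBowlSpot); // true: Super Bowl territory
```

Under those assumptions, the One Twitter Ad does land in the same price range as a Super Bowl commercial, which is what the comparison above suggests.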
Internet users have been asking what they can do to protect their own data from this creepy, non-consensual tracking by Internet providers—for example, directing their Internet traffic through a VPN or Tor. One idea to combat this that’s recently gotten a lot of traction among privacy-conscious users is data pollution tools: software that fills your browsing history with visits to random websites in order to add “noise” to the browsing data that your Internet provider is collecting.
[T]here are currently too many limitations and too many unknowns to be able to confirm that data pollution is an effective strategy at protecting one’s privacy. We’d love to eventually be proven wrong, but for now, we simply cannot recommend these tools as an effective method for protecting your privacy.
This is one of those "two problems one solution" situations.
The problem for makers and users of "data
pollution" or spoofing tools is QA. How do you
know that your tool is working? Or are surveillance
marketers just filtering out the impressions
created by the tool, on the server side?
The problem for companies using so-called Non-Human
Traffic (NHT) is that when users discover
NHT software (bots), the users tend to remove
it. What would make users choose to participate
in NHT schemes so that the NHT software can run for
longer and build up more valuable profiles?
So what if the makers of spoofing tools could get a
live QA metric, and NHT software maintainers could
give users an incentive to install and use their software?
NHT market as a tool for discovering information
Imagine a spoofing tool that offers an easy way
to buy bot pageviews, I mean buy Perfectly
Legitimate Data on how fast a site loads from various
home Internet connections. When the tool connects
to its server for an update, it gets a list of URLs
to visit—a mix of random sites, popular sites,
and paying customers.
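A sketch of what the client side of that update exchange might look like. The function name, the example URLs, and the shuffle strategy are all my invention; the point is just that paid-for visits get interleaved with the rest so they don't stand out:

```javascript
// Build the visit list a spoofing tool's server might hand back:
// a mix of random sites, popular sites, and paying customers' pages.

function buildVisitList(randomSites, popularSites, customerPages) {
  const list = [...randomSites, ...popularSites, ...customerPages];
  // Fisher-Yates shuffle so customer URLs are mixed in with the decoys.
  for (let i = list.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [list[i], list[j]] = [list[j], list[i]];
  }
  return list;
}

const urls = buildVisitList(
  ["https://example-random.test/"],
  ["https://example-popular.test/"],
  ["https://example-customer.test/speed-check"]
);
console.log(urls.length); // 3
```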
Now the spoofing tool maintainer will be able to
tell right away if the tool is really generating
realistic traffic, by looking at the market price
of pageviews. The maintainer will even be able to
tell whose tracking the tool can beat, by looking
at which third-party resources are included on the
pages getting paid-for traffic.
The money probably won't be significant, since real
web ad money is moving to whitelisted, legit sites
and away from fraud-susceptible schemes anyway, but
in the meantime it's a way to measure effectiveness.
Setting up a couple of Linux systems to work
on FilterBubbler, which is one of the things that I'm up to at work.
FilterBubbler is a WebExtension,
and the setup instructions use web-ext,
so I need NPM. In order to keep all the NPM stuff
under my own home directory, but still put the
web-ext tool on my $PATH, I need to make one-line
edits to three files.
One line in ~/.npmrc
prefix = ~/.npm
One line in ~/.gitignore, to keep the ~/.npm directory out of version control
One line in ~/.bashrc, to put ~/.npm/bin on my $PATH
(My ~/.bashrc has a bunch of export PATH= lines
so that when I add or remove one it's more likely to
get a clean merge. Because home directory in git.) I
think that's it. Now I can do
npm install --global web-ext
with no sudo or mess. And when I clone my home directory on another system it will just work.
(This is an answer to a question on
Twitter is the new blog comments (for now) and I'm
more likely to see comments there than to have time
to set up and moderate comments here.)
Adfraud is an easy way to make mad cash, adtech is
happily supporting it, and it all works because the
system has enough layers between CMO and fraud hacker
that everybody can stay as clean as they need to.
Users bear the privacy risks of adfraud, legit publishers pay for it,
and adtech makes more money from adfraud than fraud
hackers do. Adtech doesn't have to communicate or
coordinate with adfraud, just set up a fraud-friendly
system and let the actual fraud hackers go to work.
Bad for users, people who make legit sites, and civilization in general.
But one piece of good news is that adfraud can change
quickly. Adfraud hackers don't have time to get stuck
in conventional ways of doing things, because adfraud
is so lucrative that the high-skill players don't have
to stay in it for very long. The adfraud hackers
who were most active last fall have retired to run their
resorts or recording studios or wineries or whatever.
So how can privacy tools get a piece of the action?
One random idea is for an obfuscation tool
to participate in the market for so-called sourced traffic.
Fraud hackers need real-looking traffic and are
willing to pay for it. Supplying that traffic is
sketchy but legal. Which is perfect, because put
one more layer on top of it and it's not even sketchy.
And who needs to know if they're doing a good job
at generating real-looking traffic? Obfuscation tool
maintainers. Even if you write a great obfuscation
tool, you never really know if your tricks for helping
users beat surveillance are actually working, or if
your tool's traffic is getting quietly identified on
the server side.
In proposed new privacy tool model, outsourced QA pays YOU!
Set up a market where a Perfectly Legitimate Site
that is looking for sourced traffic can go to buy
pageviews, I mean buy Perfectly Legitimate Data on
how fast a site loads from various home Internet
connections. When the obfuscation tool connects to
its server for an update, it gets a list of URLs
to visit—a mix of random sites, popular sites, and paying customers.
Set a minimum price for pageviews that's high enough
to make it cost-ineffective for DDoS. Don't allow it
to be used on random sites, only those that the buyer
controls. Make them put a secret in an unlinked-to
URL or something. And if an obfuscation tool isn't
well enough sandboxed to visit a site that's doing
traffic sourcing, it isn't well enough sandboxed to
surf the web unsupervised at all.
Now the obfuscation tool maintainer will be able
to tell right away if the tool is really generating
realistic traffic, by looking at the market price.
The maintainer will even be able to tell whose
tracking the tool can beat, by looking at which
third-party resources are included on the pages
getting paid-for traffic. And the whole thing can be
done by stringing together stuff that IAB members are
already doing, so they would look foolish to complain.
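The "whose tracking the tool can beat" readout described above can be sketched as a pure function. The data shapes and names here are assumptions, not any real adtech API:

```javascript
// If pages carrying a given third-party tracker still accept paid-for
// pageviews, that tracker is evidently not filtering the tool's traffic
// on the server side -- i.e., the tool "beats" it.

function beatenTrackers(pages) {
  const beaten = new Set();
  for (const page of pages) {
    if (page.paidTrafficAccepted) {
      for (const tracker of page.thirdPartyTrackers) {
        beaten.add(tracker);
      }
    }
  }
  return [...beaten].sort();
}

const pages = [
  { thirdPartyTrackers: ["tracker-a.example"], paidTrafficAccepted: true },
  { thirdPartyTrackers: ["tracker-b.example"], paidTrafficAccepted: false },
];
console.log(beatenTrackers(pages)); // [ 'tracker-a.example' ]
```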
If you want people on the Internet to argue with you, say that you're making a statement about values.
If you want people to negotiate with you, say that you're making a statement about business.
If you want people to accept that something is inevitable, say that you're making a statement about technology.
The mixup between values arguments, business
arguments, and technology arguments might be
why people are confused about Brands need to fire
adtech, by Doc Searls.
The set of trends that people call adtech is a
values-driven business transformation that is trying
to label itself as a technological transformation.
Some of the implementation involves technological
changes (NoSQL databases! Nifty!) but fundamentally
adtech is about changing how media business is
done. Adtech does have a set of values, none
of which are really commonly held even among people in
the marketing or advertising field, but let's not make
the mistake of turning this into either an argument
about values (that never accomplishes anything)
or a set of statements about technology (that puts
those with an inside POV on current technology at an
unnecessary advantage). Instead, let's look at the
business positions that adtech is taking.
Adtech stands for profitable
platforms, with commodity producers
of news and cultural works. Michael
Tiffany, CEO of advertising security firm White Ops,
said: The fundamental value proposition of these ad tech
companies who are de-anonymizing the Internet is,
Why spend big CPMs on branded sites when I can
get them on no-name sites? This is not a
healthy situation, but it's a chosen path, not a
technologically inevitable one.
Adtech stands for making advertisers
support criminal and politically heinous
activity. I'll just let Bob Hoffman explain that one.
Fraudulent and brand-unsafe content is just the
overspray of the high value platforms/commoditized
content system, and advertisers have to accept
it in order to power that system. Or do they?
People have a lot of interesting decisions to make:
policy, contractual, infrastructural, and client-side.
When we treat the adtech movement as simply
technology, we take the risk of missing great
opportunities to negotiate for the benefit of brands,
publishers, and the audience.
This is a brand new blog, so I'm setting up
the basics. I just realized that I got the
whole thing working without a single script,
image, or HTML table. (These kids today
have it easy, with their media queries and CSS.)
One big question that I'm wondering about is: how many of the people
who visit here are using some kind of protection
from third-party tracking? Third-party tracking
has been an unfixed vulnerability in web browsers
for a long time. Check out the Unofficial Cookie
FAQ from 1997.
Third-party cookies are in there...and we're
still dealing with the third-party tracking problem?
In order to see how bad the problem is on this site,
I'm going to set up a little bit of first-party
data collection to measure people's vulnerability to
third-party data collection.
The three parts of that big question are:
Can a third-party tracker see state from other sites?
All the test does is swap out the tracking image
source three times: once at page load, when the
setup script first sets the image's src; once when
the Aloodo tracking script loads, which shows that
this browser isn't blocking the script; and once
when the Aloodo script confirms that tracking is
possible.
The work is done in the setupAloodo function,
which runs after the page loads. First, it sets the
src for the tracking pixel to js.png, then sets
up two callbacks: one to run after the Aloodo script
is loaded, and switch the image to ld.png, and
one to run if the script can track the user,
and switch the image to td.png.
Step three: check the logs
Now I can use the regular server logs to compare
the number of clients that load the original image
with the number that load the two tracking images.
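Counting those requests can be a one-liner per
image. A sketch, where the access.log filename and
combined log format are assumptions; the image names
come from the setup described above:

```shell
# count_stage IMAGE LOGFILE: how many requests for IMAGE appear in LOGFILE
count_stage() {
    grep -c "GET [^\"]*$1" "$2"
}

# Usage, assuming a combined-format access.log in the current directory:
if [ -f access.log ]; then
    for img in js.png ld.png td.png; do
        printf '%s\t%s\n' "$img" "$(count_stage "$img" access.log)"
    done
fi
```

Comparing the js.png count with the td.png count
gives a rough idea of how many visitors are
trackable.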
Metalsmith is pretty fun. The basic pipeline from
the article seems to work pretty well, but I ran
into a couple of issues. I might have solved these
in ways that are completely wrong, but here's what
works for me.
First, I needed to figure out how to get text from
an earlier stage of the pipeline. My Metalsmith
build is pretty basic:
turn Markdown into HTML (plus article metadata), then
apply a template to turn the HTML version into
a complete page.
That's great, but the problem seems to be with getting
a copy of just the HTML from step 1 for building the
index page and the RSS feed. I don't
want the entire HTML page from step 2, just the inner
HTML from step 1.
The solution seems to be an extra pipeline step.
It doesn't actually strip off the template, just
lets you capture a copy of the HTML before
templatization. It goes into the pipeline after the
"markdown" step but before the "layouts" step.
Select Browse, then Browse Local, then select the .qcow2 file.
That's it. Now I'm looking at a virtual MS-Windows guest
that I can use for those troublesome web conferences
(and for testing web sites under MSIE. If you try
the tracking test,
it should take you to a protection page that prompts
you to turn on the EasyPrivacy Tracking Protection
List. That's a quick and easy way to speed up your
web browsing experience on MSIE.)
Andrew Cowie has written something
The main thing that this one does differently is to
ask make which files matter to it, instead of doing
an inotifywatch on the whole directory.
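The approach can be sketched in a few lines of
shell. This is just my own guess at how to do it,
not Andrew's code: it assumes GNU make and
inotify-tools, and the way it pulls prerequisite
names out of the `make -qp` database output is a
rough heuristic.

```shell
# prereqs: read `make -qp` database output on stdin,
# print the prerequisite names it finds, one per line
prereqs() {
    sed -n 's/^[^#:= ][^:=]*: \(..*\)$/\1/p' | tr ' ' '\n' | sort -u
}

# Main loop: rebuild, then block until a file make cares about changes.
# (Commented out so the sketch can be sourced safely; "all" is an
# assumed target name.)
# while :; do
#     make all
#     inotifywait -qq -e close_write $(make -qp 2>/dev/null | prereqs)
# done
```

The point of asking make first is that inotifywait
then watches only the handful of files that can
trigger a rebuild, instead of every file in the tree.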
The process is going to be a little different from
what you might be used to with another OS. If you
shop carefully (and reading blogs is a good first
step) then the drivers you will need are already
available through your Linux distribution's printer
HP has done a good job with enabling this.
The company has already released the necessary
printer software as open source, and your Linux
distribution has already installed it. So, go to
the list of printers fully supported by the HPLIP
software, pick a printer you like, and you're done.
If you want a recommendation from me, the
HP LaserJet 3055,
a black and white all-in-one device,
has worked fine for me with various Linux setups
for years. It's also a scanner/copier/fax machine,
and you get the extra functionality for not much more
than the price of a regular printer. It also comes
with a good-sized toner cartridge, so your cost per
page is probably going to be pretty reasonable.
Other printer brands have given me more grief, but
fortunately the HP LaserJets are widely available
and don't jam much.
It's important not to show a smug expression on your
face while printing if users of non-Linux OSs are
still dealing with driver CDs or vendor downloads.
When you give travel directions, you include
landmarks, and "gone too far" points. Turn left after
you cross the bridge. Then look for my street and
make a right. If you go past the water tower you've
gone too far.
System administration instructions are much easier
to follow if they include those kinds of check-ins,
too. For example, if you explain how to set
up server software you can put in quick "landmark"
tests, such as, "at this point, you can run nmap and
see the port in the results." You can also include
"gone too far" information by pointing out problems
you can troubleshoot on the way.
A full-scale troubleshooting guide is a good idea,
but quick warning signs as you go along are helpful.
Much better than finding yourself lost at the end of
a long set of setup instructions.
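One way to keep those check-ins uniform is a tiny
helper. This is just a sketch of the idea, not from
any particular tool; the nmap example matches the
one in the text:

```shell
# landmark NAME CHECK...: run a quick check and say whether you're on course
landmark() {
    name=$1; shift
    if "$@" >/dev/null 2>&1; then
        echo "landmark reached: $name"
    else
        echo "gone too far? failed check: $name" >&2
        return 1
    fi
}

# Example landmark from the text (assumes nmap is installed):
# landmark "web port open" sh -c 'nmap -p 80 localhost | grep -q "80/tcp open"'
```

Sprinkling a few of these through a setup script
gives the reader the same "turn left after the
bridge" confidence as good travel directions.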
doesn't accept dotted quads for ranges, but
fortunately most of the commands that accept an IP
address will also take it in the form of a regular
decimal. (Spammers used to use this to hide their
naughty domains from scanners that only looked for
the dotted quad, while the browser would happily go
to the decimal address.)
So here's an ugly-ass shell function to convert an
IP address to a decimal. If you have a better one,
please let me know and I'll update this page. (Yes,
I know this would be one line in Perl.)
dq2int() {
    if echo "$1" | grep -q '\.'; then
        # dotted quad: split on the dots and recurse on the four octets
        dq2int $(echo "$1" | tr '.' ' ')
    elif [ $# -eq 1 ]; then
        echo "$1"    # one number left: done
    else
        # fold the next octet into the running total
        total=$1; next=$2; shift 2
        dq2int $(($total*2**8+$next)) "$@"
    fi
}
It says "Personal and Confidential" or "IMPORTANT
CORRESPONDENCE REGARDING YOUR OVERPAYMENT" on the
envelope—can you really discard it without
opening it? You sure can. Some junk mailers disguise
their mail pieces as important correspondence from
companies you actually do business with, and the
USPS helped them out a lot by renaming "Bulk Mail"
to "Standard Mail". But you can look at the postage
to discard "stealth" junk mail without opening it.
The USPS rules say that any bills or mail containing
specific information about your business relationship
with the company must be mailed First Class.
So, if "Standard Mail" or "STD" appears in the upper
right corner, it's not a bill, it's not your new
credit card, and it's not a check. It's just sneaky
junk mail.
All that is really needed on computers
is a "Calculate" button or omnipresent menu command
that allows you to take an arithmetic expression,
like 248.93 / 375, select it, and do the calculation
whether in the word processor, communications package,
drawing or presentation application or just at the
Fortunately, there's a blue "Access
IBM" button on this keyboard that doesn't do much.
So, I configured tpb to make
"Access IBM" do this:
If you want to do this, besides tpb, you'll need xsel and xte, which is part of xautomation. If you don't have an unused button, you could also set up a binding in your window manager or build a big red outboard USB "eval" button or something.
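The script behind the button can be very small.
A sketch, where calc() itself is plain awk; the
xsel and xte lines are commented out because they
need a running X session, and you shouldn't point
this at untrusted selected text, since the
expression goes straight into an awk program:

```shell
# calc EXPR: evaluate an arithmetic expression with awk
calc() {
    awk "BEGIN { printf \"%g\", $1 }"
}

# What the button binding does: read the X selection,
# then type the result back after it.
# selection=$(xsel -o)
# xte "str  = $(calc "$selection")"
```

So selecting 248.93 / 375 and pressing the button
types " = 0.663813" right after it.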
If you make a new ssh key and try to use it with ssh -i while running ssh-agent, ssh tries
the agent first. You could end up using a key provided by the agent
instead of the one you specify. You can fix this without killing
the agent. Use:
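The option is IdentitiesOnly, which tells ssh to
offer only the identities given on the command line
or in the config, not whatever the agent holds.
The key path and host here are placeholders:

```shell
# Use exactly this key, ignoring any keys the agent offers first.
ssh -o IdentitiesOnly=yes -i ~/.ssh/new_key user@example.com
```

If you use the key often, putting IdentitiesOnly
and IdentityFile in a Host block in ~/.ssh/config
saves retyping it.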
The most important part of picking a distribution
is thinking about where you will go for help, and
what distribution that source of help understands.
That's true if your source of help is a vendor,
a consultant, or a users group.
If you're getting into uses for Linux that are
different from those of your local user group,
it's more important to use a mailing list of people
like you than just the geographically closest user
group. For example, if you're planning to set
up a Linux-based recording studio and your local
LUG is all about running web sites and playing Crimson Fields, you might want to get on the
Planet CCRMA mailing list, and get your Linux
distribution recommendations there.
If you have a script that uses ssh, here's something
to put at the beginning of the script to make sure
the necessary passphrase has already been entered, and the
remote host is reachable, before starting a time-consuming
operation such as an rsync.
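A sketch of that check, using BatchMode so ssh
fails instead of prompting; the host name and paths
in the usage comment are placeholders:

```shell
# ssh_ready HOST: succeed only if we can log in without any prompting
ssh_ready() {
    ssh -o BatchMode=yes -o ConnectTimeout=5 "$1" true
}

# At the top of the backup script:
# ssh_ready backup.example.com || { echo "ssh not ready; run ssh-add?" >&2; exit 1; }
# rsync -a /srv/data/ backup.example.com:/srv/data/
```

With BatchMode on, a missing passphrase or an
unreachable host shows up as a quick failure at the
top of the script, not a hung rsync halfway through.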