Facebook, brown M&Ms, and skin in the game
12 January 2020
(update 20 Jan 2020: add link to Bill Fitzgerald's blog post.)
(update 18 Jan 2020: add embedded Tweet of fake McDonald's ad.)
This is the long answer to the question: why are you tweeting screenshots of stupid Facebook scams? On Twitter it might look like I'm just randomly talking shit about Facebook, but I do have a point here.
Here's an example of a Facebook Page whose owners uploaded my contact info without permission. It doesn't look like that's really their logo, either.
I didn't set out to look for scam advertisers on Facebook. I visited "Settings" → "Ads" there in order to send CCPA data deletion requests (CCPA letter, shell script) to some well-connected nodes in the surveillance marketing network. Hitting the cute little "Do Not Sell" button on content sites is a lot of effort for a little CCPA win, so the best CCPA strategy is to focus on the big players.
Those big players were among the companies I expected to have my info, and sure enough I found them in Facebook ad settings. So they're on my CCPA list. I don't expect companies like that to make the CCPA process easy, but I will do my part for the California creative boom of 2020.
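The letter-plus-shell-script workflow mentioned above can be sketched roughly like this. This is a hypothetical minimal version, not the author's actual script: the company list, output filenames, and letter wording are all placeholders.

```shell
#!/bin/sh
# Hypothetical sketch: generate one CCPA data-deletion request letter
# per company on a list, from a template. Company names and wording
# here are illustrative placeholders.

ccpa_letter() {
    company="$1"
    cat <<EOF
To: $company privacy team

Under the California Consumer Privacy Act, I request that you delete
the personal information you have collected about me, and that you
direct any service providers to do the same.
EOF
}

# One letter file per company, spaces in names turned into dashes.
for co in "Example Data Broker" "Example Ad Platform"; do
    ccpa_letter "$co" > "ccpa-$(printf '%s' "$co" | tr ' ' '-').txt"
done
```

From here it's a matter of looking up each company's privacy contact and sending the letters, which is exactly the tedium that makes focusing on a short list of big players the sensible strategy.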
Anyway, back to Facebook scams. While I was making my CCPA list, I also saw a bunch of Facebook advertisers like the Amazon one above, and this fake "Gap Inc." This one not only uploaded my info, but also got Facebook to let them match it against Facebook's existing user data, and re-sell the result.
What's the big deal, though? They're just regular scams, and I don't buy stuff off of Facebook anyway.
Here's why I think it matters. Obvious scams are a helpful way to see how well Facebook is enforcing its own policies on ads. Van Halen didn't really dislike brown M&Ms, but their contract for live shows included a section banning brown M&Ms backstage. The part about the brown M&M was buried in the middle of a bunch of technical and safety requirements for the show. If the band saw a brown M&M, it was a warning to check again for harder-to-find safety issues.
On Facebook, a lot of the worst problems are the hardest to see. Any halfway decent state-sponsored political misinformation operation is going to be effectively invisible to me, and to academic and NGO researchers, even with Facebook "ad transparency." The misinfo people have probably been uploading a bunch of variants on the same ad creative, to make it impractical to check it all. They have an inoffensive, generic name, use a US-based Facebook user with a clean account to be their point of contact, and carefully filter their Custom Audiences to key purple-state voters. And, as long as they don't tag their ads political, and take them down before anyone reports them as political, the ads won't be available afterward in Facebook's Ad Library. Bill Fitzgerald explains in a blog post summarizing a recent Twitter Q&A session.
Can we take Facebook's word for it that they're doing anything about sneaky, invisible state-sponsored misinformation? I doubt it—when they're serving a big bowl of brown M&Ms, in the form of obvious scams, to everyone who looks at their ad settings page. It's hard to believe that they take an invisible problem seriously when the visible problems that would get fixed as a side effect of addressing it are still there. (And, of course, when deceptive ads serve the company's own interests. Facebook is pushing an ambitious cryptocurrency scheme that depends on approval from US regulators, and the results of the 2020 election will decide who those regulators are.)
Facebook has two sets of rules for deceptive ads: the written rules that they show to media and the government, and the unwritten rules that they teach to their scam and misinfo advertisers by example. The unwritten rules, which encourage deceptive advertising, matter. The written rules, not so much.
Hypothetically, what would Facebook do if the company's true intent on misinformation matched the written ad policies it claims to enforce? It would deploy a few fairly basic "skin in the game" fixes.
No more credit card payments for advertising: invoices only, net 90. If an advertiser is so untrustworthy that Facebook doesn't even know whether they'll pay their invoice, then that advertiser is not trustworthy enough to put in front of a user.
Rewards for reporting violations. If a user reports an ad that violates a policy, and the advertiser gets kicked off, then let the user keep the ad money that came in from that advertiser. Include in the program the owners of email addresses and phone numbers that get added to Custom Audiences without consent. (And no, this does not incentivize users to deliberately post scam ads and report them, because they would just get their own money back.)
Notify advertisers when their ad ran on content, or next to another ad, that was later removed for a policy violation. Right now a lot of important brand safety issues are hidden, because advertisers can't see context. Give the legit advertisers the visibility they need to decide how much brand safety risk to take.
Top management at a large organization cannot micromanage for trust and quality. But they could, if they wanted to, set up the culture and incentives to make it important to all decision makers at the organization. Right now Facebook is set up to encourage and profit from deceptive advertising while imposing deceptive advertising's costs and risks on their users and on society in general.
All right, enough fun with zany scams, back to CCPA-ing any real company that put me in a Custom Audience.
I can’t believe Facebook approved this obviously fake McDonald’s ad pic.twitter.com/SRPtV0M2wF — Jane Manchun Wong (@wongmjane), January 12, 2020