some ways that Facebook ads are optimized for deceptive advertising
29 December 2020
(updated 14 Nov 2021: deception avoidance and value exchange)
(updated 20 Sep 2021: more material on Ad Library, problem categories, links)
(updated 7 Jan 2021: added intermediary for Custom Audiences trick)
Why are there so many scam ads on Facebook? The over-simplified answer is that Facebook just doesn't have enough ad reviewers for the number of ads they get. Since basically anyone with a credit card can advertise, and advertisers have access to tools for making huge numbers of ad variations, then of course lots of scam ads are going to get through.
Facebook is also more attractive to scammers than other ad media. Deceptive advertisers already get more value from highly targetable ad media than honest advertisers do, because targeting gives the deceptive advertiser an additional benefit. Besides helping to reach possible buyers, a deceptive advertiser can also use targeting to avoid enforcers of laws and norms.
Understaffing and targeting are only parts of the story, though. It's not so much that Facebook is uninterested in dealing with scams; it's as if their ad system in general were the result of a cooperative software development project with the scammers. (Do Facebook and their scam advertisers constitute an "enterprise" for purposes of RICO? I don't know. It might be worth asking your lawyer if you got scammed or impersonated, though.) Some of the deliberate design decisions that went into Facebook ads make things easier for deceptive advertisers at the expense of users and legit advertisers.
Custom Audiences don't support list seeding. Before Facebook, every direct marketing medium supported "seed" records, which look like ordinary records but get delivered back to the list owner or someone they know, so that the owner can monitor usage of the list. (I used them for a biotech company's postal and email lists, even though we never sold or shared the list. Just to be on the safe side.) Using seed records is a basic direct marketing best practice and deters people who might see your list from misusing it.
Facebook Custom Audiences are a way for scammers to use a stolen list without detection. Facebook Ad Settings lets a user see if they personally are in someone else's Custom Audience, but there's no way for a list owner to check if the seed records from their list ended up on one. Someone who steals a mailing list can sneak it into a new Custom Audience without getting caught by the list owner. Legit direct marketers who want to protect their lists would pay for the ability to use seed accounts on Facebook, but this functionality would interfere with Facebook's support for scam advertisers, so they don't offer it, or even allow anyone else to provide seed accounts. (A limited number of Test Users are allowed for app development, but these are not usable as seeds. Facebook uses the term "seeds" differently from the conventional meaning, to mean the starting names for a Lookalike Audience.)
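As a sketch of how the conventional seeding practice works (the record format and function names here are hypothetical, not any real list-management API):

```python
import secrets

def add_seeds(mailing_list, seed_addresses):
    """Mix owner-controlled 'seed' records into a mailing list.

    Seeds look like ordinary records, but any mail that reaches them
    tells the list owner the list was used -- and by whom, if each
    party that receives a copy of the list gets different seeds.
    """
    seeded = list(mailing_list)
    for addr in seed_addresses:
        # Insert at a random position so seeds aren't trivially spotted.
        seeded.insert(secrets.randbelow(len(seeded) + 1),
                      {"email": addr, "seed": True})
    return seeded

def seeds_present(received_list, seed_addresses):
    """Check whether a list that came back to us contains our seeds."""
    emails = {rec["email"] for rec in received_list}
    return any(addr in emails for addr in seed_addresses)
```

A list owner who spots their seeds in someone else's mailing knows the list leaked. The point above is that Custom Audiences give the owner no equivalent check.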
Users can be blocked from seeing the company that really controls the targeting lists that they're on. Suppose that a dishonest advertiser wants to use a California resident's PII, but they don't want to have to honor CCPA opt-outs or register with the state. Facebook's ad settings do offer some transparency, and let users see who has uploaded their info. But the dishonest advertiser can simply send the hashed versions of the PII on their list to an intermediary firm, and have that firm transfer the hashed PII to Facebook. Now when someone who is on the list goes to "Advertisers using your activity or information" on Facebook, they see the name of the intermediary firm instead. Even if a bunch of people on the list do opt out, the deceptive advertiser's own copy of the list is intact. When they switch to a different intermediary firm later, there are no opt-outs associated with the list.
This also seems to be a good way for extremely suspicious-looking advertisers to hide from people who might report or investigate them. If I check Facebook for exclusion lists used by scammers who think I might report them, I see only the name of a generic-sounding targeted ad company, not the actual dishonest Facebook page.
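The mechanics are simple because Custom Audience uploads are matched on hashes of normalized PII. A minimal sketch, assuming the commonly documented normalization for email addresses (trim whitespace, lowercase) followed by SHA-256:

```python
import hashlib

def normalize_and_hash(email):
    """Hash an email address the way hashed-PII uploads typically
    expect: strip whitespace, lowercase, then SHA-256 (hex digest)."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# The hash is deterministic, so any intermediary uploading the same
# plaintext addresses produces identical hashes -- while opt-outs
# recorded against one intermediary's account don't travel with the
# advertiser's own copy of the list.
```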
Ad Library helps hide deceptive ads at times when risk of discovery is high. Facebook's Ad Library is designed to show only "active" ads, those that are running this very minute. A deceptive advertiser using a trademark or a person's likeness without permission can simply turn their ad on and off based on when the victim is likely to be checking the Ad Library. For example, a seller of infringing knock-offs of a European brand can run the ads when European marketers, lawyers, and regulators are asleep but people in the Americas or Asia are awake and shopping. Ad Library makes it easier for scammers to copy honest advertisers than the other way around.
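The trick above needs nothing more than a clock. A hypothetical sketch of the scheduling logic, using a fixed CET offset as a stand-in for real European time zones (DST and per-country variation ignored):

```python
from datetime import datetime, timedelta, timezone

# Fixed-offset approximation of Central European Time for this sketch.
CET = timezone(timedelta(hours=1))

def run_infringing_ad(now_utc):
    """Hypothetical scammer logic: pause the ad during European
    business hours, when the brand's marketers, lawyers, and
    regulators might be checking the Ad Library."""
    local = now_utc.astimezone(CET)
    europe_awake = 8 <= local.hour < 20
    return not europe_awake
```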
Ad Library delays posting of scam ads. If you see a bunch of similar scam ads popping up, like this...
...but then you go to their Ad Library and get "This advertiser isn't running ads in any country at this time," read the fine print:
An ad will appear in the ad library within 24 hours from the time it gets its first impression. Any changes or updates made to an ad will also be reflected in the ad library within 24 hours.
Facebook deliberately gives their scam advertisers almost a full day to take a whack at you before revealing their ads in Ad Library (and, of course, if the ad comes down fast enough, it never shows up there.)
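The arithmetic of that fine print is worth spelling out. A minimal sketch of the window it creates (the 24-hour figure comes from the quoted text; the function name is mine):

```python
from datetime import datetime, timedelta

POSTING_DELAY = timedelta(hours=24)

def may_never_appear(first_impression, taken_down):
    """The fine print only promises posting "within 24 hours", so an
    ad pulled before the full delay elapses may never show up in the
    Ad Library at all."""
    return taken_down - first_impression < POSTING_DELAY
```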
Independent crawling of ads is blocked by policy. On the open web, online ads can be crawled and logged by independent companies. This service is needed in order to check for malvertising and other problem ads. Inside the Facebook environment, however, independent checking on ads is prohibited. Facebook puts the goal of hiding problem ads ahead of facilitating the kinds of services that could help fix the situation.
Image search crawlers are blocked from ads. Many scammers make infringing copies of material from legit ads without permission. Pirated product photos are especially common. The photos in those scam ads above appear to have been taken from a legit retailer. If legit advertisers had the ability to search for ads similar to theirs, or for edited copies of their own photos, they would be able to find a lot. But, for example, TinEye is blocked from Ad Library, which makes life easier for Facebook's deceptive advertisers at the expense of legit ones. Wells Fargo has to ask customers to report fake Wells Fargo ads because Facebook, by hiding fraudulent uses of the Wells Fargo trademark from image search, is effectively cooperating with the scammers impersonating the brand.
Categories of scams to look for
The reason that Facebook has to try to shut down research programs like NYU's is that a project with the budget and skills of a small university team could pick up on a bunch of obvious scams with some tools based on existing open-source image-matching software:
photos of public figures who do not endorse a particular category (such as personal finance experts on cryptocurrency ads)
well-known company logos (needs manual check, sometimes the advertiser is a dealer using the logo with permission)
rental housing scams—look for the same house or apartment photo showing up in ads from multiple landlords
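As an illustration of how little machinery such checks need, here is a toy difference-hash ("dHash") comparison in plain Python. A real pipeline would first resize each ad image to 9x8 grayscale with an imaging library; that step is omitted, and the input is assumed already that size:

```python
def dhash(pixels):
    """Difference hash of a grayscale image given as a 2D list of
    rows of 0-255 values. Each bit records whether a pixel is darker
    than its right-hand neighbor, so the hash survives uniform
    brightness and contrast edits."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left < right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits; a small distance suggests one image
    is an edited copy of the other."""
    return sum(x != y for x, y in zip(a, b))
```

Two ads reusing the same house photo, even with a brightness tweak, hash close together; matching hashes across many landlords' ads is then a simple join.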
It's not clear why a large company would choose to cooperate with deceptive advertisers. This decision might have to do with the fact that Facebook has lots of eyeball-minutes that are hard to sell to the legit market. As Bob Hoffman has been saying for a while, the ad business has a long-running problem of avoiding advertising to older people. Any online forum except the youngest and hottest is going to fill up with older users whose ad impressions are less valuable to marketers. Facebook could be making a short-term revenue-maximizing decision to try to monetize these users better by temporarily filling up the ad spots with scams, and only cleaning up bit by bit when they have to.
Deception avoidance and value exchange
Or maybe it's not a short-term decision after all. What if the deceptive ads are a necessary part of the system?
A common, conventional point of view about surveillance marketing is that people choose to trade information about themselves for better-targeted ads. But this is oversimplified even if you don't get into the details of whether or not people give actual consent to the exchange. Realistically, there aren't enough well-targeted ads trying to reach you at any one time to make the "more relevant ads" better enough that even a high-status user would notice. If the Facebook ad system is run at capacity, then as a user you're generally going to be getting mostly ads that are not perceptibly well-matched, but still revenue-positive for the company.
Allowing a certain percentage of deceptive ads changes the balance. With enough deceptive ads in the system, it becomes a better move for a high-status user to reveal more information. Revealing information might be able to get you enough additional legit ads that the level of risk and annoyance you experience moves down noticeably.
So even in an idealized consent-based future technical and regulatory environment, where users can't be easily deceived into giving up more information than they prefer to, some rational high-status users might choose to trade away some personal information in order to attract more legit ads and fewer scams. Facebook doesn't have to do anything drastic like offering reduced ad load in exchange for allowing better-matched ads; they can just let you buy your way out of some scams with data.
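The argument can be made concrete with a toy model (all numbers hypothetical): suppose a user's ad slots are filled first by whatever well-targeted legit ads are bidding on them, with scams filling the remainder. Revealing more data raises the legit demand for this particular user:

```python
def scam_share(ad_slots, targeted_legit_demand):
    """Toy model: legit, well-targeted ads fill as many of this
    user's slots as demand allows; scams fill the rest. Revealing
    more data raises targeted_legit_demand."""
    legit = min(ad_slots, targeted_legit_demand)
    return (ad_slots - legit) / ad_slots

# A user who reveals enough data to attract 8 targeted legit ads
# instead of 2 sees their own scam exposure drop from 80% to 20% --
# but in a system running at capacity, the scam impressions they no
# longer see are shown to someone else.
```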
Salomé Viljoen writes, in A Relational Theory of Data Governance,
[P]eople have a collective interest against the unjust social processes data flows may materialize, against being drafted into the project of one another’s oppression as a condition of digital life, and against being put into data relations that constitute instances of domination and oppression for themselves or others on the basis of group membership.
This ad system might be a good example of that kind of project. A Facebook user who chooses to avoid scams by providing data on their membership in a high-status group is diverting the scams that they would have gotten onto other people, both members of low-status groups and members of high-status groups who share less data.
The question of scam load is related to, but distinct from, the competition questions around total ad load. In a hypothetical competitive market for social networking services, companies could compete on ad load; with network effects and winner-take-all market dynamics, a monopoly network can run at a higher ad load than, say, a single ad-supported service that participated in a federated system of intercommunicating social sites.
There are some lessons here for the rest of us. When designing new post-cookie ad systems for the web in general, it will be increasingly important to avoid the kinds of design decisions that Facebook has made. Facebook is highly profitable running deceptive ads today, but as a single company they can change their system unilaterally and relatively quickly: all the items above would be small code or policy changes whenever they decide to cut down on scams. For the open web, fixes that require code and business agreements from many companies would be harder.