notes on a California advertiser protection bill
Why are people in California exposed to so many risks from illegal activity on the Internet? Is there a way that the California State Legislature can address the problem in a way that is compatible with the First Amendment and with Federal law?
Today, Internet platforms are hosting a disturbing variety of illegal activity, including:
High-profile threats of national or global scope
campaign finance violations including foreign misinformation
domestic terrorist event information
international terrorist recruiting material
Internet economy issues
brand data leakage
child abuse depictions
identity theft schemes
pyramid schemes and other scams
Although the main victims are users, the overlooked victims are the advertisers who unknowingly pay for illegal activity.
Would you want your brand name associated with Internet crime? Worse, your family name? For many small businesses, the brand name is the family name.
California State Legislature members, when you buy social media advertising to reach your constituents, do you want those ads to run on a child abuse video? How about on a social page that manages invitations to an event organized by a domestic terrorist group? Or a pirated copy of a movie or music video made by artists you represent?
Advertisers pay the bills for illegal activity because they lack the information they would need to stop doing so. It's time for Internet platforms to stop hiding this information from them.
Internet platforms use brand advertisers' money to sponsor illegal activity without disclosing it.
A small-business owner might buy an ad intending to reach neighbors, only to have that ad sponsor illegal activity without ever knowing it. Even if the Internet platform later decides to moderate or limit distribution of the illegal content, the advertiser generally remains completely unaware of how their brand was used.
Advertisers seldom choose to sponsor illegal activity and typically pull their ads when they do discover it. However, Internet platforms are failing to inform advertisers when they sponsor illegal activity, even when advertisers are clearly taking reputational risks on content that the platform companies themselves will not associate with their end-user-facing brands.
On typical Internet platforms, it is impractical for advertisers, especially small businesses, to monitor every piece of content that their ads support. Ads and content are connected automatically and users may see the same ad in association with many different pieces of content. A disclosure requirement can help.
Simple, limited disclosure requirements
When content is removed or limited in circulation, disclose to the advertisers affected:
What content was removed
On what grounds the content was removed
How long the content stayed up
How many users were affected, if known
The amount that the advertiser was charged
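As a thought experiment, the disclosure fields above could be captured in a simple notification record. This is a minimal sketch, not language from any draft bill; all field and function names here are illustrative assumptions, and the optional user count reflects the rule (below) that platforms need not report data they do not collect.

```python
from dataclasses import dataclass, asdict
from typing import Optional
import json

# Hypothetical disclosure record; every field name is illustrative only.
@dataclass
class RemovalDisclosure:
    content_description: str       # what content was removed
    removal_grounds: str           # on what grounds it was removed
    hours_live: float              # how long the content stayed up
    users_affected: Optional[int]  # None if the platform cannot determine it
    amount_charged_usd: float      # the amount the advertiser was charged

def to_notice_json(d: RemovalDisclosure) -> str:
    """Serialize a disclosure, omitting the user count when unknown."""
    record = {k: v for k, v in asdict(d).items() if v is not None}
    return json.dumps(record, indent=2)

notice = RemovalDisclosure(
    content_description="video removed from platform",
    removal_grounds="platform child safety policy",
    hours_live=36.5,
    users_affected=None,  # platform keeps no unique user identifiers
    amount_charged_usd=12.40,
)
print(to_notice_json(notice))
```

The point of the sketch is that the notice stays small: five fields, one of which can be dropped when the platform genuinely lacks the data.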
If the illegal content was another advertisement, disclose to nearby advertisers. For example, if a legitimate mortgage broker's ad ran alongside an ad for a cryptocurrency scam, and the scam ad is later removed, the mortgage broker should be informed.
If an Internet platform cannot determine some of this information, they should not be required to collect it. For example, if the platform delivering the ad does not have unique identifiers for users affected, they should be allowed to omit the user count. Nothing in this law should require any Internet platform to collect or store additional data on users.
If an Internet platform sells advertising through an intermediary, it would be burdensome to require the platform to locate and notify the end client. It would probably be better to require the intermediary to pass the notice along.
Disclosure should include violations of laws that apply to the affected users, even if they do not apply to the advertisers. For example, if a California advertiser's ad ends up on a video that is not allowed to be displayed in Germany, and the ad is shown to users in Germany before the video is blocked there, then the California advertiser should be notified even if the video is still available to users in California. This is about protecting a brand owner's reputation as seen by the audience they are paying to reach, not about the speech-related laws of particular countries.
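The cross-jurisdiction rule above can be stated as a simple decision test: notify whenever the ad actually ran with the content in a region where that content is now blocked, regardless of the advertiser's own region. A minimal sketch, with all names being illustrative assumptions:

```python
# Sketch of the cross-jurisdiction notice rule; names are illustrative.
def should_notify(ad_impressions_by_region: dict[str, int],
                  blocked_regions: set[str]) -> bool:
    """Notify the advertiser if their ad was shown alongside the content
    in any region where that content is now blocked, even if the content
    remains available in the advertiser's own region."""
    return any(ad_impressions_by_region.get(region, 0) > 0
               for region in blocked_regions)

# A California advertiser's ad ran with a video in Germany before the
# video was blocked there; the video stays up in the US.
print(should_notify({"DE": 1200, "US": 5000}, {"DE"}))  # True
```

The test keys off where the audience saw the pairing, which matches the stated goal: protecting the brand's reputation with the audience the advertiser paid to reach.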
Inform advertisers, don't restrict them
Some advertisers may choose to continue to support content that is later removed for legal reasons. For example, fans of the rap group Insane Clown Posse were listed as a gang by the National Gang Intelligence Center. This listing could result in policy violations and content deletions affecting those fans on social sites. If an advertiser chooses not to take action on notifications and continues to support content related to Insane Clown Posse, no new law should interfere with their ability to do so. Social platforms also remove or restrict some content in error, including content that advertisers would want to continue to support.
An advertiser transparency bill must be drafted in such a way as to preserve the ability of advertisers to support Internet activity _that they are aware of_, while at the same time protecting their reputation from being associated with illegal activity they choose not to support.
Copy language from other, already-tested laws:
definition of platforms that must notify
definition of notification
How to define an association between ad and content: borrow viewability standards from IAB Measurement Guidelines?
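If viewability standards were borrowed, an ad-to-content association test might look like the sketch below. The thresholds shown (50% of pixels in view for at least 1 continuous second for display, 2 seconds for video) follow the widely cited MRC/IAB viewable-impression guidance, but the exact thresholds a bill adopts would come from the referenced guidelines, and the function name is a placeholder:

```python
# Sketch of a viewability-based association test. Thresholds approximate
# the MRC/IAB viewable-impression guidance and are assumptions here, not
# statutory language.
def is_viewable(pct_pixels_in_view: float,
                seconds_in_view: float,
                is_video: bool) -> bool:
    """An ad counts as associated with on-screen content only if it met
    a viewability threshold while that content was displayed."""
    required_seconds = 2.0 if is_video else 1.0
    return pct_pixels_in_view >= 0.5 and seconds_in_view >= required_seconds
```

Tying the disclosure duty to a viewability test would keep platforms from having to report ads that were technically served but never actually seen next to the removed content.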
More: this document on GitHub