A Watchful Eye on Facebook’s Advertising Practices
New York Times Opinion Article by Olivier Sylvain
Is the social media giant finally facing consequences for ads that discriminate?
Before the Department of Housing and Urban Development announced on Thursday that it had charged Facebook with violating the Fair Housing Act by enabling advertisers to engage in housing discrimination, Facebook said that it would change its ad-targeting methods to forbid discriminatory advertisements about housing, employment and credit opportunities. This plan, announced last week, is part of its settlement agreement with the civil rights groups that filed suits against the company over the past few years. The substantive terms are not radical. But they outline a basic framework for how policymakers might begin thinking about reforming big tech in ways that are suited to our times.
Ever since reporting revealed in October of 2016 that Facebook allowed advertisers to exclude users by race, the company has denied any legal wrongdoing. If there is a problem under law, its leaders have said, it is with advertisers. That is why Facebook could claim that it was attending to discrimination by requiring advertisers to certify that they were not violating civil rights laws.
But this was always unconvincing, because until last week, Facebook made it possible for advertisers to discriminate against users if they fell into a “multicultural affinity” classification like “Mundo Hispanico” or “Hispanic.” These categories are obvious proxies for race, ethnicity and religion. While people do not have to be a member of a particular group to fall into one of these classifications, the likelihood that they are is high enough to make the categories work as effective proxies. This is, in part, why HUD filed a lawsuit against the company.
What is important here is that Facebook does not explicitly ask for users’ race or ethnicity to define these groups. Instead it creates them out of the enormous amounts of data that it collects, like friend networks, “likes,” location and in-app browsing patterns, as well as creepier, less obvious information like offline browsing activity, even of people without accounts.
Of course, race and ethnicity have always been valuable signals for marketers. You would expect that El País would target its content to Latinos and that MAC would market the darker shades of its makeup to African-American women.
Building managers and employers could also use the “multicultural affinity” feature to discriminate against members of racial minority groups and older people in their ads about housing and jobs.
Here is the catch: Magazine publishers and cosmetics companies may legally target potential customers on the basis of such traits, but civil rights laws plainly forbid large building owners and employers from indicating preferences based on such traits in ads about housing and jobs.
Under the settlement, Facebook did not admit to wrongdoing, but it did agree to no longer make gender, age or “multicultural affinity” targeting available to advertisers in markets for housing, employment, and credit. It also agreed to administer a separate new portal for ads for those areas, in which advertisers would never have the option to exclude or include audiences on the basis of traits like race, sex, age and proxies for those traits like ZIP code.
The settlement also prohibits Facebook from continuing to use race and other protected categories in “look-alike” targeting in housing, employment and credit markets. That technique enables an advertiser to reach users whose characteristics resemble those of its existing customers. In other words, the broker for a co-op building that is predominantly white and middle-aged can no longer use the “look-alike” feature to post an apartment opening because it would effectively discriminate against people who do not fit the mold.
The devil will be in the details of execution, of course, but with these settlement terms, we have come a long way since Facebook’s flat denials. The civil rights groups who filed the litigation will be monitoring the effects of Facebook’s changes for the next few years.
Still, the reforms embodied in the settlement are hardly enough. First, even if Facebook substantially narrows the scope of targeting, there is no guarantee that its algorithms will not drift back into discriminatory ad distribution patterns. How will Facebook respond when, for example, its algorithms “learn” that mostly younger white men click through a housing advertisement in a predominantly white neighborhood?
We might suspect that Facebook could guard against the possibility of algorithmic bias in instances like that by simply editing the algorithm. But this points to another limitation in the agreement: Facebook does not have to be transparent about the actual workings of its algorithms.
Presumably, the lawsuit that HUD filed Thursday will ensure that a watchful eye remains on the company’s advertising practices. The best the settlement does on the question of transparency is provide a process through which civil rights groups and researchers may intermittently evaluate the ways in which Facebook administers ads. It also requires Facebook to develop another portal through which all users may search and review all housing advertisements, whether or not Facebook serves them to their news feeds. But ultimately, none of these transparency reforms will ensure that Facebook’s algorithms will stop engaging in illegal discrimination.
Despite these limitations, there is much in the settlement that is worth applauding. At a minimum, it sets out the blueprint for how to regulate big tech going forward.
For one, it acknowledges that Facebook can separate parts of its advertising service from other parts of its vast platform. It is possible to impose structural limits on how it and other big tech companies like Google and Amazon leverage the two-sided markets that they straddle. In light of recent proposals by at least one presidential candidate and excellent legal scholarship on the question, this aspect of the agreement gives policymakers plausible grounds to call for structural fixes to the services that big tech provides. Policymakers should also consider limitations on what online companies may do with user data after they collect it, like the requirements of Europe’s new data protection law.
The other important lesson from the settlement is not about any of its specific terms. It is that policymakers should ensure that Facebook and other big tech companies continue to face the real threat of litigation. The lawyers who represented the plaintiffs filed complaints plausible enough to force Facebook to revamp its powerful advertising business model. But this is a rare case.
Under the 1996 Communications Decency Act, online intermediaries have enjoyed a broad immunity from liability for the illicit online conduct of their users and advertisers. Boosters of this generous protection explain that requiring online companies to moderate the tons of data coursing through their servers would impose a heavy burden. That burden, they argue, would encourage companies to stifle more expressive conduct than the law requires. This view resonated widely in the mid-1990s, when many people believed that the internet was the last best chance for an open and democratic forum for user-generated content. But as we read story after story about intermediaries’ craven designs on user data, this view must give way to a fresh approach.
The internet is no longer just a medium for newsgroups and electronic bulletin boards, which were the conduits for authentic user-generated content that the drafters of the C.D.A. had in mind two decades ago. Today, intermediaries design services that control practically all aspects of users’ online experiences. Targeted advertising is just one example. We should accordingly expect them to take far more responsibility for what happens on their platforms.
The protection under the C.D.A. demonstrably lulled Facebook into complacency. Its managers did not feel the urgency to adequately answer complaints that its advertising service (not just advertisers) violated civil rights laws until plaintiffs’ legal claims showed the world how. Narrowing the scope of protection under the C.D.A. would help to make sure that Facebook and influential companies like it are beholden to the laws that protect the most vulnerable among us, or at least stay vigilant to the cause. Details for such reform are not easy to sort out. This settlement is a good start.