Trust is earned in drops and lost in buckets.

Over the past few years, Meta has lost buckets of trust with consumers, advertisers, and especially regulators. In an attempt to regain some of that trust, yesterday Meta announced its plans to ensure fairness in how it distributes ads for housing. The announcement is part of a settlement with the US Department of Justice over charges brought by the Department of Housing and Urban Development (HUD) that Meta enabled housing advertisers to discriminate on protected characteristics such as race and disability status. And in the grand scheme of things, it amounts to a drop in an otherwise leaky bucket.

In the announcement, Meta essentially proposes two solutions to bias in advertising:

  • To eradicate explicit discrimination, Meta has sunsetted problematic targeting filters such as “assistance dog” or “interest in Latin America.” These filters are what got the company into trouble with HUD in the first place. Their removal is a long-overdue step in the right direction.
  • To eradicate the much thornier problem of implicit discrimination, Meta has announced the Variance Reduction System to “advance the equitable distribution of ads on Meta technologies.” While this is also a step in the right direction, advertisers need to realize that by using it, they are implicitly agreeing to Meta’s definition of fairness. Mathematically speaking, there are at least 21 different ways of defining “fairness.” Here, Meta appears to be optimizing for equal “accuracy” across groups: to give a simple example, men and women who are equally eligible should have the same likelihood of seeing the ad (the sketch after this list shows how much the verdict depends on which definition you pick). The question is: who determines “eligibility”? This approach ignores all the historical inequities codified in the data that these systems rely on.
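To make the definitional stakes concrete, here is a minimal Python sketch with invented numbers (this is not Meta’s Variance Reduction System, its metrics, or its data). It compares two common fairness checks on the same hypothetical ad-delivery outcomes: overall delivery rate per group versus delivery rate among people labeled “eligible.”

```python
# Toy comparison of two fairness checks on the same hypothetical ad-delivery data.
# All numbers are invented for illustration; this is not Meta's system or metrics.

# Per-group counts: total audience, people someone labeled "eligible",
# people shown the ad, and eligible people shown the ad.
groups = {
    "group_a": {"audience": 1000, "eligible": 400, "shown": 200, "eligible_shown": 180},
    "group_b": {"audience": 1000, "eligible": 250, "shown": 200, "eligible_shown": 190},
}

for name, g in groups.items():
    # Check 1 (demographic parity): share of the whole group that saw the ad.
    delivery_rate = g["shown"] / g["audience"]
    # Check 2 (closer to the announced goal): share of "eligible" people who saw the ad.
    eligible_rate = g["eligible_shown"] / g["eligible"]
    print(f"{name}: delivery rate {delivery_rate:.2f}, eligible delivery rate {eligible_rate:.2f}")

# group_a: delivery rate 0.20, eligible delivery rate 0.45
# group_b: delivery rate 0.20, eligible delivery rate 0.76
# Check 1 says the system is fair; check 2 says it is not. Both verdicts depend
# entirely on who was counted as "eligible" -- which is exactly where historical
# bias in the underlying data enters.
```

Neither check is “the” right one; choosing between them is a value judgment, and it is a judgment advertisers delegate to Meta when they opt in.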

The announcement also contains a solution to the inherent catch-22 in AI fairness: namely, how can we measure bias across groups if we’re not allowed to use protected attributes in the algorithm? In other words, how do you detect racism if you can’t identify race? Meta’s solution to this problem, Bayesian Improved Surname Geocoding (BISG), seems impressive, but it’s actually a method long used by banks to show that they are not redlining. It cross-references surnames and ZIP codes with census data to estimate the probability of an individual’s race. While this approach flies with regulators and works at an aggregate level, these ads affect individuals. Aggregate accuracy still allows for individual errors and therefore individual harms.
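For readers unfamiliar with the method, here is a minimal sketch of the BISG calculation in Python. The probability tables are invented; real implementations draw P(race | surname) from the Census Bureau surname list and P(race | geography) from block-group demographics, and nothing here reflects Meta’s actual implementation.

```python
# Minimal BISG sketch with invented probabilities. Real implementations use the
# Census Bureau surname file and block-group demographics; this only shows the
# shape of the calculation, not any production system.

GROUPS = ["group_1", "group_2", "group_3"]

# P(race | surname): hypothetical values standing in for the census surname file.
p_race_given_surname = {
    "garcia": [0.05, 0.90, 0.05],
    "smith":  [0.70, 0.10, 0.20],
}

# P(race | ZIP code): hypothetical values standing in for census geography tables.
p_race_given_zip = {
    "10001": [0.50, 0.30, 0.20],
    "79901": [0.15, 0.80, 0.05],
}

# P(race): hypothetical national base rates.
p_race = [0.60, 0.20, 0.20]

def bisg(surname: str, zip_code: str) -> dict:
    """Posterior P(race | surname, ZIP), assuming surname and geography are
    conditionally independent given race -- the core BISG assumption."""
    unnormalized = [
        p_race_given_surname[surname][i] * p_race_given_zip[zip_code][i] / p_race[i]
        for i in range(len(GROUPS))
    ]
    total = sum(unnormalized)
    return {group: round(u / total, 3) for group, u in zip(GROUPS, unnormalized)}

print(bisg("garcia", "79901"))  # ~0.99 probability on group_2
print(bisg("smith", "10001"))   # ~0.63 on group_1: a reasonable bet in aggregate,
                                # but easily wrong for any given individual
```

The second call illustrates the concern: a posterior of roughly 0.63 may be accurate enough across thousands of people, yet it misclassifies plenty of individuals, and those individuals are the ones seeing (or not seeing) the ad.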

Finally, in the announcement, Meta seems to be patting itself on the back for its plans to extend this approach to employment and credit ads in the future. This qualifies as “reactive proactivity” — we’re going to need to do this anyway, so we might as well do it now and try to get credit for it.

No brand wants to be in Meta’s position. And if you’re careful, you won’t be. By adhering to emerging best practices for fairness in AI, you can develop AI systems that mirror your corporate values.

If you’re trying to navigate the complex world of AI fairness, please feel free to reach out via inquiry.