(Note: We'll be on assignment tomorrow, returning Thursday.) Latanya Sweeney, a Harvard professor of government and technology, had an unnerving experience when she did an online vanity search, reports the MIT Technology Review:
When she entered her name in Google an ad appeared with the wording: "Latanya Sweeney, Arrested? 1) Enter name and state 2) Access full background. Checks instantly. www.instantcheckmate.com"

This is suggestive wording. It suggests that Latanya Sweeney has a criminal record, the details of which can be accessed by clicking on the ad. But after hitting the link and paying the necessary subscription fee, Sweeney says she found no record of arrest.

What a relief! Imagine how upsetting it would be to find out that you'd been arrested and had no memory of it. But Sweeney, whose forename is "suggestive--that she is black," wondered "whether a similar search with a name suggestive of a white racial profile also serves up ads mentioning arrest records."
She conducted an extensive study, published last week, in which she compared "black" names like "Trevon, Lakisha and Darnell" to "white" ones like "Laurie, Brendan and Katie." The result: "Most names generated ads for public records. However, black-identifying names turned out to be much more likely than white-identifying names to generate ads that including [sic] the word 'arrest' (60 per cent versus 48 per cent). All came from www.instantcheckmate.com." According to her analysis, the odds are less than 1 in 1,000 that this disparity is caused by chance.
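A back-of-the-envelope calculation makes that "1 in 1,000" figure concrete. The sketch below runs a standard two-proportion z-test on a 60%-versus-48% gap; the sample sizes here are hypothetical placeholders for illustration, not Sweeney's actual counts:

```python
from math import sqrt, erf

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """Two-sided two-proportion z-test: could this gap be chance?"""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)                 # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_a - p_b) / se
    # two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical sample sizes; Sweeney's exact counts differ.
z, p = two_proportion_z(600, 1000, 480, 1000)
print(z, p)  # at this scale the gap is far rarer than 1 in 1,000 by chance
```

With roughly a thousand names per group, a 12-point gap sits more than five standard errors from zero, comfortably below the 1-in-1,000 threshold she reports.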
This isn't the first study to measure disparate treatment of "white" and "black" names. A decade ago, as a press release from the University of Chicago details, a pair of professors responded "to 1,300 help-wanted ads listed in the Boston Globe and the Chicago Tribune." They sent fake résumés "with comparable credentials for each racial group. Each résumé was randomly assigned either a very white-sounding name (Emily Walsh, Brendan Baker) or a very African-American-sounding name (Lakisha Washington, Jamal Jones)."
The result: Résumés with white-sounding names were about half again as likely to yield callbacks as those with black-sounding names. The gap was wider the more impressive the credentials were. (As an aside, are we alone in thinking it isn't entirely ethical to send employers fake résumés?)
The résumé study is often cited as evidence of "implicit racism." One objection to this argument is that to the extent that the results reflect employers' prejudice, it isn't prejudice against blacks but against distinctively black names. Such names have a political context, as Sweeney acknowledges in her study:
Work by [Roland] Fryer and [Steven] Levitt . . . shows a pattern change in the way Blacks named their children starting in the 1970's, which they correlate with the Black Power Movement. They postulate that the movement influenced how Blacks perceived their identities and they give as evidence that before the movement, names given to black and white children were not distinctly different, but after the movement, . . . distinctly black names appear.

Obviously it isn't fair to judge people on the basis of the names their parents gave them, and in this case it has a disparate racial impact (tautologically). But it isn't exactly the same thing as judging people on the color of their skin.
The curious thing about Sweeney's study, though, is that it yields similar results from an automated process. Why it does so is beyond the scope of her study, but she offers some speculation:
Google understands that an advertiser may not know which ad copy will work best, so an advertiser may give multiple templates for the same search string and the "Google algorithm" learns over time which ad text gets the most clicks from viewers of the ad. It does this by assigning weights (or probabilities) based on the click history of each ad copy. At first all possible ad copies are weighted the same, they are all equally likely to produce a click. Over time, as people tend to click one version of ad text over others, the weights change, so the ad text getting the most clicks eventually displays more frequently. This approach aligns the financial interests of Google, as the ad deliverer, with the advertiser.

If that hypothesis is accurate, the observed bias has a human source: the patterns of clicks on the basis of which the algorithm serves one ad or the other. But there is no reason to think these clicks reflect bias on the part of the clickers. When Sweeney clicked on the link with her name in it, she didn't do it because it sounded black but because it was her name.
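The learning scheme Sweeney describes can be sketched in a few lines. This is a hypothetical simplification (Google's actual system is proprietary): every ad copy starts with equal weight, each click bumps the clicked copy's weight, and future selections are drawn in proportion to those weights, so the most-clicked version gradually crowds out the others:

```python
import random

class AdRotator:
    """Minimal sketch of click-weighted ad rotation.

    Hypothetical simplification of the mechanism Sweeney describes:
    all copies start equally weighted; a click increases that copy's
    weight, shifting future weighted-random selections toward it.
    """

    def __init__(self, copies):
        # At first all ad copies are weighted the same.
        self.weights = {copy: 1.0 for copy in copies}

    def pick(self):
        # Choose one copy at random, in proportion to its weight.
        copies = list(self.weights)
        return random.choices(copies,
                              weights=[self.weights[c] for c in copies],
                              k=1)[0]

    def record_click(self, copy):
        # Each click makes this copy more likely to be shown again.
        self.weights[copy] += 1.0

# Example: if users keep clicking the "Arrested?" wording, the rotator
# ends up serving it far more often than the neutral alternative.
rotator = AdRotator(["Latanya Sweeney, Located", "Latanya Sweeney, Arrested?"])
for _ in range(500):
    rotator.record_click("Latanya Sweeney, Arrested?")  # simulated click pattern
```

Note the design point this makes vivid: nothing in the code knows anything about race; the skew comes entirely from the click history it is fed.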