By Alexandra Reeve Givens
Though a huge portion of the population lives with a disability, it comes in many different forms, making bias hard to detect, prove, and design around.
A hiring tool analyzes facial movements and tone of voice to assess job candidates’ video interviews. A study reports that Facebook’s algorithm automatically shows users job ads based on inferences about their gender and race. Facial recognition tools work less accurately on people with darker skin tones. As more instances of algorithmic bias hit the headlines, policymakers are starting to respond. But in this important conversation, a critical area is being overlooked: the impact on people with disabilities.
A huge portion of the population lives with a disability—including one in four adults in the U.S. But there are many different forms of disability, making bias hard to detect, prove, and design around.
In hiring, for example, new algorithm-driven tools will identify characteristics shared by a company's "successful" existing employees, then look for those traits when they evaluate new hires. But because the model gives less weight to traits that are underrepresented among those employees, effectively treating them as undesirable, people with disabilities, like other marginalized groups, risk being excluded as a matter of course.
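To see how that weighting can play out, here is a minimal sketch, assuming a toy scoring scheme in which each candidate trait is weighted by how often it appears among existing "successful" employees. The trait names and scoring function are hypothetical illustrations, not any vendor's actual model.

```python
from collections import Counter

# Hypothetical traits of a company's existing "successful" employees.
successful_employees = [
    {"ivy_league", "debate_club", "unbroken_work_history"},
    {"ivy_league", "varsity_sports", "unbroken_work_history"},
    {"debate_club", "varsity_sports", "unbroken_work_history"},
]

# Weight each trait by how often it appears in the training pool.
trait_weights = Counter(trait for emp in successful_employees for trait in emp)

def score(candidate_traits: set) -> int:
    """Score a candidate by summing the frequency weights of their traits.

    Traits that are rare or absent among existing employees contribute
    little or nothing, so candidates whose profiles differ from the
    majority -- for example, because of a disability-related employment
    gap -- score lower as a matter of course."""
    return sum(trait_weights.get(trait, 0) for trait in candidate_traits)

print(score({"ivy_league", "debate_club", "unbroken_work_history"}))   # 7
print(score({"community_college", "employment_gap", "debate_club"}))   # 2
```

Even in this toy version, the candidate who matches the existing workforce outscores the one who does not, without anyone ever writing an explicitly exclusionary rule.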
One famous example of this arose when Amazon trained a resume screening algorithm by analyzing resumes the company had received over the past 10 years. Reflecting the disproportionately higher number of men who apply to Amazon, the algorithm learned to downgrade resumes that included terms such as “women’s” (as in “women’s chess club captain”) or reflected a degree from a women’s college. Amazon tweaked the tool, but the lesson is clear: If an algorithm’s training data lacks diversity, it can entrench existing patterns of exclusion in deeply harmful ways.
This problem is particularly complex when it comes to disability. Despite the large number of people living with disabilities, the population is made up of many statistically small groups whose disabilities manifest in different ways. When a hiring algorithm studies candidates' facial movements during a video interview, or their performance in an online game, a blind person may face different barriers than a person with a mobility impairment or a cognitive disability. If Amazon's training sample was light on women, it's hard to imagine it effectively representing the full diversity of disability experiences. That is especially true because disabled people have long been excluded from the workforce by entrenched structural barriers, ableist stereotypes, and other factors.
Despite these challenges, some vendors of AI hiring tools are marketing their products as "audited for bias," seeking to reassure employers in the face of mounting concerns. But many of these companies rely on outdated guidelines that the Equal Employment Opportunity Commission published in the 1970s. Those guidelines spell out the so-called "four-fifths rule," which asks whether a hiring test selects a protected group at a substantially lower rate than its majority counterpart (for example, whether female candidates are selected at a rate that is 80 percent or less of the rate for men, or black candidates at 80 percent or less of the rate for their white peers). If an audit finds that an algorithm fails the four-fifths rule, the vendor tweaks it for better parity and presents the tool as "bias audited." But this simplistic approach ignores the diversity of our society. Most critically, it looks only at simplified (and U.S.-defined) categories of gender, race, and ethnicity.
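For readers who want to see the arithmetic, here is a minimal sketch of the four-fifths check, assuming the only inputs are two selection rates. The function name and sample numbers are illustrative, not taken from any vendor's audit or the EEOC's own materials.

```python
def four_fifths_check(rate_protected: float, rate_majority: float) -> bool:
    """Return True if the protected group's selection rate is at least
    four-fifths (80 percent) of the majority group's rate, the EEOC's
    rule-of-thumb threshold for adverse impact."""
    if rate_majority <= 0:
        raise ValueError("Majority selection rate must be greater than zero.")
    impact_ratio = rate_protected / rate_majority
    return impact_ratio >= 0.8

# Illustrative numbers: 30% of female candidates selected vs. 50% of male candidates.
# The ratio is 0.6, below the 0.8 threshold, so the check fails and the
# audit would flag adverse impact.
print(four_fifths_check(0.30, 0.50))  # False
```

The calculation is trivial, which is part of the critique: it compares only a handful of predefined groups, and says nothing about populations, like disabled candidates, whose experiences do not sort neatly into those categories.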
The flaw in this approach becomes clear, again, if you consider disability…