Beyond the law: How to address technology’s ethical challenges | 19
Almost half of businesses do not vet for technology bias
Bias in data and programming is survey respondents' second-most important ethical issue, and it is easy to see why. Because technology is created by humans, it can reflect the biases – conscious or subconscious – of its creators, and sometimes those biases only become apparent after the technology is deployed.
Discrimination often comes up in relation to the use of algorithms and AI to scan and review CVs during recruitment. There are concerns that the algorithms underpinning this software incorporate biased logic and therefore discriminate against people who live in particular areas or have certain names.
If technology is purchased rather than developed in-house, you may not know whether it contains biases. At the very least, you should seek warranties and assurances that procured software is free of bias, and conduct your own due diligence to check it. Yet almost half of the businesses in the research do not currently vet for technology bias.
Another problem is a lack of representative data,
which can cause technology-enabled products to
perform badly for some sections of the population.
Research in the U.S. has found that the error rates for facial recognition software developed by multiple companies are much higher for African American and Asian faces than for Caucasian faces.¹
In another example, consumer reviews and media reports say that certain brands of wearable health devices monitor the heart rates of people of color far less accurately.² That not only creates an inferior product, but could also entrench bias further if data from these wearables is used to inform the development of other products.
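The kind of due diligence described above often reduces to a simple check: evaluate the procured software on a labelled test set, measure its error rate separately for each demographic group, and flag large disparities. A minimal sketch in Python (the sample data, group names, and the 1.5× disparity threshold are illustrative assumptions, not taken from the report):

```python
from collections import defaultdict

def error_rates_by_group(results):
    """results: iterable of (group, correct) pairs from evaluating
    the software on a labelled test set. Returns error rate per group."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, correct in results:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical evaluation results: (demographic group, prediction correct?)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]

rates = error_rates_by_group(sample)

# Flag any group whose error rate exceeds the best-performing
# group's rate by more than an (arbitrary) 1.5x threshold.
best = min(rates.values())
flagged = {g: r for g, r in rates.items() if r > 1.5 * best}
```

On the illustrative sample, group B's error rate is twice group A's, so B would be flagged for further investigation before the software is accepted.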
Fig 12 – Q. To what extent do you agree with the following statement? "We check that technology supplied to us has been vetted to not include any biases."
Strongly disagree: 2% | Strongly agree: 7% | Disagree: 43% | Agree: 48%
“Businesses purchasing software should, if relevant, ask the
provider what they have done to eliminate bias against certain
population groups. These conversations happen a lot in the U.S.,
and it’s starting to pick up in Europe.”
Desmond Hogan | Head of Global Litigation, Arbitration and Employment, Hogan Lovells
1. MIT Technology Review, "A US government study confirms most face recognition systems are racist," December 2019.
2. STAT, "Fitbits and other wearables may not accurately track heart rates in people of color," July 2019.