What can go wrong ethically with AI?
Artificial intelligence systems ‘learn’ based on the data they are given.
If that data reflects historical discrimination, or underrepresents certain groups, the resulting system can produce biased outcomes. Without careful attention, there is a high risk that AI systems will entrench the status quo or develop blind spots.
In practice, this can mean that some groups of people are discriminated against on the basis of factors such as race, age, gender, ethnicity and ability.
In some US states, algorithms and artificial intelligence are used to help inform prison sentencing and parole decisions. One such program is COMPAS, designed by Northpointe. Evidence has emerged that the accuracy of its predictions skews on the basis of race.
Black defendants are more likely than white defendants to be incorrectly labelled 'high risk'.
Although race is not one of the inputs COMPAS uses, the end result is racially skewed: Black people tend to receive longer punishments than white people for the same offenses.
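How can a model that never sees race produce racially skewed results? Proxy variables: an input that correlates with race can carry racial information into the model anyway. The sketch below is a hypothetical simulation, not COMPAS itself; the `neighbourhood` feature, the group labels and all the numbers are invented for illustration. Two groups have identical reoffending rates, yet a risk rule built on a group-correlated proxy flags one group far more often.

```python
import random

random.seed(0)

# Hypothetical simulation (not COMPAS): the risk rule sees only a
# "neighbourhood" score, never group membership. Because neighbourhood
# correlates with group, the output still skews by group.

def make_person(group):
    # Assumption for illustration: residential segregation makes the
    # neighbourhood score a proxy for group (A centred higher than B).
    neighbourhood = random.gauss(0.6 if group == "A" else 0.4, 0.1)
    # The true reoffending rate is identical (30%) for both groups.
    reoffends = random.random() < 0.3
    return neighbourhood, reoffends

def predict_high_risk(neighbourhood):
    # The rule never receives `group` as an input.
    return neighbourhood > 0.5

def false_positive_rate(group, n=10_000):
    # Share of people who would NOT reoffend but are flagged 'high risk'.
    people = [make_person(group) for _ in range(n)]
    non_reoffenders = [(nb, r) for nb, r in people if not r]
    flagged = sum(predict_high_risk(nb) for nb, _ in non_reoffenders)
    return flagged / len(non_reoffenders)

print(false_positive_rate("A"))  # roughly 0.84
print(false_positive_rate("B"))  # roughly 0.16
```

Even though the two groups behave identically, group A's non-reoffenders are wrongly flagged about five times as often, purely through the proxy. This is why simply deleting the protected attribute from the data does not make a system fair.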
How do you avoid or remedy bias in AI systems?
Building diversity into the design process is key.
Unconscious biases thrive in homogeneous thinking spaces.
By including diverse teams in the design process – including diversity of gender, race, ability, class, and culture – designers can reduce the likelihood of biases being embedded into AI systems.