Bad Robots: Apple Card discriminates against women
Bad Robot Outcome: Apple Card offers women credit limits up to 20 times lower than men's
The Story
In August 2019, Apple released a credit card in partnership with Goldman Sachs. Apple stated that the Apple Card is a "digital first," numberless credit card "built on simplicity, transparency and privacy." In November 2019, entrepreneur David Heinemeier Hansson reported that the card's algorithm had offered him a credit limit 20 times higher than his wife's, despite her better credit score, and the New York State Department of Financial Services opened an investigation.
Our view
The data used to train the Goldman Sachs model almost certainly carried gender bias already, as it was most likely a historical banking data set purchased for the purpose of training the algorithm. Clearly the data set was not rigorously assembled or properly cleaned, and the statistical tests that should have alerted both Apple and Goldman Sachs to bias and other skews were evidently not performed. The algorithm and the customer-facing AI should then have been thoroughly tested after setup and before launch. It would have been prudent to have not only a technical person but also a legal person sign off on the algorithm and the AI, given that a blatantly discriminatory algorithm carries legal exposure.
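By way of illustration, here is a minimal sketch of the kind of pre-training audit described above, assuming a pandas DataFrame with hypothetical "gender" and "credit_limit" columns; a real banking data set would need its own schema and a much fuller battery of tests:

```python
# A sketch of a pre-training audit: checking a historical lending data set
# for gender skew before it is used to train a credit model. Column names
# ("gender", "credit_limit") are hypothetical placeholders.
import pandas as pd
from scipy import stats

def audit_gender_skew(df: pd.DataFrame) -> None:
    """Compare credit limits granted to men and women in the training data."""
    men = df.loc[df["gender"] == "M", "credit_limit"]
    women = df.loc[df["gender"] == "F", "credit_limit"]

    # Welch's t-test: does the mean limit differ significantly by gender?
    t_stat, p_value = stats.ttest_ind(men, women, equal_var=False)

    ratio = men.mean() / women.mean()
    print(f"Mean limit (men):   {men.mean():,.0f}")
    print(f"Mean limit (women): {women.mean():,.0f}")
    print(f"Ratio: {ratio:.1f}x | t = {t_stat:.2f}, p = {p_value:.4f}")

    if p_value < 0.05 and ratio > 1.2:
        print("WARNING: statistically significant gender skew in the data set.")
```

A check like this would not fix the skew, but it would have surfaced it before training, which is exactly the alert the text argues was missing.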
Who was responsible for acquiring the training data set?
Who was responsible for cleaning and testing the data?
What statistical methods were used to ensure the AI would not discriminate, and are these best practice? (One common check is sketched after this list.)
Who is on the team managing the algorithm, and were there diverse views and lenses that should have picked up this mistake?
If this algorithm produces biased results, what legal implications are there?
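As a sketch of one widely used "best practice" check referenced in the questions above, the four-fifths (80%) disparate-impact rule compares approval rates across groups before launch; the column names and model interface here are illustrative assumptions, not Goldman Sachs's actual process:

```python
# A minimal post-training check: the "four-fifths" (80%) disparate-impact
# rule applied to a model's approval decisions on a held-out test set.
# The "approved" column (1 = approve, 0 = decline) is a hypothetical
# placeholder for the model's output.
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame) -> float:
    """Approval rate of women divided by the approval rate of men."""
    approved_women = df.loc[df["gender"] == "F", "approved"].mean()
    approved_men = df.loc[df["gender"] == "M", "approved"].mean()
    return approved_women / approved_men

# Under the four-fifths rule, a ratio below 0.8 is commonly treated as
# prima facie evidence of adverse impact and should block launch:
# ratio = disparate_impact_ratio(test_decisions)
# assert ratio >= 0.8, f"Disparate impact: ratio {ratio:.2f} < 0.8"
```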