Bad Robot Blogs
This blog highlights the times when bots go bad: when they are designed unethically, make biased decisions, discriminate against people, harm the environment, try to hurt people, or are simply not good for humanity.
The growth of the social media ecosystem and the continuous advancement of AI tools available to disseminate information have increased the tenacity and reach of disinformation attacks.
The use of facial recognition is not inherently bad, but its implications for minority groups are of major concern.
Experts have questioned the Tinder algorithm's operation and ethical principles after individual users complained that it appears biased.
If you drive for Uber, you can be terminated by an algorithm. Four former drivers who faced such dismissals have brought suit against the ride-sharing behemoth.
The Chinese government has been involved in large-scale human rights violations against the Uighur Muslim population of Xinjiang.
What happens when the algorithmic tools they elect to use end up reinforcing, or even worsening, longstanding and problematic biases toward BIPOC?
These “filter bubbles” are widening the gaps between us and even creating dangerous political instability.
Bad Robots: Secretive Facial Recognition Software Company Challenged in Court by Civil Liberties Watchdog
Your face is likely somewhere in a database of nearly three billion images that have been scraped from millions of websites.
Two of China’s most popular food delivery services came under fire after the perils facing their workers were widely publicized.
Twitter is re-evaluating an image-cropping algorithm after evidence emerged that the technology seemingly favored images of white individuals while hiding those of people of color.
After being sued by two groups, the United Kingdom’s Home Office has agreed to halt its use of, and substantially redesign, an algorithm it had been using to analyze and sort visa applications.
“Deepfakes” – AI-generated fake images, videos, and audio files – are becoming more commonplace as they proliferate across the internet.
The algorithm failed to identify more than half of the Black patients who should have been categorized as “high risk.”
Bad Robots: Global Exam-Grading Software In Trouble For Algorithm Bias
International Baccalaureate Program’s Exam-Grading Algorithm May Have Adversely Impacted Test Scores of Low-Income & Minority Students
Amazon’s AI-enabled recruitment software tool “downgraded” resumes of job seekers that contained the word “women” or that otherwise implied the applicant was a woman.
Apple Card has reportedly granted women credit limits up to 20 times lower than those granted to men. Apple Card is a “digital first,” numberless credit card “built on simplicity, transparency and privacy.”
Microsoft recently put plans in place to replace its human journalists at MSN.com with robots.
Algorithms determine that ‘black’ offenders are twice as likely to reoffend as ‘white’ offenders
Bad Robot Outcome: AI diagnoses the same symptoms as a heart attack in men but a panic attack in women