Ethical AI Blog
Here we bring you stories of ethics and artificial intelligence, and of what happens when robots go bad, act unethically, or work against humanity…
Experts have questioned how the Tinder algorithm operates and whether it adheres to ethical principles, following user complaints that the algorithm is biased.
Defence’s challenge is that failure to adopt the emerging technologies in a timely manner may result in a military disadvantage, while premature adoption without sufficient research and analysis may result in inadvertent harms.
If you drive for Uber, you can be terminated by an algorithm. Four former drivers who faced such dismissals have brought suit against the ride-sharing behemoth.
Open Loop is a collaborative initiative supported by Facebook to contribute practical insights into policy debates by prototyping and testing approaches to regulation before they are enacted.
The Chinese government has been involved in wide-scale human rights violations against the Uighur Muslim population of Xinjiang.
What happens when the algorithmic tools they elect to use end up reinforcing, or even worsening, longstanding and problematic biases toward BIPOC?
When assessing the risk of AI harm, different actors will view this concept through different lenses.
These “filter bubbles” are widening the gaps between us and even creating dangerous political instability.
The most common risk frameworks look at risk across two dimensions: impact versus the probability of that impact happening.
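The impact-versus-probability idea above can be sketched in a few lines of code. This is a minimal illustrative example only; the category names and the 1–3 scales are assumptions for demonstration, not part of any particular risk framework.

```python
# Minimal sketch of a two-dimensional risk score: impact x probability.
# The category labels and 1-3 scales below are illustrative assumptions.
IMPACT = {"low": 1, "medium": 2, "high": 3}
PROBABILITY = {"rare": 1, "possible": 2, "likely": 3}

def risk_score(impact: str, probability: str) -> int:
    """Combine an impact level and a probability level into one score (1-9)."""
    return IMPACT[impact] * PROBABILITY[probability]

# A high-impact, likely harm scores far above a low-impact, rare one.
print(risk_score("high", "likely"))  # 9
print(risk_score("low", "rare"))     # 1
```

Plotting these scores on a grid gives the familiar risk matrix, where the top-right cells (high impact, high probability) demand the most attention.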
Bad Robots: Secretive Facial Recognition Software Company Challenged in Court by Civil Liberties Watchdog
Your face is likely somewhere in a database of nearly three billion images that have been scraped from millions of websites.
Two of China’s most popular food-delivery services came under fire after widespread publicity about the perils faced by their workers.
Recently, on a tech forum, a contributor made the following simple but insightful statement…