Ethical AI Blog
Here we bring you stories of ethics, artificial intelligence, and what happens when robots go bad, act unethically, or work against humanity…
Inherent societal bias has been a significant concern across the healthcare spectrum, especially amid growing economic and social disparity.
The growth of the social media ecosystem and the continuous advancement of AI tools available to disseminate information have increased the tenacity and reach of disinformation attacks.
AI has become ubiquitous, spanning nearly every sector of our daily interactions. Despite the growing implementation of AI-driven technologies, laws and regulations have, for the most part, lagged behind, creating legislative loopholes.
Facial recognition is not inherently bad, but its implications for minority groups are of major concern.
AI surveillance’s ability to process every frame 24/7 enables persistent monitoring, raising concerns of Big Brother-style mass surveillance and population control.
Experts have questioned how the Tinder algorithm operates and whether it adheres to ethical principles, following individual reports that the algorithm is biased.
Defence faces a dilemma: failure to adopt emerging technologies in a timely manner may result in military disadvantage, while premature adoption without sufficient research and analysis may result in inadvertent harm.
If you drive for Uber, you can be terminated by an algorithm. Four former drivers who faced such dismissals have brought suit against the ride-sharing behemoth.
Open Loop is a collaborative initiative supported by Facebook to contribute practical insights into policy debates by prototyping and testing approaches to regulation before they are enacted.
The Chinese government has been involved in wide-scale human rights violations against the Uighur Muslim population of Xinjiang.
What happens when the algorithmic tools they elect to use end up reinforcing, or even worsening, longstanding and problematic biases toward BIPOC?
When assessing the risk of AI harm, different actors will view this concept through different lenses.