Ethical AI Blog

Inherent societal bias has been a significant concern across the healthcare spectrum, especially in the wake of growing economic and social disparity.
AI has become ubiquitous, and its applicability spans all sectors of our daily interactions. Despite the growing implementation of AI-driven technologies, laws and regulations have, for the most part, lagged behind, creating legislative loopholes.
AI surveillance’s ability to process every frame 24/7 enables persistent monitoring, giving rise to concerns about Big Brother-style mass surveillance and control of the masses.
The challenge for defence is that failing to adopt emerging technologies in a timely manner may result in military disadvantage, while premature adoption without sufficient research and analysis may result in inadvertent harm.
Open Loop is a collaborative initiative supported by Facebook to contribute practical insights into policy debates by prototyping and testing approaches to regulation before they are enacted.
When assessing the risk of AI harm, different actors will view this concept through different lenses.
The most common risk frameworks look at risk across two dimensions: the severity of an impact versus the probability of that impact occurring.
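As a rough illustration of such a two-dimensional framework, a risk rating can be computed as impact multiplied by likelihood and then bucketed into bands. The scales, thresholds, and labels below are illustrative assumptions for this sketch, not drawn from any specific standard:

```python
# Minimal sketch of a two-dimensional risk matrix: risk is scored as
# impact x likelihood, then bucketed into low / medium / high bands.
# All scales and thresholds here are hypothetical, for illustration only.

IMPACT = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost_certain": 5}

def risk_level(impact: str, likelihood: str) -> str:
    """Combine the two dimensions into a single risk rating."""
    score = IMPACT[impact] * LIKELIHOOD[likelihood]
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

print(risk_level("severe", "likely"))    # 5 * 4 = 20 -> high
print(risk_level("minor", "possible"))   # 2 * 3 = 6  -> medium
print(risk_level("negligible", "rare"))  # 1 * 1 = 1  -> low
```

In practice, different actors would calibrate these scales differently, which is exactly why the same harm can be assessed through very different lenses.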
Recently, on a tech forum, a contributor made the following simple but insightful statement:
The purposes of ethics and the law are often distinct, yet the EU is on a path to turning ethical principles into legal rules. Is this the right approach?
Direct discrimination occurs when somebody is treated unfavourably because of an attribute such as age, disability, race, or sexuality.