Ethical AI Blogs
This blog provides practical information on the principles, tools and examples of AI done ethically.
Inherent societal bias has been a significant concern across the healthcare spectrum, especially in the wake of growing economic and social disparity.
AI has become ubiquitous. Its applicability spans all sectors of our daily interactions. Despite the growing implementation of various AI-driven technologies, laws and regulations have for the most part lagged, creating a legislative loophole.
AI surveillance’s ability to monitor and process every frame 24/7 enables persistent monitoring, giving rise to concerns of big-brother-style mass surveillance and social control.
Defence’s challenge is that failure to adopt emerging technologies in a timely manner may result in a military disadvantage, while premature adoption without sufficient research and analysis may result in inadvertent harm.
Open Loop is a collaborative initiative supported by Facebook to contribute practical insights into policy debates by prototyping and testing approaches to regulation before they are enacted.
When assessing the risk of AI harm, different actors will view this concept through different lenses.
The most common risk frameworks look at risk across two dimensions: impact versus the probability of that impact happening.
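As a rough sketch, the two dimensions can be combined by scoring each on a simple scale and multiplying. The scales, cutoffs and level names below are illustrative assumptions, not drawn from any particular risk framework:

```python
# Illustrative two-dimensional risk matrix: risk is scored as
# impact x probability, then bucketed into a qualitative level.
# The 1-5 scales and the cutoffs are hypothetical examples.

def risk_score(impact: int, probability: int) -> int:
    """Combine a 1-5 impact rating with a 1-5 probability rating."""
    assert 1 <= impact <= 5 and 1 <= probability <= 5
    return impact * probability

def risk_level(score: int) -> str:
    """Bucket a 1-25 score into a qualitative level (illustrative cutoffs)."""
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# A severe but unlikely harm and a mild but frequent harm can land in
# the same bucket, which is why different actors weigh the dimensions
# differently.
print(risk_level(risk_score(impact=5, probability=2)))  # medium
print(risk_level(risk_score(impact=2, probability=5)))  # medium
```

The point of the sketch is that a single combined score hides the difference between high-impact/low-probability and low-impact/high-probability harms, which is exactly where the different lenses diverge.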
Recently, on a tech forum site, a contributor made the following simple but insightful statement.
The purposes of ethics and the law are often distinct, yet the EU is on a path to turn ethical principles into legal rules. Is this the right approach?
Direct discrimination occurs when somebody is treated unfavourably because of an attribute such as age, disability, race, sexuality, etc.
Procedural fairness is concerned with the procedures used by a decision maker, rather than the actual outcome reached.
If you are a parent in Australia and put bowls of ice cream in front of two siblings, the first thing they do is examine the quantity of ice cream in the other’s bowl.
A toolkit can make all the difference when it comes to the application of ethical principles.
Singapore has been a significant contributor to the global discussion on the ethics of AI – recently releasing three documents for trade associations and chambers, professional bodies, and interest groups to discuss and adapt for their own use.
In recent years, numerous companies, governments, NGOs and academic institutions have developed and publicised their AI ethics principles.
Good technological design requires an ethical framework within which the technology can be designed, developed and deployed.
Artificial intelligence systems ‘learn’ based on the data they are given. This, along with many other factors, can lead to biased outcomes.
AI systems do not possess an inherent ethical compass with which to understand the consequences of their actions.
Human rights exist to ensure that each one of us is entitled to make free choices about how to live, without discrimination.
The reason technology ethics is growing in prominence is that new technologies give us more power to act.
To me the word ‘ethics’ evokes some trepidation. Would you hire a murderer?
John McCarthy coined the term ‘artificial intelligence’ (AI) in 1956 when he invited a group of researchers from a variety of disciplines