Ethical AI Blog
Here we bring you stories of ethics, artificial intelligence, and what happens when robots go bad, behave unethically, or work against humanity…
This blog provides practical information on the principles, tools, and examples of AI done ethically.
Procedural fairness is concerned with the procedures used by a decision maker, rather than the actual outcome reached.
If you are a parent in Australia and put bowls of ice cream in front of two siblings, the first thing they do is examine the quantity of ice cream in the other’s bowl.
A toolkit can make all the difference when it comes to the application of ethical principles.
Singapore has been a significant contributor to the global discussion on the ethics of AI, recently releasing three documents for trade associations and chambers, professional bodies, and interest groups to discuss and adapt for their own use.
In recent years numerous companies, governments, NGOs and academic institutions have developed and publicised their AI ethics principles.
Good technological design requires an ethical framework within which the technology can be designed, developed and deployed.
Artificial intelligence systems ‘learn’ based on the data they are given. This, along with many other factors, can lead to biased outcomes.
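To make this concrete, here is a minimal sketch (with entirely hypothetical data) of how a naive model that simply learns frequencies from historical decisions will reproduce the bias baked into those decisions:

```python
# Toy illustration with hypothetical data: a naive "model" that learns
# hiring outcomes purely from frequencies in past decisions.
from collections import Counter

# Historical decisions reflect past bias: most hired applicants were men.
training_data = [
    ("man", "hired"), ("man", "hired"), ("man", "hired"),
    ("woman", "rejected"), ("woman", "rejected"), ("man", "rejected"),
]

def train(data):
    """For each group, learn the most common historical outcome."""
    by_group = {}
    for group, outcome in data:
        by_group.setdefault(group, Counter())[outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

model = train(training_data)
print(model["man"])    # hired
print(model["woman"])  # rejected
```

The "model" has learned nothing about the applicants themselves; it has only memorised the pattern of past decisions, which is exactly how biased training data produces biased outcomes.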
AI systems do not possess an inherent ethical compass with which to understand the consequences of their actions.
Human rights exist to ensure that each one of us is entitled to make free choices about how to live, without discrimination.
The reason technology ethics is growing in prominence is that new technologies give us more power to act.
This blog highlights the times when bots go bad, are designed unethically, make biased decisions, discriminate against people, harm the environment, try to hurt people, or are simply not good for humanity.
After being sued by two groups, the United Kingdom’s Home Office has agreed to halt its use of, and substantially redesign, an algorithm that it had been using to analyze and support visa applications.
“Deepfakes” – AI-generated fake images, videos, and audio files – are becoming commonplace as they proliferate across the internet.
The algorithm failed to account for more than half of Black patients who should have been categorised as “high risk.”
Bad Robots: Global Exam-Grading Software in Trouble for Algorithm Bias
The International Baccalaureate Program’s exam-grading algorithm may have adversely impacted the test scores of low-income and minority students.
Amazon’s AI-enabled recruitment software tool “downgraded” resumes of job seekers that contained the word “women” or that otherwise implied the applicant was a woman.
Apple Card reportedly offers women credit limits up to 20 times lower than men’s. Apple Card is a “digital first,” numberless credit card “built on simplicity, transparency and privacy.”
Microsoft recently put plans in place to replace its human journalists at MSN.com with robots.
Algorithms determine that ‘black’ offenders are twice as likely to reoffend as ‘white’ offenders.
Bad Robot Outcome: AI diagnoses the same symptoms as a heart attack in men but a panic attack in women.
Ethical AI Whitepapers
Richard Vidgen, Professor of Business Analytics at UNSW, presents a whitepaper on a Business Canvas for Ethical AI. Using the five key ethical dimensions of Utilitarian, Rights, Justice, Common Good and Virtue, it creates a new model for business ethics.