Improving diversity begins at the university student level. AI was once an interdisciplinary field, but in recent years it has narrowed into a technical discipline drawing almost exclusively on computer science and engineering. As AI systems are increasingly applied to social domains such as education, healthcare, criminal justice, recruitment and housing, it is critical that university AI programs expand their disciplinary orientation beyond computer science and engineering.
Expertise is required from the social domains in which AI is rapidly being embedded. This calls for a transformation of the field of AI, one that treats social science and the humanities as key contributors. Such a shift would better ensure that the development of AI is relevant and beneficial to the social contexts in which it is deployed.
Without the expertise of those trained to study the social world, AI runs the risk of deploying biased products that are harmful to the people they affect. Consider the example of Amazon's sexist recruitment tool. The tool had been trained to vet applicants by observing patterns in resumes submitted to the company over a ten-year period. Most came from men, a reflection of male dominance across the tech industry. As a result, the recruitment tool taught itself that male candidates were preferable. Had the team of machine-learning specialists who built that tool received interdisciplinary training, studying social science as well as computer science, the gender bias would likely have been identified far earlier in the design process, before the tool was deployed.
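The mechanism behind this failure can be made concrete with a minimal sketch. The data and scoring method below are entirely hypothetical (they are not Amazon's actual system or data): a naive scorer weights each resume word by how much more often it appeared among past hires than among rejections. Because the hypothetical hiring history skews male, a gender-coded word such as "women's" inherits a negative weight even though gender is never an explicit input.

```python
# Illustrative sketch only -- hypothetical data, not Amazon's system.
# A naive word-weight scorer trained on a male-skewed hiring history
# learns to penalize gender-coded words without ever seeing "gender".
from collections import Counter

# Hypothetical past resumes with hiring outcomes; hires skew male.
past_resumes = [
    ("software engineer men's rugby captain", True),
    ("software engineer chess club", True),
    ("data analyst men's soccer team", True),
    ("software engineer women's chess club", False),
    ("data analyst women's coding society", False),
]

def train_weights(resumes):
    """Weight each word by (frequency among hires) - (frequency among rejections)."""
    hired, rejected = Counter(), Counter()
    n_hired = sum(1 for _, h in resumes if h)
    n_rejected = len(resumes) - n_hired
    for text, was_hired in resumes:
        for word in set(text.split()):
            (hired if was_hired else rejected)[word] += 1
    words = set(hired) | set(rejected)
    return {w: hired[w] / n_hired - rejected[w] / n_rejected for w in words}

def score(resume, weights):
    """Sum the learned weights of the words in a resume."""
    return sum(weights.get(word, 0.0) for word in resume.split())

weights = train_weights(past_resumes)

# Two resumes identical except for one gender-coded word:
print(score("software engineer chess club", weights))
print(score("software engineer women's chess club", weights))
```

Running this, the resume containing "women's" scores strictly lower than its otherwise identical counterpart, because the word never co-occurred with a hire in the skewed training data. A social scientist reviewing the training set would be positioned to flag exactly this kind of historical imbalance before deployment.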