Bad Robots – China Uses Artificial Intelligence to Target Uighur Muslim Population
Bad Robot Outcome: Despite the Chinese government's efforts to shield this information from the international community, it is well documented that it has engaged in wide-scale human rights violations against the Uighur Muslim population of Xinjiang. In the past year, it has further come to light that various artificial intelligence technologies are being used to target and suppress this already marginalized group.
The Story
As I write this article, at least one million Uighur Muslims are being held in what the Chinese government calls "re-education centers." Despite the not-so-sinister name placed on such "centers," they are internment camps enclosed by barbed wire and guarded from watchtowers. There are around 11 million Uighurs in Xinjiang – an autonomous region in Northwest China that has been under Chinese control since its annexation in 1949. Xinjiang, designated a "special economic zone," is the country's largest producer of natural gas.
The Fall-Out
The implications of China's use of this type of technology are vast and terrifying. The government has already interned over one million Uighur Muslims and continues to repress the group with this overreaching surveillance. To be clear, China is not alone in having gone too far down this slippery slope. In the United States, for instance, there are many examples of "racism built into [its] algorithmic decision making," as noted by Jennifer Lynch, surveillance litigation director at the Electronic Frontier Foundation. However, as noted above, China is the first government to explicitly introduce race-based facial recognition technology into its surveillance apparatus, drawing concern and criticism from the international community – particularly given that many of the Chinese startups listed above plan to expand internationally.

As Jonathan Frankle, an AI researcher at the Massachusetts Institute of Technology, put it: "I don't think it's overblown to treat this as an existential threat to democracy." Claire Garvie, an associate at the Center on Privacy and Technology at Georgetown Law, further warned of the perils of race-based facial recognition: "If you make a technology that can classify people based on ethnicity, someone will use it to repress that ethnicity." This is exactly what we are seeing in China.

There has been commercial fallout as well. Despite Huawei's global scale, you won't find its smartphones in the United States, and in July 2020 the UK banned the company from its 5G infrastructure. Additionally, the US Commerce Department blacklisted eight Chinese companies (including Megvii) for their contribution to human rights violations against Uighur Muslims.
We at the Ethical AI Advisory agree with the viewpoints espoused by Claire Garvie and Jonathan Frankle above. To put it quite simply, facial recognition technology that targets certain ethnicities is ethically problematic to the point of being downright dangerous.
This technology flies directly in the face of the third (of eight) AI Ethics Guidelines – fairness. Any technological system that singles out an ethnic group has the potential to be unfair. One might argue that legitimate purposes exist, but the risks of misuse (as noted by Claire Garvie) far outweigh any potential positive applications of such technology.
We stand in solidarity with the international community that has opposed the use of facial recognition technology to target and oppress Uighur Muslims.