
Problems for Amazon: Artificial intelligence researchers are on their way

Researchers have criticized Amazon before over Rekognition, the facial recognition technology the company has allowed police to use, raising concern that Amazon is essentially supporting the surveillance state. The technology has also come under scrutiny in the past for a variety of other reasons, such as the possibility that flaws in the system could cause it to misidentify minorities, for example.

An open letter to Amazon

Now, leading artificial intelligence researchers from across the academic and technology spectrum, including employees of Amazon rivals such as Google, Microsoft and Facebook, have published an open letter on Medium reprimanding Amazon for selling the technology to the police. And, of course, the letter asks the company to stop.

Citing a statement by Amazon vice president Michael Punke noting that the company supports legislation to help ensure its products are not used to infringe civil liberties, the letter goes on to "call on Amazon to stop selling Rekognition to the police, as such legislation and safeguards are not in place."

The research behind the letter

The letter appears to have been triggered in part by Amazon's reaction to the research of Joy Buolamwini of the Massachusetts Institute of Technology. Her tests found that software from companies including Amazon, among it the software made available to the police, produced higher error rates when trying to identify the gender of dark-skinned women than of lighter-skinned men. According to an Associated Press report, her research also covered software from Microsoft and IBM, which moved to fix the problems she identified.

Amazon, however, "responded by criticizing her research methods." From the AI researchers' open letter:

There are currently no laws in place to audit the use of Rekognition; Amazon has not disclosed who its customers are, nor what its error rates are across different intersectional demographics. How, then, can we be sure this tool is not being misused, as Matthew Wood (general manager for deep learning and AI at Amazon Web Services) claims?

What can be trusted, the letter continues, are audits conducted by independent researchers such as Buolamwini, "with concrete numbers and clearly designed, explained and presented experimentation, which demonstrates the types of bias that exist in these products. This critical work rightly raises the alarm about the use of such immature technologies in high-stakes scenarios without public debate and legislation in place to ensure that civil rights are not infringed."


Also published on Medium.
