Why Amazon’s Facial Recognition System Should Worry You

At first glance, Amazon’s little-known Rekognition software seems like just another example of how far our technological prowess has come. The latest demonstration of the potential of deep-learning AI, Rekognition can analyze images and identify the objects and faces within them.

Many of us first became aware of Rekognition’s existence when it was used to provide facial recognition during the recent royal wedding in Britain. As a technical demonstration, this was an impressive and successful display, even if the application was a little underwhelming.

Rekognition was first introduced as part of Amazon Web Services in November 2016 and has since been used for a variety of applications. As well as recognizing faces, the system has been employed by Pinterest for object recognition.
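To give a sense of how the service is consumed, Rekognition is exposed through the standard AWS SDKs. The sketch below uses Python and assumes the `boto3` SDK and valid AWS credentials; the sample response is illustrative of the DetectLabels response shape, not real output.

```python
# Hypothetical sketch of calling Amazon Rekognition via boto3 (AWS's Python SDK).
# The API call itself requires AWS credentials and is shown commented out;
# the response-handling logic below runs on a trimmed sample payload.

def summarize_labels(response, min_confidence=80.0):
    """Return (name, confidence) pairs from a Rekognition DetectLabels
    response, keeping only labels at or above the confidence threshold."""
    return [
        (label["Name"], label["Confidence"])
        for label in response.get("Labels", [])
        if label["Confidence"] >= min_confidence
    ]

# With credentials configured, a real call would look roughly like:
#   import boto3
#   client = boto3.client("rekognition")
#   response = client.detect_labels(
#       Image={"Bytes": open("photo.jpg", "rb").read()},
#       MinConfidence=70,
#   )

# A trimmed, illustrative example of the response shape:
sample_response = {
    "Labels": [
        {"Name": "Person", "Confidence": 99.1},
        {"Name": "Crowd", "Confidence": 71.3},
    ]
}

print(summarize_labels(sample_response))  # only "Person" clears the 80% bar
```

The confidence threshold matters: as discussed below, how aggressively a deployment filters low-confidence matches is exactly what separates a cautious tool from one that routinely misidentifies people.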

However, Amazon has drawn criticism from many organizations, including the ACLU, for its decision to sell Rekognition directly to law enforcement agencies. As a result, Amazon is now complicit in warrantless mass-surveillance of the American public. This is a state of affairs that should concern us all. It is emblematic of the disproportionate power that private enterprises in the United States hold over our politics and everyday lives.

Real-Time Analysis

Facial recognition technology has been around for some time, and few people would argue against its use by law enforcement. For example, if law enforcement had access to the security footage of a suspect, being able to match the face with any individuals currently in their databases would make sense. What makes technologies like Rekognition different, however, is that they analyze data in real-time.

When law enforcement uses facial recognition technology to match a suspect’s face with a name, they are only attempting to identify specific individuals already in their database. Crucially, these people have already been identified as likely being involved in a crime. Not so with Rekognition.

Rekognition can be fed data from a variety of sources. In Orlando, Florida, one of the two locations where law enforcement is trialing the technology, the system draws on a stream from the city’s many security cameras and analyzes every face it detects in real-time. Anyone who is seen by those cameras is therefore subjected to a facial recognition analysis.

Unreliable Results

No technology is perfect. While Rekognition is among the most advanced facial recognition systems in the world, it is not 100% reliable. Unsurprisingly, exact figures on the system’s reliability and capabilities are hard to come by. However, a recent UK study looking at several modern facial recognition systems found that some returned mistaken identities 98% of the time.

This means that not only might the American public soon be subjected to constant corporate-sponsored surveillance, but citizens might also end up being routinely misidentified. This may lead to numerous incidents of innocent Americans being arbitrarily detained and investigated for crimes on the basis of unproven technology.

Despite strong indications that this technology is not nearly as reliable as we often assume, governments across the world are rushing to equip themselves with facial recognition systems. One of the pioneers is China: the Asian economic giant has built facial recognition into its public surveillance systems in an effort to reduce and fight crime. China, of course, has long been known for heavy censorship, from its bizarre ban on hip-hop culture to its restrictions on VPNs that do not provide the government with backdoors for surveillance.

Lack of Accountability

Even if we assumed that Rekognition could reliably and accurately identify faces in real-time, a number of important ethical issues would remain. Chief among these is the lack of accountability. It is the nature of deep learning that, while we can train an AI to analyze faces, we cannot then easily inspect how it reaches its conclusions.

Therefore, if the system were trained to target specific groups of people – deliberately or not – it might not be obvious.

The current political situation in the United States is tense, and there is a potential for further antagonism between the government and minority and activist groups. Therefore, the public should be vigilant of any technology that might be used to aid law enforcement in targeting specific groups.

Following the Facebook – Cambridge Analytica scandal, there is a much greater public awareness of the dangers of unrestrained corporate power. Amazon’s smooth transition into the business of mass-surveillance is a very worrying development and not one that we should allow to pass unchallenged.

© 2023 All rights reserved by DoryLabs