The Growing Privacy Concerns of Facial Recognition
Facial recognition has been a staple of science fiction for decades, but only recently has it become an everyday reality, especially for Facebook's 1.3 billion users, who are likely already acquainted with the social network's DeepFace system. The system scans more than 400 million newly uploaded images each day, using facial recognition to identify users who have not yet been tagged by the original uploader or another user. When DeepFace recognizes a match, it notifies the user that they appear in the picture. Most Facebook users have encountered this at one point or another and wondered how Facebook's facial recognition became so eerily accurate.
Facebook is no stranger to privacy controversies, so with the growing attention its facial recognition system is receiving, it's not surprising that fresh concerns are being raised. Although Facebook gives users an option to blur their face in a picture to protect their privacy, the feature is neither widely known nor widely used.
Still, while Facebook's use of facial recognition is potentially alarming, many consider it fairly benign compared to the government's current and potential uses of the technology, which raise a further series of questions about how far is too far when facial recognition is used to protect against threats both international and domestic.
From tech giants like Facebook and Google, who continue to work on facial recognition products despite legal concerns, to the government itself, facial recognition is becoming a core component of future endeavors, and it is playing an increasingly prominent role in privacy debates as a result. In an age where everyone is online, no one wants an unflattering picture surfacing on their social media feed simply because they were in the background and Facebook detected their face automatically.
So How Does It Work?
Facial recognition on Facebook and elsewhere relies on a technique called deep learning: massive data sets of faces (including various angles of the same person) teach the computer to discern patterns of lighter and darker pixels. From those contrasts it detects features such as eyes and noses, then clusters the distinctive pixels of a face into elements that define its contours and edges.
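The raw signal described above, contrast between neighboring lighter and darker pixels, can be illustrated with a minimal sketch. This is not Facebook's pipeline (which is proprietary and learned, not hand-coded); it simply shows how intensity differences between adjacent pixels reveal contours and edges:

```python
import numpy as np

def edge_map(image, threshold=50):
    """Return a boolean map of strong intensity changes in a grayscale image.

    `image` is a 2D array of pixel values (0-255). Real face detectors learn
    far richer features, but the underlying cue is the same: contrast between
    neighboring lighter and darker pixels.
    """
    img = image.astype(int)
    # Absolute differences between horizontally and vertically adjacent pixels,
    # trimmed to a common shape so they can be combined.
    dx = np.abs(np.diff(img, axis=1))[:-1, :]
    dy = np.abs(np.diff(img, axis=0))[:, :-1]
    return (dx + dy) > threshold

# A tiny synthetic "image": a dark square on a light background.
img = np.full((8, 8), 200, dtype=np.uint8)
img[2:6, 2:6] = 30
edges = edge_map(img)
```

Here `edges` is True only along the boundary of the dark square, the kind of contour a detector would go on to group into candidate features.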
With DeepFace, Facebook's development team has shown that the tool is nearly as accurate as human judgment at distinguishing one face from another. One benchmark published last March found that humans were correct 98% of the time when asked whether two photographed faces belonged to the same person; DeepFace reported an accuracy of 97.35% on the same trial.
DeepFace's accuracy owes partly to its ability to model a face's three-dimensional shape, which lets it recognize faces even in photos taken in profile. Perhaps more important is its comprehensive training data: a collection of 4.4 million labeled faces drawn from Facebook's own archive of users. Users do grant Facebook permission to use their personal data when they sign up, but the terms say nothing about DeepFace specifically or its role in storing photos.
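The verification task in the benchmark above, deciding whether two photos show the same person, amounts to comparing the embedding vectors a trained network produces for each face. A minimal sketch of that comparison, where the 128-dimensional embeddings, the cosine measure, and the 0.8 threshold are all illustrative assumptions rather than DeepFace's actual parameters:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(emb_a, emb_b, threshold=0.8):
    """Declare a match when the embeddings point in nearly the same direction.

    In a real system the embeddings come from a trained deep network, and the
    threshold is tuned on labeled pairs to balance false matches against
    false rejections.
    """
    return cosine_similarity(emb_a, emb_b) >= threshold

# Toy embeddings: two slightly different views of one face, plus an unrelated face.
rng = np.random.default_rng(0)
face = rng.normal(size=128)
same_view = face + rng.normal(scale=0.1, size=128)
other = rng.normal(size=128)
```

With these toy vectors, `same_person(face, same_view)` accepts the pair while `same_person(face, other)` rejects it, mirroring the same-or-different judgment the human annotators made in the benchmark.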
EULAs: Giving Your Rights Away with One Click
Facebook's end-user license agreement (EULA) has drawn the ire of critics who consider it unfair that permission to use personal data is buried in a wall of text, prompting most users to simply click 'Accept' without reading when they sign up. That stands in contrast to scientists running studies, who must obtain written consent from their subjects. Erik Learned-Miller, a computer scientist at the University of Massachusetts, Amherst, calls Facebook's practice "the antithesis of transparency." He continues: "No one really knows what they're getting into."
Facebook isn't the only major site that uses this tactic. EULAs and similar walls of text ending in an 'I Agree' button often grant a website rights to content that would otherwise be private, whether your personal photos, your email address, or your browsing history. Ideally EULAs would be more concise and readable, but for now web users should assume the worst when confronted with one on a social media site.
In the wake of the government's handling of the Edward Snowden and WikiLeaks revelations, concerns will keep escalating about the government's ability to use facial recognition to spy on citizens, invoking "national security" in court to render this sort of domestic espionage, from which not even innocent civilians are exempt, entirely legal.
And that’s a scary thought—just not an original one: