by Marc Goodman
Proponents of biometric security argue it is inherently more secure because nobody can steal your fingerprints (incorrect, as noted above) and because fingerprints are an immutable physical attribute that can’t be altered by criminals. Turns out, that’s not true either, as the twenty-seven-year-old Chinese national Lin Rong proved in 2009. Lin paid doctors in China $14,600 to change her fingerprints so that she could bypass the biometric sensors used by immigration authorities at Japan’s airports. Lin had been deported previously and was eager to return to Tokyo, something that would not have been possible had she provided her actual fingerprints upon arrival at Narita International Airport. To sneak back in, she paid Chinese surgeons to swap the fingerprints from her right and left hands, having her finger pads regrafted onto the opposite hands. The ploy worked, and she was successfully admitted. It was only weeks later, when she attempted to marry a fifty-five-year-old Japanese man, that authorities noticed the odd scarring on her fingertips. Japanese police report that doctors in China have created a thriving business in biometric surgery and that Lin was the ninth person they had arrested that year for surgically mediated biometric fraud.
Needless to say, such draconian measures would not be necessary if hackers could merely intercept the fingerprint data from the IoT-enabled biometric scanner as it was sent to the computer server for processing—something the security researcher Matt Lewis has already demonstrated at the Black Hat hacker conference in Europe. Lewis created the first-ever Biologger, the equivalent of a malware keystroke logger, which, rather than capturing the keystrokes somebody innocently typed on her computer, could effectively steal all the fingerprint scans processed on an infected scanner. Lewis demonstrated that his biologging device allowed him to analyze and reuse the data he had captured to undermine biometric systems, granting him access to supposedly “secure” buildings. While it is tempting to believe that biometric authentication is inherently more impenetrable than legacy password systems, that assumption holds true only if the new systems are actually implemented in a more secure fashion. Otherwise, it’s just old wine in a new bottle.
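To see why an unencrypted, unauthenticated scanner-to-server channel invites exactly this kind of attack, consider the following minimal sketch. Everything in it is hypothetical and invented for illustration; it models the design flaw Lewis exploited, not his actual Biologger.

```python
# Hypothetical sketch, for illustration only: a fingerprint template sent
# as plain bytes from scanner to server can be captured and replayed.
# All names here are invented; this models the design flaw, not real code.

captured_templates = []

def infected_scanner_driver(template_bytes: bytes) -> bytes:
    """Sits between the scanner and the server, logging what it relays."""
    captured_templates.append(template_bytes)  # the "biologging" step
    return template_bytes  # passed through unchanged; the user notices nothing

def server_verify(template_bytes: bytes, enrolled: bytes) -> bool:
    """A naive server that trusts any byte-identical template."""
    # No nonce, no liveness check, no channel encryption: the server
    # cannot tell a fresh scan from a replayed recording of one.
    return template_bytes == enrolled

enrolled = b"\x01\x02\x03"  # stand-in for a stored fingerprint template
live_scan = infected_scanner_driver(enrolled)
print(server_verify(live_scan, enrolled))              # True: legitimate scan
print(server_verify(captured_templates[0], enrolled))  # True: replay succeeds
```

The countermeasure is equally conceptual: encrypt the channel and have the server issue a fresh per-scan challenge, so that a recorded template is useless on replay.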
Your Password? It’s Written All Over Your Face
In the sci-fi film Minority Report, Tom Cruise plays a Washington, D.C., police officer in the year 2054. In one scene, as Cruise’s character, John Anderton, strolls through the local shopping mall, his face is recognized by the interactive billboards that greet the detective by name, serving up ads based on his prior purchase history. It seems 2054 has arrived sooner than expected. For just as fingerprints can uniquely identify an individual, so too can face prints, biometric scans of the features on your face, such as the distances between your eyes, nose, ears, and lips. Not only can these biometric characteristics reveal your personal identity, but they also allow others to profile you by gender, age, race, and ethnicity. All these data are like manna from heaven for marketers eager to re-create Mr. Anderton’s targeted-advertising experience.
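A small sketch may make the idea of a face print concrete. Real systems use learned embeddings rather than hand-picked landmarks, but the principle is the same: reduce a face to a numeric vector and compare vectors by distance. The landmark names and the threshold below are invented for this example.

```python
import math

def face_print(landmarks: dict) -> list:
    """Turn landmark (x, y) positions into a vector of pairwise distances."""
    names = sorted(landmarks)
    return [
        math.dist(landmarks[a], landmarks[b])
        for i, a in enumerate(names)
        for b in names[i + 1:]
    ]

def same_person(print_a: list, print_b: list, threshold: float = 0.05) -> bool:
    """Declare a match when the two vectors are close in Euclidean distance."""
    return math.dist(print_a, print_b) < threshold

enrolled = face_print({"left_eye": (0.30, 0.40), "right_eye": (0.70, 0.40),
                       "nose": (0.50, 0.55), "mouth": (0.50, 0.75)})
candidate = face_print({"left_eye": (0.31, 0.40), "right_eye": (0.69, 0.41),
                        "nose": (0.50, 0.56), "mouth": (0.50, 0.74)})
print(same_person(enrolled, candidate))  # True: within tolerance
```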
Back in today’s world, billboards in Japan are already peering back at passersby, comparing their facial features in real time with an NEC database of more than ten thousand pre-identified patterns to sort individuals into consumer profile categories and changing the displayed ad messages based on those demographic assessments. Beyond advertising, there are numerous other uses for facial-recognition technology. FaceFirst, a California biometrics firm, allows retailers to scan the faces of all customers in their stores to identify known shoplifters. If one is detected, the software immediately sends e-mails and text messages to all store personnel with a photograph of the suspected thief so employees can take “appropriate action.” A similar system employed by Hilton Hotels uses facial recognition to scan the faces of all guests, allowing employees to greet visitors by name, especially VIP Gold card members.
It’s not just advertisers who are getting access to facial-recognition data; so too are other consumers. Many of us might have noticed a security camera when walking into our neighborhood bar and innocently presumed the device was there in case the place got robbed. But when an old-school camera is linked to the IoT and big-data analytics, a powerful new smart sensor is born. In 2012, a company in Austin, Texas, partnered with local pubs and nightspots to take those “dumb” video feeds and perform real-time facial analytics on all the customers in their bars. The result is an app called SceneTap, which enables those looking for a good time in Austin to pull up live statistics on each establishment and see how full each nightspot is, its gender mix, and how old or young the crowd is. For example, the app’s dashboard may show that the Main Street Bar & Grill is 47 percent full: 68 percent women with a mean age of twenty-nine and 32 percent men with a mean age of twenty-six. The very useful app takes the guesswork out of barhopping and allows drunk frat boys to successfully avoid “sausage fests,” selecting only those bars with the greatest number of young women in them. Might future in-app purchases allow users to obtain further demographic data, such as a bar patron’s height, weight, and ethnicity? They might, as there are no laws or regulations in the United States protecting Americans from invasive biometric technologies or governing the limitations of their use.
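The dashboard numbers above imply a simple aggregation step once each detected face has been assigned an estimated gender and age. The sketch below is purely illustrative; the detection stage and all sample values are invented.

```python
# Roll per-face gender/age estimates up into SceneTap-style venue stats.
detections = [  # (estimated_gender, estimated_age) per face in the video feed
    ("F", 28), ("F", 31), ("M", 25), ("F", 27), ("M", 27), ("F", 30),
]
capacity = 13  # hypothetical venue capacity

women = [age for g, age in detections if g == "F"]
men = [age for g, age in detections if g == "M"]

print(f"{len(detections) / capacity:.0%} full")
print(f"{len(women) / len(detections):.0%} women, mean age {sum(women) / len(women):.0f}")
print(f"{len(men) / len(detections):.0%} men, mean age {sum(men) / len(men):.0f}")
```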
Facial-recognition technologies have improved their match rates significantly and can now approach 98 percent accuracy, a 20 percent improvement from 2004 to 2014. All the major Internet companies, including Apple and Google, have made substantial investments in facial biometrics, but none known to be quite as large as that of Facebook, which acquired the Israeli biometrics start-up Face.com in 2012 for nearly $100 million. Facebook has long been performing facial recognition on every photograph you have ever uploaded (something you agreed to in its ninety-three-hundred-word ToS). The acquisition of Face.com allowed Facebook to greatly improve its “Tag Suggestions” feature, identifying all the people in the pictures you post using biometric algorithms and by encouraging you to tag your friends, thereby confirming their biometric identities for Zuckerberg. Facebook’s automatic facial-recognition technologies have not been without controversy given the obvious privacy implications, and regulators across the EU have banned the feature. In the meantime, back in the United States, there is no legislation whatsoever that prohibits running facial-recognition software against Facebook’s inventory of product, which as noted previously is you. More than a quarter of a trillion photographs have been uploaded since Facebook was founded, which means Facebook—not India’s Aadhaar program—is the single largest repository of biometric data on earth, vastly exceeding that held by any government in the world.
You can expect increasing pressure from Wall Street to monetize biometric data, and there will be no shortage of potential customers, including the government. In his series of revelations, the NSA leaker Edward Snowden alleged that his agency had directly tapped into the servers of nine of the largest Internet companies, including Facebook, potentially giving the intelligence community access to the company’s biometric gold mine. In a separate disclosure, Snowden divulged that the NSA was already sucking down millions of additional photographs posted online daily and was capable of processing at least fifty-five thousand images a day of “facial recognition quality.” What might police and government security agencies do with this biometric data? In democratic societies, the hope is that it might be used to capture violent criminals and terrorists. But once such a biometric dragnet is built, its intended uses are controlled by those in power. In a classified world with little oversight, abuses are bound to occur, and in the hands of tyrants and dictators the tools become the foundation of a Stasi-like Orwellian dystopia.
Police forces in the U.K. have become among the first to implement a widespread automated facial-recognition program, using NEC’s NeoFace technology to match faces from any crime scene photograph or video against a database of images. When facial recognition is combined with the highest density of CCTV cameras of any country in the world, the constantly recording body-worn cameras on police officers, and CSI-style smart-phone apps capable of performing both facial and fingerprint recognition in the field, it seems as if the days of Minority Report criminal tracking have already arrived. So how far has facial-recognition technology progressed? Far enough to match your face to your Facebook profile within sixty seconds as you walk down the street and to your Social Security number sixty seconds later. The program that makes this possible is called PittPatt and began as a research project at Carnegie Mellon University in the wake of 9/11, with millions of dollars in funding provided by DARPA.
As more police forces use CCTV to monitor groups of individuals walking on downtown streets, in a football stadium, or at the airport, software such as PittPatt can run in the background in real time to identify every face as it passes, neatly placing a cartoonlike bubble above each individual’s head with a Web link to more information. A simple click on the data bubble can show a person’s Facebook profile, Social Security number, credit history, and past photographs of him posted online, whether it’s a picture of the family trip to Disneyland, him holding a martini at the office Christmas party, or his dating profile on Match.com. While some might support law enforcement having access to these advanced image-recognition technologies for the purposes of public safety, they might feel differently if such potent surveillance capabilities were in the hands of the private sector. Too late.
In mid-2011, Google acquired PittPatt, opening the door for the search giant to implement the formidable facial-recognition technology across the suite of its products, including YouTube, Picasa, Google+, and Android. Perhaps the most obvious candidate that could benefit from embedded facial-recognition technology is Google Glass. Using the tool, it would be possible to immediately identify that hot girl or guy at the party, and you’d never have to worry about forgetting whatshisname from the accounting department ever again. Concerned about a potential backlash, Google has banned the use of facial-recognition Google Glass apps for the time being, but with the acquisition of PittPatt the technical capability fully exists. Of course hackers have jailbroken their Glass devices and created a number of facial-recognition apps, including the popular NameTag app.
NameTag allows users to scan the faces of those before them and compare them with millions of publicly available online records, returning the person’s name and social media profiles, including those on Facebook, Twitter, and Instagram, and other relevant identifying details. Such facial-recognition apps are not unique to Google Glass and can just as readily be used with the camera on your smart phone. Just like in Minority Report, we are all now living in the age of facial recognition. As a consequence, nobody is just a face in the crowd anymore. Indeed, your face is now an open book, one that can be read at a glance by anybody else—including the government.
The FBI’s billion-dollar Next Generation Identification (NGI) system has as many as 52 million searchable facial images, including 4.3 million images of noncriminal background-check applicants. NGI also contains 100 million individual fingerprint records, as well as millions of palm prints, DNA samples, and iris scans. Not only can the system scan mug shots for a match, but NGI can also track suspects by picking out their faces in a crowd from standard security cameras or comparing them with photographs publicly uploaded on the Internet. Of course no biometric technology is foolproof, and false positives, in which someone is identified as a criminal based on a biometric match when in fact no such match exists, can have profound consequences for the innocent, as we’ve seen previously with the proliferation of terrorism watch and no-fly lists.
Though facial recognition may sound like an anticrime and security panacea, it is not without its problems. Just as fingerprint sensors can be hacked, so too can the face-printing systems increasingly used to unlock your phone or computer or to gain access to your office. All it takes to defeat some systems, such as those on Lenovo laptops or smart-phone password apps such as FastAccess Anywhere, is to hold up a photograph of the person you wish to impersonate. The same technique has worked with iris scanning, allowing hackers to reverse engineer the biometric information stored in a secure database and use it to print a photographic iris good enough to fool most commercial eye scanners.
Another challenge with facial-recognition algorithms is that even the very best systems are “only” approaching 98–99 percent accuracy. Though the error rate seems small, those errors can add up. Consider a facial-recognition system linked to a terrorist watch list installed at Chicago’s O’Hare International Airport. With 50 million passengers transiting annually, a 1 percent false-positive rate means 500,000 travelers a year (more than 1,300 a day) could be erroneously detained or arrested because of computer error. The problem will almost certainly be compounded by human errors in data entry, as we have seen with the existing security watch lists, leading cameras to correctly match an individual’s face to a mistyped name in a wanted-persons database.
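A few lines make the back-of-the-envelope arithmetic explicit, using the passage’s own figures:

```python
# The O'Hare example: how a "small" error rate compounds at scale.
passengers_per_year = 50_000_000
false_positive_rate = 0.01  # 1 percent of screenings flag the wrong person

false_alarms_per_year = passengers_per_year * false_positive_rate
print(false_alarms_per_year)        # 500000.0 erroneous flags per year
print(false_alarms_per_year / 365)  # ~1370 erroneous flags per day
```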
The consequences of incorrect biometric identification could even prove fatal. The U.S. Department of Defense has begun implementing biometric targeting and recognition capabilities in its drone fleet. The defense contractor Progeny Systems Corporation, working in conjunction with the army, has already developed a drone-mounted “Long Range, Non-cooperative, Biometric Tagging, Tracking, and Location” system that allows an unmanned aerial vehicle (UAV) to positively identify a human target using biometrics prior to detonating ordnance on the target’s head. In such a case, a false-positive biometric identification would be disastrous. The future of warfare is autonomous, with drones hunting, identifying, and killing an enemy based on calculations made by software, not on decisions made by human beings.
Given the growing ubiquity of cameras and facial-recognition software, we can fully expect criminals to adopt these tools and use them to their full advantage. Pedophiles could use biometrics to identify that kid they like at the park playground. For terrorists, such as the Mumbai attackers from Lashkar-e-Taiba, a facial-recognition app on their mobile phones would have allowed them to identify the bank president K. R. Ramamoorthy without having to play guessing games with their command center back in Pakistan as to his identity.
Even Crime, Inc. has begun to explore facial-recognition technologies, according to the commissioner of the Australian Federal Police (AFP). At the 2011 graduation ceremony of hundreds of new AFP recruits, officials spotted a man who stood out from the crowd of families watching their loved ones receive their police badges. The individual had a professional camera with a telephoto lens and appeared to be snapping face pics of all the graduates. Upon his detention and questioning, officials learned the shutterbug was a member of an outlaw motorcycle gang. He was apparently working on behalf of Crime, Inc. to build a photographic facial-recognition database so that fellow gangsters would be able to identify any officer who attempted an undercover investigation against their organization in the future. Biometric tools will have profound implications not just for undercover cops but for witness relocation programs as well. Anybody who has had a prior life that he wishes to conceal, for personal or professional reasons, may find doing so impossible going forward. And it is not just your physical attributes that may betray you; so might your imperceptible behaviors.
On Your Best Behavior
A lot of jobs today are being automated; what happens when you extend that concept to very important areas of society like law enforcement? What happens if you start controlling the behavior of criminals or people in general with software-running machines? Those questions, they look like they’re sci-fi but they’re not.
JOSÉ PADILHA, BRAZILIAN FILM DIRECTOR
When most people think of biometrics, they commonly focus on the measurement of anatomical traits such as our fingers, faces, hands, or eyes. But there is another category of biometrics known as behavioral biometrics, or behaviometrics, which measures the ways in which we and our bodies individually perform or behave, traits that can be just as revealing as human fingerprints. Our keyboard typing rhythms, voices, gait and walking patterns, brain waves, and heartbeats can be quantified in ways that provide unique signatures capable of identifying each of us. Just as anatomical biometrics are being used more frequently for security, identification, and access control, so too will the growing field of behaviometrics be used; in fact, it already is.
Voice biometrics are already being used by companies and call centers around the world to create unique voice prints of customers. That recording you hear while on hold telling you that “your call may be recorded for quality assurance purposes” fails to disclose that one of the ways companies measure call satisfaction is by the tone, tenor, and vocabulary you use during your call. Moreover, in an effort to fight fraud, companies are building vast databases of recorded consumer voices, generating unique voice prints that can be used on future calls to ensure the person on the line matches the original vocal print taken. If the voices don’t match, callers are asked further verification questions, in a process that is completely opaque to the general public.
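A hedged sketch of that enroll-then-verify pattern follows. The feature extraction is reduced to a toy vector and the similarity threshold is invented; real systems derive voice prints from spectral features of the audio.

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    """Compare two feature vectors by the angle between them."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

voice_prints = {}  # caller ID -> enrolled feature vector

def enroll(caller_id: str, features: list) -> None:
    voice_prints[caller_id] = features

def verify(caller_id: str, features: list, threshold: float = 0.8) -> bool:
    """Match the live call against the stored print; on failure, the
    caller would be routed to extra verification questions."""
    enrolled = voice_prints.get(caller_id)
    return enrolled is not None and cosine_similarity(enrolled, features) >= threshold

enroll("cust-42", [0.9, 0.1, 0.4])            # print built from a first call
print(verify("cust-42", [0.85, 0.15, 0.42]))  # True: plausibly the same speaker
print(verify("cust-42", [0.1, 0.9, 0.2]))     # False: ask security questions
```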
DARPA is working to develop “active authentication” techniques focused on a user’s cognitive processes, personal habits, and the patterns we all have for doing things, which in combination can uniquely identify us. One such field of behaviometrics is known as keystroke dynamics: measuring the variances with which we each type individual characters on a keyboard. Minute differences in when each key is hit, in what sequence, with what force, and even in how we cut and paste can serve as our online fingerprints to the world. Companies such as the online education platform Coursera use keystroke recognition to ensure the same student “attends” each virtual class before issuing a certificate of completion.
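To make the idea concrete, here is a minimal sketch of keystroke dynamics under two common assumptions: that key press and release timestamps are available, and that dwell time (how long a key is held) plus flight time (the gap between one key’s release and the next key’s press) form the timing profile. The sample timings and the tolerance are invented.

```python
def timing_profile(events):
    """events: list of (key, press_time, release_time) in milliseconds."""
    dwells = [release - press for _, press, release in events]
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwells + flights

def matches(profile_a, profile_b, tolerance_ms: float = 25.0) -> bool:
    """Accept when every timing feature falls within the tolerance."""
    return all(abs(a - b) <= tolerance_ms for a, b in zip(profile_a, profile_b))

enrolled = timing_profile([("p", 0, 95), ("a", 150, 230), ("s", 290, 370)])
session = timing_profile([("p", 0, 100), ("a", 160, 245), ("s", 310, 385)])
print(matches(enrolled, session))  # True: plausibly the same typist
```

Production systems compare statistical distributions over many keystrokes rather than single samples, but the underlying signal is exactly these dwell and flight times.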