Facial Recognition Technology
By: Max • Research Paper • March 19, 2010
Introduction
The September 11th attacks brought increased urgency to upgrading the nation's security. This crisis has hastened interest in the deployment of biometric systems. Biometric systems identify people by unique physical characteristics, including iris patterns, retinal scans, fingerprints and facial structure.
There are two classifications of Automated Biometric Recognition Technologies: Physiological Biometrics and Behavioral Biometrics. Physiological Biometrics include fingerprinting, hand geometry, iris recognition, retina recognition and facial recognition. Behavioral Biometrics include signature, voice and keystroke dynamics technologies. This study focuses on facial recognition technology and its applications in the aviation environment.
Facial recognition technology has existed for a considerable time. Scientists at universities have been working on facial recognition for over a decade, with financial support from the U.S. Defense Department, which sought a technology that could spot criminals at border crossings. Commercialization of facial recognition software began in the mid-1990s.
Of all biometric technologies, facial recognition has received the most public acceptance, because humans naturally recognize one another by facial characteristics.
Technology
Basic Facial Recognition
In January 2001, the Massachusetts Institute of Technology’s (MIT) publication, Technology Review, placed biometrics on its top ten list of emerging technologies, predicting that the technology would soon have a profound impact on how people live, work and travel.
Coincidentally, an analysis by an internal International Civil Aviation Organization (ICAO) task force, the New Technologies Working Group (NTWG), has identified facial recognition technology as the most likely biometric to be selected for global use.
As with many aspects of intelligence, the process by which humans recognize others is still poorly understood. This complicates the task of instructing a computer to distinguish a human face and extract the required measurements. Unlike DNA or other biometric features, collected facial data is not unique to each person; instead, it is used to narrow searches within acquired databases.
Facial recognition is accomplished in four steps:
1. Sample capture
2. Feature extraction
3. Template comparison
4. Matching
Sample capture involves a short period during which several pictures are taken of the subject’s face. Capturing multiple images increases the likelihood of acquiring a photo capable of being matched.
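As a rough illustration of this step, the sketch below keeps reading camera frames until several face crops have been collected. It assumes the open-source OpenCV library (imported as cv2) and its stock Haar-cascade face detector; these are illustrative choices only and are not part of any system described in this paper.

    import cv2

    def capture_face_samples(num_samples=10):
        # Load OpenCV's bundled frontal-face Haar cascade (an assumed, generic detector).
        detector = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        camera = cv2.VideoCapture(0)  # default camera
        samples = []
        while len(samples) < num_samples:
            ok, frame = camera.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            for (x, y, w, h) in faces:
                samples.append(gray[y:y + h, x:x + w])  # keep only the face region
        camera.release()
        return samples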
Feature extraction is the process by which the distinctive features required by the four types of systems discussed below are extracted into a template.
Template comparison occurs as the newly created template is compared to a database of known templates for matching characteristics.
Matching is the final result of facial recognition, producing templates that exhibit the same characteristics as the scanned individual. These matches are typically displayed in order of similarity, with the most similar appearing first.
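For concreteness, the sketch below walks through steps two through four on already captured, equally sized face crops: each crop is reduced to a fixed-length vector standing in for a template, scored against a database of stored templates, and the known identities are returned most similar first. The flatten-and-normalize "extraction" and the cosine similarity here are placeholders, not the actual features or metric of any particular system.

    import numpy as np

    def extract_template(face_image):
        # Reduce a face crop to a fixed-length, unit-norm vector (placeholder features).
        vec = np.asarray(face_image, dtype=np.float64).ravel()
        return vec / np.linalg.norm(vec)

    def rank_matches(probe_template, database):
        # database maps identity -> stored template of the same length.
        # Returns identities ordered from most to least similar (cosine similarity).
        scores = {name: float(probe_template @ stored)
                  for name, stored in database.items()}
        return sorted(scores, key=scores.get, reverse=True)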
Types of Systems
Eigenfaces
To initially spot a face in a crowd, vector-type facial recognition systems depend on the fact that all human faces have the same primary features: a nose, two eyes, a mouth, etc. Those features in turn may be described in relation to each other. The software compares faces to 128 archetypes it has on record. A face and its digital image are assigned numbers describing these unique relationships, and those numbers are stored within a database. An unknown face can then be analyzed, assigned its own numbers and compared against the stored numbers of known faces, such as police mug shots.
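The description above is essentially that of principal-component analysis applied to face images. A minimal NumPy sketch of how such "archetype" vectors might be built and used follows; the 128 components simply echo the figure quoted in the text, and the code is a generic illustration rather than any vendor's implementation.

    import numpy as np

    def train_eigenfaces(training_faces, num_components=128):
        # training_faces: one flattened, equally sized face image per row.
        mean_face = training_faces.mean(axis=0)
        centered = training_faces - mean_face
        # The right singular vectors of the centered data are the eigenface directions.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return mean_face, vt[:num_components]

    def describe(face, mean_face, eigenfaces):
        # Project a flattened face onto the eigenfaces, yielding its stored numbers.
        return eigenfaces @ (face - mean_face)

    def closest_known_face(face, mean_face, eigenfaces, known_descriptors):
        # known_descriptors maps identity -> descriptor; returns the nearest identity.
        probe = describe(face, mean_face, eigenfaces)
        return min(known_descriptors,
                   key=lambda name: np.linalg.norm(known_descriptors[name] - probe))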
Feature Analysis
Feature analysis is the most widely utilized facial recognition technology. It is closely related to eigenfaces, but is capable of accounting for slight facial changes, such as smiling versus frowning. Small portions of 2-D images, or “building blocks,” are used to summarize different regions of the face. This technology also takes into account the relative location of those blocks and anticipates that movement of