
January 2001 - Volume 3 - Number 1

Research and Technology

Computer-Assisted Facial Image Identification System

Paper presented at the 9th Biennial Meeting of the International Association
for Craniofacial Identification, FBI, Washington, DC, July 24, 2000

Mineo Yoshino
Section Chief

Hideaki Matsuda, Satoshi Kubota, and Kazuhiko Imaizumi
Research Biologists

Sachio Miyasaka
Senior Scientist

First Medico-Legal Section
National Research Institute of Police Science
Chiba, Japan


Introduction

Facial image identification is becoming an important theme in forensic anthropology because surveillance cameras serve as silent witnesses at crime scenes such as convenience stores, banks, and parking garages. Facial image identification is generally approached in three ways: morphological comparison of facial features, anthropometrical analysis, and face-to-face superimposition (Iscan 1993).

To assess two facial images, the video superimposition technique has been applied to facial image comparison (Kubota et al. 1997; Maples and Austin 1992; Vanezis and Brierley 1996; Yoshino et al. 1996). Maples and Austin (1992) reported that the video superimposition technique was useful in cases in which laboratory personnel could photograph a suspect at the correct position relative to the camera.

Vanezis and Brierley (1996) applied the video superimposition technique to identify the facial images of suspects in 46 criminal cases. They stated that direct comparisons could be made in 36 cases, including 20 cases with a major viewpoint discrepancy.

As described previously, comparing facial images taken with a surveillance camera with mug shots of suspects is often difficult because surveillance cameras usually look down on the scene, whereas mug shots are frontal and lateral or oblique images. To solve this problem, the authors developed a face-to-face video superimposition system using 3D physiognomic analysis (Yoshino et al. 1996). This system was a useful tool for facial image identification because the video superimposition of two facial images could be performed under the same facial orientation.

Facial images can play a useful role in the identification of criminals (Kubota et al. 1997; Linney and Coombes 1998; Proesmans and Van Gool 1998; Yoshino et al. 1996). Despite this advantage, several problems remained in the old system, notably the long operation time and the lack of a means for anthropometrical analysis.

With these problems in mind, the authors built a new computer-assisted facial image identification system using a 3D physiognomic range finder (Yoshino et al. 2000). The new system enables morphological comparison, anthropometrical analysis, and reciprocal points matching on face-to-face superimposition images. This article focuses on the reliability of facial image comparison with the computer-assisted facial image identification system.


Equipment and Operation Method

This system consists of a 3D physiognomic range finder and a computer-assisted facial image superimposition unit (Figure 1). The 3D range finder is composed of a detector for measuring the facial surface and its control computer. The detector has two phase-shifting sinusoidal grating projection devices and two charge-coupled device (CCD) cameras positioned at the left- and right-hand sides of the apparatus. The computer-assisted facial image superimposition unit consists of a host computer running proprietary software, a flat surface color display, and a color image scanner for inputting a 2D facial image of the criminal. Detailed specifications of each instrument have been described elsewhere (Yoshino et al. 2000).

The 3D morphology of the face is captured by the range finder, which is based on sinusoidal grating projection with the phase-shift method, in 2.5 seconds with an accuracy on the order of 0.16 mm. The 3D facial image data are stored on a magneto-optical (MO) disk (about 6 MB per person) or transferred directly to the facial image superimposition unit over the network.
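The article does not detail the phase-shift calculation itself, so the following is only a minimal illustrative sketch of the standard four-step phase-shifting approach to fringe projection; the number of phase steps, the calibration, and all names below are assumptions rather than the system's actual implementation.

    # Sketch of four-step phase-shift fringe analysis (illustrative only).
    import numpy as np

    def wrapped_phase(i1, i2, i3, i4):
        """Recover the wrapped phase from four fringe images shifted by 0,
        pi/2, pi, and 3*pi/2. The result lies in (-pi, pi]."""
        return np.arctan2(i4 - i2, i1 - i3)

    def phase_to_height(phase, calibration_factor):
        """Convert unwrapped phase to height using a hypothetical linear
        calibration factor; a real system derives this from the projector
        and camera geometry."""
        unwrapped = np.unwrap(np.unwrap(phase, axis=0), axis=1)
        return calibration_factor * unwrapped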

To make the comparison between the 3D facial image of a suspect and the 2D facial image taken at the scene of a crime, the 3D facial image is first reproduced on the display of the host computer from the MO disk. Then the 2D facial image is taken with the color image scanner and stored within the computer (Figure 2, A and B).

The scaling of the facial image is performed by converting the original 3D measurement data into the number of pixels on the display. In this system, the perspective distortion of the 3D facial image is electronically corrected by taking into account the distance between the face and camera at the crime scene. For the superimposition of the 3D and 2D facial images, the 3D facial image is adjusted exactly to match the orientation and size of the 2D facial image under the fine framework mode.
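The correction formula is not published in the article; as an illustration only, the sketch below shows how 3D facial points (in mm) might be projected to display pixels with a pinhole-camera model for a given face-to-camera distance. All parameter names are hypothetical.

    # Sketch of perspective projection for scaling a 3D facial image
    # (illustrative; not the system's actual correction).
    import numpy as np

    def project_points(points_mm, camera_distance_mm, focal_length_mm, pixels_per_mm):
        """Project Nx3 facial points (x, y, z in mm, z measured toward the
        camera) onto an image plane and convert the result to pixels."""
        pts = np.asarray(points_mm, dtype=float)
        depth = camera_distance_mm - pts[:, 2]      # distance of each point from the camera
        scale = focal_length_mm / depth             # perspective foreshortening per point
        xy_mm = pts[:, :2] * scale[:, None]         # coordinates on the image plane (mm)
        return xy_mm * pixels_per_mm                # convert millimeters to pixels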

After the orientation and size of both images have been determined, the fine framework mode of the 3D facial image is converted to the fine texture image (Figure 2C). The shape and positional relationships of the facial components in the 3D and 2D facial images are examined in the fade-out or wipe image mode (Figure 3).
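The fade-out and wipe displays are simple compositing operations; a minimal array-based sketch is given below for illustration (the actual software is proprietary, and the function names are assumptions).

    # Sketch of fade-out and wipe compositing of two aligned facial images.
    import numpy as np

    def fade_out(img_a, img_b, alpha):
        """Cross-fade two equally sized images; alpha = 1 shows only img_a."""
        return (alpha * img_a + (1.0 - alpha) * img_b).astype(img_a.dtype)

    def vertical_wipe(img_a, img_b, split_column):
        """Show img_a to the left of split_column and img_b to the right,
        as in the vertical wipe image of Figure 3A."""
        out = img_b.copy()
        out[:, :split_column] = img_a[:, :split_column]
        return out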

In this system, 18 points are plotted on the 3D and 2D facial images for evaluating the anthropometrical data and the reciprocal points matching between the two images (Figure 4, A and B). The distance between two selected points and the angle formed by three selected points on the 3D and 2D facial images are automatically measured and shown in a column on the right side of the display (Table 1).
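These measurements reduce to elementary coordinate geometry; the sketch below illustrates the two quantities reported in Table 1 (the distance between two plotted points and the angle at the middle point of three), using hypothetical point coordinates.

    # Sketch of the automatic distance and angle measurements.
    import numpy as np

    def point_distance(p, q):
        """Euclidean distance between two plotted anthropometrical points."""
        return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

    def angle_at(p, vertex, q):
        """Angle (degrees) formed at 'vertex' by the segments vertex-p and vertex-q."""
        u = np.asarray(p, float) - np.asarray(vertex, float)
        v = np.asarray(q, float) - np.asarray(vertex, float)
        cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))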

Figure 1. Facial image identification system.
A = 3D physiognomic range finder
B = control computer
C = host computer for superimposition
D = flat surface color display
E = color image scanner



Figure 2. Comparison between the 3D and 2D facial images.
A = frontal fine texture image reproduced from the 3D physiognomic data
B = 2D facial image taken with the color image scanner
C = 3D facial image adjusted to the orientation and size of B



Figure 3. Face-to-face superimposition of the 3D and 2D facial images.
A = vertical wipe image
B = horizontal wipe image



Figure 4. Plotting the anthropometrical points on the 3D (A) and 2D (B) facial images. Eleven points are closely consistent with each other.
C = superimposition image of A and B

The selected points on the 3D and 2D facial images are superimposed on the basis of a standard point, and the reciprocal point-to-point differences between the two images are compared (Figure 4C). The distance between each pair of corresponding anthropometrical points on the two images is calculated from the coordinate values.
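A minimal sketch of this reciprocal points matching step is given below: both point sets are translated so that the chosen standard point coincides, and the mean of the point-to-point distances is the average distance used later as the matching criterion. The units and the exact computation are assumptions based on the description above.

    # Sketch of reciprocal points matching and the average-distance criterion.
    import numpy as np

    def average_distance(points_a, points_b, standard_index):
        """points_a and points_b are Nx2 coordinates of the same anthropometrical
        points plotted on the 3D-derived image and the 2D image, respectively."""
        a = np.asarray(points_a, float)
        b = np.asarray(points_b, float)
        a = a - a[standard_index]               # superimpose on the standard point
        b = b - b[standard_index]
        diffs = np.linalg.norm(a - b, axis=1)   # reciprocal point-to-point differences
        return float(diffs.mean())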


Experimental Study

The 3D facial data of 25 Japanese male examinees were obtained using the 3D physiognomic range finder. The 2D left oblique facial images of the examinees were taken with a digital still camera (Nikon DS-505A, 50 mm, f/1.4) at distances of 1.5 m to 2.5 m. For evaluating the match of the 3D and 2D facial images of the same person, the 3D facial image of each examinee was compared to his 2D facial image ten times, yielding 250 superimpositions.

In the different-person case, the 3D facial image of each of the 25 examinees was compared to the 2D facial images of the other 24 examinees, yielding 600 superimpositions. As shown in Figure 5, 16 of the 18 points were selected for this study. The selected points were plotted on the 3D and 2D facial images and then superimposed on the basis of the subnasale (Figure 6). Table 2 shows the reciprocal point-to-point differences for the 16 points in Figure 6. The average distance obtained from the 16 reciprocal point-to-point differences between the two images was used as the matching criterion, and its threshold was determined.

To assess the suitability of the threshold for a true positive identification, a model case in which the 2D facial image of one examinee is identified from among the 3D facial images of the 25 examinees was investigated experimentally. An oblique facial image of Examinee 2, taken with the digital still camera from a distance of 5 m, was used as the target (Figure 7). The quality of this 2D facial image was comparable to that of images submitted in actual cases. The 2D facial image of the target person was compared with each of the 3D facial images of the 25 examinees.


Results

Table 3 shows descriptive statistics for the average distance of the reciprocal points between the 3D and 2D facial images of the same person for the 25 examinees. The data show that the measuring system for the reciprocal point-to-point differences, including the determination of the anthropometrical points, was reproducible and reliable.


Figure 5. Anthropometrical points on the 3D facial image. Sixteen anthropometrical points are used in this experimental study.
1 = right entocanthion (r-en)
2 = left entocanthion (l-en)
3 = right ectocanthion (r-ex)
4 = left ectocanthion (l-ex)
5 = right alare (r-al)
6 = left alare (l-al)
7 = subnasale (sn)
8 = stomion (sto)
9 = right cheilion (r-ch)
10 = left cheilion (l-ch)
11 = right zygion (r-zy)
13 = right gonion (r-go)
15 = gnathion (gn)
16 = left superaurale (l-sa)
17 = left subaurale (l-sba)
18 = left tragion (l-t)

Table 4 shows the descriptive statistics for the average distance in the same-person and different-person superimpositions. The average distance ranged from 1.4 to 3.3 in the same-person superimpositions and from 2.6 to 7.0 in the different-person superimpositions, with mean values of 2.3 and 4.7, respectively. The difference between the means was significant at the 0.001 level (t = 37.8, df = 848).
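The reported statistic is consistent with a pooled-variance two-sample t test on the 250 same-person and 600 different-person average distances (df = 250 + 600 - 2 = 848). As an illustration only (the raw distances are not reproduced here), such a test can be sketched as follows.

    # Sketch of a pooled-variance two-sample t test on the average distances.
    from scipy import stats

    def compare_groups(same_person, different_person):
        """same_person and different_person are sequences of average distances
        from the 250 same-person and 600 different-person superimpositions."""
        t, p = stats.ttest_ind(same_person, different_person, equal_var=True)
        return t, p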

The false positive (FP)/false negative (FN) plots for the 3D and 2D facial image identification based on the average distance are shown in Figure 8. The average distance and percentage error at the FP/FN crossover point were 3.1 and 4.2 percent, respectively. To eliminate false positive identifications, the threshold of the average distance for a true positive identification must be reduced to 2.5.
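The crossover point can be located by sweeping the decision threshold and tracking both error rates; the sketch below illustrates this under the assumption that a superimposition is accepted when its average distance does not exceed the threshold (hypothetical function and variable names).

    # Sketch of locating the FP/FN crossover by sweeping the threshold.
    import numpy as np

    def fp_fn_crossover(same_person, different_person, thresholds):
        """Return the threshold at which the false positive and false negative
        rates are closest, together with those rates."""
        same = np.asarray(same_person, float)
        diff = np.asarray(different_person, float)
        best = None
        for t in thresholds:
            fn = np.mean(same > t)     # same person wrongly rejected
            fp = np.mean(diff <= t)    # different person wrongly accepted
            if best is None or abs(fp - fn) < abs(best[1] - best[2]):
                best = (t, fp, fn)
        return best

    # Example: fp_fn_crossover(same_dists, diff_dists, np.arange(1.0, 7.0, 0.05))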

Table 5 shows the average distance in the superimposition images of the 25 examinees in the model case. Although Examinees 2, 5, and 19 fell below the FP/FN crossover point, only Examinee 2 showed an average distance below the threshold for a true positive identification (Figure 9). Consequently, the 2D facial image of the target person was identified as Examinee 2 with scientific certainty.


Discussion

Facial image identification is carried out to determine whether a facial image from the scene of a crime is that of a suspect. Although morphological comparison of facial components is mainly used for identifying facial images in casework, the orientation of the crime scene facial image differs from that of the suspect's image in most cases. Therefore, the examiner should consider any discrepancy in angulation when deciding whether a dissimilarity between facial components is real or due to the difference in orientation. In such cases, indices based on facial measurements cannot be used as indicators for comparing the two images.

The computer-assisted facial image identification system using the 3D physiognomic range finder was developed to solve the problems described previously (Yoshino et al. 2000). In this system, sinusoidal grating projection with the phase-shift method was introduced for measuring the 3D morphology of the face, so that the operation time for obtaining 3D physiognomic data was reduced by one fourth compared with that of the old system (Yoshino et al. 1996).

Figure 6. Superimposition of the selected anthropometrical points on the 3D and 2D facial images.
A = 3D facial image
B = 2D facial image
C = superimposition image of A and B



Figure 7. The 2D facial image of the target person (Examinee 2).


Figure 8. FP/FN plots for facial image identification. The average distance and percentage error at the FP/FN crossover point are 3.1 and 4.2 percent, respectively.
FP = false positive
FN = false negative



Figure 9. Superimposition of the 2D facial image of the target person and the 3D facial images of the examinees. The average distance is shown in the lower right corner of each facial image.
A = Examinee 2
B = Examinee 5
C = Examinee 9
D = Examinee 12
E = Examinee 17
F = Examinee 19
G = Examinee 22
H = Examinee 25

Thus, it was suggested that the 3D physiognomic range finder could be applied to scanning a suspect's face (Yoshino et al. 2000). In this physiognomic range finder, absolute range measurements are obtained from the geometrical relationship between the two cameras and the projector, which makes anthropometrical analysis possible. As shown in Table 1, anthropometrical measurements of the 3D and 2D facial images can be made quickly on the display and their data compared.

Catterick (1992) applied an image-processing system to the recognition of facial photographs using two indices calculated from three midline facial measurements. He explained that the measurement data would objectively support the morphological findings, although the discriminating power based on facial measurements would be limited. Anthropometrical analysis would improve the reliability of facial identification judgments when sunglasses hide facial components such as the eyes and eyebrows, as shown in Figure 4.

Bajnoczky and Kiralyfalvi (1995) used the differences between the coordinate values of eight to twelve pairs of anthropometrical points on the skull and the face for judging the match between the skull and facial images by the superimposition technique. They noted that their method is suitable for filtering out false positive identifications.

In our study, the average distance obtained from the 16 reciprocal point-to-point differences between the 3D and 2D facial images was used as the matching criterion. The average distance and percentage error at the FP/FN crossover point were 3.1 and 4.2 percent, respectively.

Although it is a fundamental requirement of forensic science that an identification method yield extremely high rates of true positive and true negative decisions, it is also important that the method not produce a high proportion of false positive identifications. As shown in the model case, two examinees would have been identified as false positives if the average distance at the FP/FN crossover point had been used as the threshold. Considering this result and the concept described previously, the threshold of the average distance must be set at 2.5 to avoid false positive identifications. Facial image comparison using reciprocal points matching proved reliable when the threshold of the average distance was set at 2.5.

In conclusion, this facial image identification system involving morphological comparison, anthropometrical analysis, and reciprocal points matching will provide accurate and reliable identification.

References

Bajnoczky, I. and Kiralyfalvi, L. A new approach to computer-aided comparison of skull and photograph, International Journal of Legal Medicine (1995) 108:157–161.

Catterick, T. Facial measurements as an aid to recognition, Forensic Science International (1992) 56:23–27.

Iscan, M. Y. Introduction to techniques for photographic comparison: Potential and problems. In: Forensic Analysis of the Skull. Eds. M. Y. Iscan and R. P. Helmer. Wiley-Liss, New York, 1993.

Kubota, S., Matsuda, H., Imaizumi, K., Miyasaka, S., and Yoshino, M. Anthropometric measurement and superimposition technique for facial image comparison using 3D morphologic analysis, Report of National Research Institute of Police Science (1997) 50:88–95 (in Japanese).

Linney, A. and Coombes, A. M. Computer modelling of facial form. In: Craniofacial Identification in Forensic Medicine. Eds. J. G. Clement and D. L. Ranson. Arnold, London, 1998.

Maples, W. R. and Austin, D. E. Photo/Video Superimposition in Individual Identification of the Living. Presented at the 44th Annual Meeting of American Academy of Forensic Sciences, New Orleans, Louisiana, February 17–22, 1992.

Proesmans, M. and Van Gool, L. Getting facial features and gestures in 3D. In: Face Recognition. Eds. H. Wechsler et al., pp. 288–309, Springer, Berlin, 1998.

Vanezis, P. and Brierley, C. Facial image comparison of crime suspects using video superimposition, Science & Justice (1996) 36:27–33.

Yoshino, M., Kubota, S., Matsuda, H., Imaizumi, K., Miyasaka, S., and Seta, S. Face-to-face video superimposition using three dimensional physiognomic analysis, Japanese Journal of Science and Technology for Identification (1996) 1:11–20.

Yoshino, M., Matsuda, H., Kubota, S., Imaizumi, K., and Miyasaka, S. Computer-assisted facial image identification system using 3D physiognomic range finder, Forensic Science International (2000) 109:225–237.