This is the third post in the series on iris acquisition for biometrics. In the first and second posts we saw that, at least in theory, iris recognition is an ideal biometric, and we went through some of the desirable properties of an iris acquisition system. However, most current iris recognition systems require a single subject to stand (or move slowly) at a fixed standoff distance from the camera in order to capture and process iris images. Wouldn't it be nice if iris recognition could be performed simultaneously for a group of people standing or moving within a large volume? Such systems could potentially be used in crowded places such as airports, stadiums, and railway stations.
In this post, we will look at one of the limitations of current iris recognition systems: the limited depth of field. We will examine the fundamental cause of this limitation and how some current systems address it.
The problem of DOF
The inability of any conventional imaging system to capture sharp images within a large volume is illustrated in Figure 1.
Perfect imaging corresponds to the ability of an imager to produce a scaled replica of an object in the image space. When only a small portion of the light wave emerging from an infinitesimally small point source is collected through the finite opening of a camera's aperture (Figure 2(a)), the replica in the image space is not exact even in the absence of aberrations; instead, the image of the point spreads out in space due to diffraction at the aperture. This dispersed response in the three-dimensional image space is called the point spread function (PSF).

The spread of the PSF along the transverse (xy) direction, the 2D PSF, restricts an imager's ability to resolve fine detail (high spatial frequencies) in the image. For an extended object, which is made up of many points, the 2D PSF smears the responses from neighboring points into each other, causing blur. Similarly, the spread along the longitudinal (z) direction limits the ability to discriminate points staggered closely along the optical axis, creating a region of uncertainty. However, this extension of the 3D PSF along the optical axis also enables multiple spatially separated objects (or points) within a volume of the object space to form acceptably sharp images at once. Conversely, a point object may be placed anywhere within this zone and still form a satisfactory image.

This zone of tolerance in the object space is called the depth of field; the corresponding zone in the image space is called the depth of focus. In this post, the acronym "DOF" is used for both wherever the meaning is apparent from context. In the image space, the DOF is defined as the region of the 3D PSF where the intensity is above 80% of the central maximum [3,4]. This zone has the shape of a prolate spheroid.
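As a small numerical aside, the transverse (2D) PSF of an aberration-free circular aperture is the familiar Airy pattern, whose first dark ring sets the lateral resolution limit. The sketch below assumes a wavelength of 550 nm and an f/5 aperture purely for illustration; neither value comes from the text.

```python
import numpy as np

# Transverse cut of the aberration-free 2D PSF of a circular aperture:
# the Airy pattern I(r) = [2*J1(x)/x]^2 with x = pi*r/(lambda*N), where
# N is the working f-number.  Its first dark ring, at r ~ 1.22*lambda*N,
# sets the smallest resolvable detail.  lambda = 550 nm and N = 5 are
# illustrative assumptions, not values from the post.
lam, N = 550e-9, 5.0                      # wavelength (m), f-number

def j1_bessel(x):
    """First-order Bessel function via its integral representation,
    J1(x) = (1/pi) * integral_0^pi cos(tau - x*sin(tau)) dtau."""
    tau = np.linspace(0.0, np.pi, 2001)
    vals = np.cos(tau[None, :] - np.outer(x, np.sin(tau)))
    dtau = tau[1] - tau[0]
    # Trapezoidal rule along the tau axis
    return (vals.sum(axis=1) - 0.5 * (vals[:, 0] + vals[:, -1])) * dtau / np.pi

r = np.linspace(1e-9, 10e-6, 1500)        # radius in the focal plane (m)
x = np.pi * r / (lam * N)
airy = (2 * j1_bessel(x) / x) ** 2        # normalized so I(0) ~ 1

# First dark ring: the first radius where the intensity turns back up
first_min = np.argmax(np.diff(airy) > 0)
print(f"first zero at {r[first_min] * 1e6:.2f} um")  # ~1.22 * lambda * N
```

The printed radius should land close to the textbook value 1.22·λ·N (about 3.4 µm here); everything inside that main lobe is, in effect, one unresolvable blur spot.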
In the absence of aberrations, the maximum intensity occurs at the geometric focal point, where the contributions from all parts of the pupil are in phase. Figure 2(b) shows the aberration-free axial intensity distribution, I(z), as a function of defocus about the geometric focal point for a light source placed 100 mm from a lens of 25 mm focal length and 5 mm aperture diameter. The expression for the distribution, normalized so that the peak intensity I(0) equals unity, is obtained using scalar diffraction theory and paraxial assumptions.
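The axial distribution of Figure 2(b) can be sketched numerically. Under the scalar-diffraction, paraxial assumptions above, the on-axis intensity is the standard result I(u) = [sin(u/4)/(u/4)]², with defocus parameter u = (2π/λ)·NA′²·Δz. The wavelength (550 nm) is an assumption; the post does not state one.

```python
import numpy as np

# On-axis intensity near focus for an aberration-free lens (scalar
# diffraction, paraxial): I(u) = [sin(u/4)/(u/4)]^2, where the defocus
# parameter is u = (2*pi/lambda) * NA'^2 * dz.  Geometry from the post:
# point source at 100 mm, focal length 25 mm, aperture diameter 5 mm.
# The 550 nm wavelength is an assumption.
f, d_obj, D = 25.0, 100.0, 5.0            # mm
lam = 550e-6                              # mm (assumed)

d_img = 1.0 / (1.0 / f - 1.0 / d_obj)     # lens equation: ~33.33 mm
na_img = (D / 2) / d_img                  # image-side numerical aperture

dz = np.linspace(-0.3, 0.3, 20001)        # defocus about focus (mm)
u = (2 * np.pi / lam) * na_img**2 * dz
# np.sinc(x) = sin(pi*x)/(pi*x), so sin(u/4)/(u/4) = np.sinc(u/(4*pi))
intensity = np.sinc(u / (4 * np.pi))**2   # normalized so that I(0) = 1

# Depth of focus by the 80%-of-peak criterion used in the post
in_focus = dz[intensity >= 0.8]
depth_of_focus = in_focus.max() - in_focus.min()
print(f"image-space depth of focus ~ {depth_of_focus:.3f} mm")
```

With these numbers the 80% zone is roughly 0.1 mm long in image space; since it scales as 1/NA′², stopping the lens down stretches it quickly.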
The shape (length and breadth) of the 80% intensity region (Figure 2(b)) dictates the quality of the image acquired by an imager, in terms of both lateral spatial resolution and DOF.
A first-order optical simulation demonstrating the effect of the DOF on image acquisition at varying depths is shown in Figure 3. For this simulation, a 100 mm focal-length, f/5 lens focused at 1300 mm is used. In this setup, the imager has a DOF of about 9.5 mm in the object space (calculated by applying the lens equation to the extremes of the DOF in image space). As can be seen, the irises located outside the DOF region are severely blurred. It has been shown in [5,6] that the performance of iris recognition deteriorates quickly with increasing amounts of defocus in the captured iris images.
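As a rough cross-check of that object-space DOF figure, here is a minimal sketch that takes the image-space depth of focus as ≈ λ/NA′² (consistent with the 80% criterion) and maps its two extremes back through the lens equation. The wavelength of 550 nm is an assumption, since the post does not state one.

```python
# Object-space DOF for the Figure 3 setup: a 100 mm, f/5 lens focused
# at 1300 mm.  The image-space depth of focus is approximated as
# lambda/NA'^2 (80%-intensity criterion), and the lens equation maps
# its extremes back to object distances.  lambda = 550 nm is assumed.
f, d_obj, fnum = 100.0, 1300.0, 5.0       # mm, mm, f-number
lam = 550e-6                              # mm (assumed)

D = f / fnum                              # aperture diameter: 20 mm
d_img = f * d_obj / (d_obj - f)           # lens equation: ~108.33 mm
na_img = (D / 2) / d_img                  # image-side NA: ~0.092

dof_img = lam / na_img**2                 # image-space depth of focus
# Map both extremes of the focus zone back through the lens equation;
# a longer image distance corresponds to a nearer object.
near = f * (d_img + dof_img / 2) / (d_img + dof_img / 2 - f)
far = f * (d_img - dof_img / 2) / (d_img - dof_img / 2 - f)
dof_obj = far - near
print(f"object-space DOF ~ {dof_obj:.1f} mm")   # ~9.3 mm
```

The result, about 9.3 mm, is in good agreement with the ~9.5 mm quoted above given the assumed wavelength.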
In conventional imaging systems, an increase in DOF is achieved by making the system aperture smaller. However, stopping down the aperture to increase DOF is not a good solution for iris recognition, because the gain in DOF is accompanied by a loss of optical resolution and a loss of light. As shown in Figure 4, decreasing the size of the aperture elongates the PSF along the optical axis, which yields a larger DOF in the object space; however, the PSF also spreads along the transverse direction, degrading the lateral optical resolution. Under paraxial, diffraction-limited assumptions, the relation between the DOF and the lateral optical resolution limit (δx) is as follows:

DOF ∝ λ / NA² and δx ∝ λ / NA, so that DOF ∝ (δx)² / λ,

where λ is the wavelength and NA is the numerical aperture of the system.
Since the collected light scales with the aperture area (that is, with NA²) while the DOF scales with 1/NA², the above relation also implies that an n-fold increase in DOF results in an n-fold loss of light. The loss of light results in a decrease in the system SNR.
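The three-way trade-off can be made concrete with a toy calculation using the paraxial diffraction-limited proportionalities above (DOF ∝ λ/NA², δx ∝ λ/NA, collected light ∝ NA²); the numerical aperture values are arbitrary illustrations.

```python
# Scaling sketch of the stop-down trade-off (paraxial, diffraction-
# limited): DOF ~ lambda/NA^2, lateral resolution dx ~ lambda/NA, and
# collected light ~ NA^2.  Halving the aperture therefore buys a
# 4-fold DOF at the cost of 2-fold coarser resolution and 4-fold
# less light.  The NA values below are arbitrary illustrations.
lam = 550e-9                   # wavelength in m (assumed)

def imaging_tradeoff(na):
    dof = lam / na**2          # depth of field (up to a constant)
    dx = lam / na              # smallest resolvable detail
    light = na**2              # relative light-gathering power
    return dof, dx, light

dof1, dx1, light1 = imaging_tradeoff(0.10)
dof2, dx2, light2 = imaging_tradeoff(0.10 / 2)   # aperture halved
print(dof2 / dof1, dx2 / dx1, light2 / light1)   # 4.0 2.0 0.25
```

In other words, an n-fold DOF gain costs a √n-fold loss in lateral resolution and an n-fold loss of light, exactly the trade-off described above.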
There are various techniques for increasing the capture volume for iris acquisition, although none of them is perfect yet. Examples include the use of multiple cameras (with both time and spatial multiplexing), large telescopic lenses, pan-tilt-zoom camera systems, and computational imaging techniques such as wavefront coding. Figure 5 is a schematic showing some state-of-the-art iris acquisition systems and their capture volumes.
(The figures in this post were generated using Matplotlib (a Python plotting library), Mayavi (a Python 3D visualization library), and Blender.)
Links to posts in this series
- Primer on iris recognition
- Desirable properties of iris acquisition systems
- The DOF problem in iris acquisition systems (this post)
References

1. J. W. Goodman, Introduction to Fourier Optics (Roberts & Co., 2005).
2. Z. Zalevsky, "Extended depth of focus imaging: a review," 018001 (2010).
3. M. Born and E. Wolf, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light (Cambridge Univ. Press, 2010).
4. H. Gross, Handbook of Optical Systems: Aberration Theory and Correction of Optical Systems (Wiley-VCH, 2007).
5. N. D. Kalka, J. Zuo, N. A. Schmid, and B. Cukic, "Image quality assessment for iris biometric," in Defense and Security Symposium (2006), p. 62020D.
6. N. Sazonova, S. Schuckers, P. Johnson, P. Lopez-Meyer, E. Sazonov, and L. Hornak, "Impact of out-of-focus blur on iris recognition," in SPIE Defense, Security, and Sensing (2011), p. 80291S.
7. W. T. Welford, "Use of Annular Apertures to Increase Focal Depth," 749–752 (1960).