# The DOF problem in iris acquisition systems

This is the third post in the series on iris acquisition for biometrics. In the first and second posts we saw that, at least in theory, iris recognition is an ideal biometric, and we went through some of the desirable properties of an iris acquisition system. However, most current iris recognition systems require a single subject to stand (or move slowly) at a certain standoff distance from the camera in order to capture and process iris images. Wouldn't it be nice if iris recognition could be performed simultaneously for a group of people standing or moving within a large volume? Such systems could potentially be used in crowded places such as airports, stadiums, and railway stations.

In this post, we will look at one of the limitations of current iris recognition systems: the limited depth of field. We will examine the fundamental cause of this limitation and how some current systems are addressing it.

## The problem of DOF

The inability of any conventional imaging system to capture sharp images within a large volume is illustrated in Figure 1.

Figure 1 Depth of field (DOF) problem. Image of three human-figure cut-outs with sinusoidal patterns (2 lp/mm) and artificial irises, placed 11 cm apart. The camera, with a lens of 80 mm focal length and f/5 aperture, was focused on the middle cut-out (3.6 meters away from the camera). It is evident that the spatial resolution in the image falls off rapidly with increasing distance from the plane of sharp focus (the middle cut-out), preventing the camera from resolving fine details uniformly across the imaging volume.

Perfect imaging corresponds to the ability of an imager to produce a scaled replica of an object in the image space [1]. When only a small portion of the light wave emerging from an infinitesimally small point source is collected through the finite opening of a camera's aperture (Figure 2(a)), the replica in the image space is not exact even in the absence of aberrations; instead, the image of the point spreads out in space due to diffraction at the aperture. This dispersed response in the three-dimensional image space is called the point spread function (PSF). The spread of the PSF in the transverse (xy) direction (the 2D PSF) restricts an imager's ability to resolve fine detail (spatial frequency) in the image: for an extended object, which is made up of many points, the 2D PSF smears the responses from neighboring points into each other, causing blur. Similarly, the spread along the longitudinal (z) direction limits the ability to discriminate points staggered closely along the optical axis, creating a region of uncertainty. However, this same axial extension of the 3D PSF enables multiple spatially separated objects (or points) within a volume in object space to form acceptably sharp images at once. Conversely, a point object may be placed anywhere within this zone and still form a satisfactory image. This zone of tolerance in the object space is called the depth of field; the corresponding zone in the image space is called the depth of focus [2]. In this post, the acronym "DOF" is used for both wherever its meaning is apparent from the context. In the image space, the DOF is defined as the region of the 3D PSF where the intensity is above 80% of the central maximum [3,4]; this zone has the shape of a prolate spheroid.
In the absence of aberrations, the maximum intensity occurs at the geometric focal point, $z_g$, where the contributions from all parts of the pupil are in phase. Figure 2(b) shows the aberration-free intensity distribution, $I_n(r, \delta z)$, as a function of defocus $\delta z = z_i - z_g$ about the geometric focal point for a light source placed 100 millimeters from a lens with a focal length of 25 mm and an aperture diameter of 5 mm. The expression for the distribution—normalized to make $I_n(0,0)$ equal to unity—is obtained using scalar diffraction theory and paraxial assumptions.
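Under the same paraxial, aberration-free assumptions, the on-axis slice of this distribution reduces to the classic $[\sin(u/4)/(u/4)]^2$ profile, and the 80% zone can be found numerically. The sketch below (mine, not the post's code) does this for the 25 mm, f/5 lens with the source at 100 mm; the 550 nm wavelength is an assumption.

```python
import numpy as np

# Sketch: on-axis intensity of an aberration-free circular-pupil lens under
# scalar diffraction theory and the paraxial assumption is
#   I_n(0, dz) = [sin(u/4) / (u/4)]^2,  u = pi * dz / (2 * lam * N^2),
# with N the working f-number. Lens geometry is from the post; lam is assumed.

f_mm   = 25.0      # focal length
D_mm   = 5.0       # aperture diameter
s_mm   = 100.0     # source distance on the object side
lam_mm = 550e-6    # assumed wavelength: 550 nm, expressed in mm

# Thin-lens equation gives the geometric focus z_g on the image side.
v_mm = 1.0 / (1.0 / f_mm - 1.0 / s_mm)   # image distance, ~33.33 mm
N    = v_mm / D_mm                        # working f-number, ~6.67

def axial_intensity(dz):
    """Normalized on-axis intensity as a function of defocus dz (mm)."""
    x = np.pi * dz / (8.0 * lam_mm * N * N)   # this is u/4
    return np.sinc(x / np.pi) ** 2            # np.sinc(t) = sin(pi t)/(pi t)

# Bisection for the defocus at which the intensity falls to 80%.
lo, hi = 0.0, 8.0 * lam_mm * N * N
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if axial_intensity(mid) > 0.8 else (lo, mid)
half_dof = 0.5 * (lo + hi)

print(f"working f-number ~ {N:.2f}")
print(f"80% depth of focus ~ +/- {half_dof*1e3:.1f} um "
      f"(~ {half_dof/(lam_mm*N*N):.2f} * lam * N^2)")
```

The half-width comes out close to the textbook $\pm 2\lambda (F/\#)^2$ rule of thumb, which is why the DOF in Figure 2(b) is so shallow.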

Figure 2 Incoherent impulse response and DOF. (a) The image A' of a point source A spreads out in space, forming a zone of tolerance called the depth of focus (DOF) in the image space; (b) the normalized focal intensity distribution of the 3D PSF of a 25 mm, f/5 lens imaging an axial point source at a distance of 100 mm. The expression for the 3D PSF was obtained for a circular aperture using scalar diffraction theory and paraxial assumptions. The DOF, which has a prolate spheroidal shape, is defined as the region within which the intensity is above 80% of the intensity at the geometric focal point. The figure shows iso-surfaces at the 0.8, 0.2, 0.05, and 0.01 intensity levels. The ticks on the left vertical side indicate the locations of the first zeros of the Airy pattern in the focal plane. The vertical axis has been exaggerated 10 times in order to improve the display of the distribution.

The shape—length and breadth—of the 80% intensity region (Figure 2(b)) dictates the quality of the image acquired by an imager in terms of lateral spatial resolution and DOF.

A first-order optical simulation demonstrating the effect of the DOF on image acquisition at varying depths is shown in Figure 3. The simulation uses a 100 mm focal-length, f/5 lens focused at 1300 mm. In this setup, the imager has a DOF of about 9.5 mm in the object space (calculated by applying the lens equation to the extremes of the DOF in image space). As can be seen, irises located outside the DOF region are severely blurred. It has been shown in [5,6] that iris recognition performance deteriorates quickly with increasing amounts of defocus in the captured iris images.
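The object-space figure can be checked with a back-of-the-envelope calculation (my sketch, not the post's code): take the diffraction depth-of-focus extremes $\pm 2\lambda N^2$ in image space and map them back through the thin-lens equation. A wavelength of 550 nm is assumed.

```python
# Map the image-space depth-of-focus extremes back to object space through
# the thin-lens equation. Lens parameters are from the post; lam is assumed.

f_mm, D_mm, s_mm = 100.0, 20.0, 1300.0   # 100 mm f/5 lens focused at 1300 mm
lam_mm = 550e-6                          # assumed wavelength (550 nm) in mm

v_mm = 1.0 / (1.0 / f_mm - 1.0 / s_mm)   # image distance, ~108.33 mm
N = v_mm / D_mm                           # working f-number, ~5.42
dz = 2.0 * lam_mm * N * N                 # depth-of-focus half-width in image space

def object_distance(v):
    """Invert the thin-lens equation: object conjugate of image distance v."""
    return 1.0 / (1.0 / f_mm - 1.0 / v)

near = object_distance(v_mm + dz)   # a farther image plane conjugates nearer
far  = object_distance(v_mm - dz)
dof_obj = far - near

print(f"object-space DOF ~ {dof_obj:.1f} mm")   # close to the ~9.5 mm quoted
```

The small image-space tolerance of a few tens of microns magnifies to roughly a centimeter in object space, which is still hopelessly small compared with the meter-scale volumes one would want for group capture.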

Figure 3 First-order simulation of iris acquisition at multiple depths. The letters A, B & C (A', B' & C') denote both sources (images) and positions. The 1st row depicts three sources at three depths from a 100 mm, f/5 lens. Point A (in focus) forms image A'. The 2nd row shows a 12 mm iris in the object space (left) and the |3D PSF|^2 of source A (right). The |3D PSF|^2 for objects B and C are very similar to that of A, since their relative separation is small compared to their distances from the lens. The image plane, at the geometric focus, samples the 3D PSF at A', B' & C' for points A, B & C respectively. Positions B' & C', which lie outside the DOF region in the image space, are 123.2 and 177.3 microns from A' respectively. The corresponding incoherent 2D PSFs are shown in the 3rd row. The iris images in the 3rd row were obtained by convolving the incoherent 2D PSFs with the de-magnified iris image in the 2nd row.
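The blur-by-convolution step can be imitated with a cruder geometric-optics stand-in for the diffraction PSFs used in the figure: a uniform "pillbox" disk whose image-plane diameter is $D\,\delta z / v$. Everything below is my sketch; the pixel pitch and the sinusoidal test pattern are assumptions, not values from the post.

```python
import numpy as np

# Geometric-optics sketch of defocus blur: convolve with a unit-sum disk of
# diameter D * dz / v in the image plane. The lens and the C' defocus value
# (177.3 um) are from the post; the 2 um pixel pitch is an assumption.

D_mm, v_mm = 20.0, 108.33   # aperture diameter; image distance (focus at 1300 mm)
pix_mm = 0.002              # assumed pixel pitch

def pillbox(diam_mm):
    """Unit-sum disk kernel of the given diameter (mm) on the pixel grid."""
    r_pix = max(diam_mm / (2 * pix_mm), 0.5)
    n = int(np.ceil(r_pix)) * 2 + 1
    y, x = np.mgrid[:n, :n] - n // 2
    k = (x * x + y * y <= r_pix * r_pix).astype(float)
    return k / k.sum()

def blur(img, dz_mm):
    """Circular (FFT) convolution of img with the defocus disk for dz_mm."""
    k = pillbox(D_mm * dz_mm / v_mm)
    kp = np.zeros_like(img)
    n = k.shape[0]
    kp[:n, :n] = k
    kp = np.roll(kp, (-(n // 2), -(n // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kp)))

# Sinusoidal test pattern (10 cycles across the frame), in the spirit of the
# sinusoidal cut-outs of Figure 1.
img = np.tile(0.5 + 0.5 * np.sin(2 * np.pi * 10 * np.arange(256) / 256), (256, 1))

contrast = lambda a: (a.max() - a.min()) / (a.max() + a.min())
sharp = blur(img, 0.0)      # in focus: single-pixel kernel, pattern unchanged
soft  = blur(img, 0.1773)   # C' defocus: modulation contrast drops sharply
print(contrast(sharp), contrast(soft))
```

The modulation contrast of the pattern collapses at the C' defocus, mirroring the severe blur of the outer irises in the 3rd row of Figure 3.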

In conventional imaging systems, the DOF is increased by making the system aperture smaller. However, stopping down the aperture is not a good solution for iris recognition, as the increase in DOF is accompanied by a loss of optical resolution and a loss of light. As shown in Figure 4, decreasing the size of the aperture elongates the PSF along the optical axis, which yields a larger DOF in the object space; however, the PSF also widens in the transverse direction, reducing the optical spatial resolution. The relation between DOF and lateral optical resolution ($r_o$) is as follows:

$r_o = \frac{DOF_{Diffraction}}{4\, F/\#}$

Since the diffraction-limited DOF scales as $(F/\#)^2$ while the collected light scales as $1/(F/\#)^2$, the above relation also implies that an n-fold increase in DOF results in exactly an n-fold loss of light [7]. The loss of light in turn reduces the system SNR.
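A quick numeric check of these scaling laws (my sketch, using an assumed 550 nm wavelength and the classic diffraction-limited scalings $DOF \sim 4\lambda (F/\#)^2$, $r_o \sim \lambda F/\#$, light $\sim 1/(F/\#)^2$):

```python
# Scaling laws behind the DOF / resolution / light tradeoff. An n-fold DOF
# gain needs a sqrt(n)-fold F/# increase, costing sqrt(n)-fold resolution
# and exactly n-fold light.

lam = 550e-6   # mm, assumed wavelength

def scalings(fnum):
    dof   = 4.0 * lam * fnum ** 2   # total diffraction depth of focus
    r_o   = lam * fnum              # lateral diffraction spot scale
    light = 1.0 / fnum ** 2         # relative collected irradiance
    return dof, r_o, light

dof5,  r5,  l5  = scalings(5.0)    # the F/5 imager of Figure 4, 1st row
dof10, r10, l10 = scalings(10.0)   # halved aperture: F/10, 2nd row

print(f"F/5 -> F/10: DOF x{dof10/dof5:.0f}, spot x{r10/r5:.0f}, "
      f"light x{l10/l5:.2f}")      # DOF x4, spot x2, light x0.25
```

Doubling the F-number quadruples the DOF but doubles the spot size and cuts the light to a quarter, exactly the behavior illustrated in Figure 4.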

Figure 4 Effect of aperture size on DOF and lateral resolution. The 1st row shows, for an aberration-free lens with a diameter of 20 mm (F/# = 5): the DOF (left); the 0.8, 0.2, 0.05, and 0.01 iso-surfaces of the three-dimensional normalized intensity distribution in the focal region (middle); and the 2D intensity PSF (right), which is the cross-section in the focal plane ($\delta z = 0$). The corresponding distributions for a lens with a diameter of 10 mm are shown in the 2nd row. Halving the aperture diameter (doubling the F-number, F/#) increases the DOF fourfold, as indicated by the red triangle markers on the horizontal axis; however, the diffraction spot size also doubles, halving the spatial resolution.

There are various techniques for increasing the capture volume of iris acquisition systems, although none is perfect yet. Examples include the use of multiple cameras (with both time and spatial multiplexing), large telescopic lenses, pan-tilt-zoom (PTZ) camera systems, and computational imaging techniques such as wavefront coding. Figure 5 is a schematic showing some state-of-the-art iris acquisition systems and their capture volumes.

Figure 5 A visual representation of the capture volumes of a few iris recognition systems. The projection of the capture volumes onto the floor provides an estimate of each capture volume's length and standoff distance. The capture volumes of systems that use PTZ and multi-camera arrangements are shown in shades of green.
[An orthographic camera viewpoint has been used in the rendition in order to avoid distortions due to perspective foreshortening. The human model in the figure is licensed under a Creative Commons Attribution 3.0 United States License and is Copyright © 2003-2013 Andrew Kator & Jennifer Legaz.]

(The figures in this post were generated using Matplotlib (a Python plotting library), Mayavi (a Python 3D plotting library), and Blender.)

## Links to posts in this series

## References

1. J. W. Goodman, Introduction to Fourier Optics (Roberts & Co., 2005).
2. Z. Zalevsky, "Extended depth of focus imaging: a review," SPIE Rev. 1, 018001 (2010).
3. M. Born and E. Wolf, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light (Cambridge Univ. Press, 2010).
4. H. Gross, Handbook of Optical Systems: Aberration Theory and Correction of Optical Systems (Wiley-VCH, 2007).
5. N. D. Kalka, J. Zuo, N. A. Schmid, and B. Cukic, "Image quality assessment for iris biometric," in Defense and Security Symposium (2006), paper 62020D.
6. N. Sazonova, S. Schuckers, P. Johnson, P. Lopez-Meyer, E. Sazonov, and L. Hornak, "Impact of out-of-focus blur on iris recognition," in SPIE Defense, Security, and Sensing (2011), paper 80291S.
7. W. T. Welford, "Use of annular apertures to increase focal depth," J. Opt. Soc. Am. 50, 749–752 (1960).