OE Reports 177 - September 1998

Light Constructions

Three-dimensional Joint-Transform Correlator demonstrated

by Sunny Bains

A scientist in Israel has extended the optical correlator from operation in two dimensions to three. The technique involves fusing images of the object from many different points of view, and allows objects to be identified and located in 3D space. This removes some of the ambiguity inherent in the current generation of 2D optical correlators. In particular, the device should not be confused by similar-shaped, differently-sized objects that appear to be the same size purely because they are different distances away.

An optical correlator is a device that determines whether two images are similar or identical by comparing their Fourier transforms, which can be generated optically by passing a coherently illuminated (laser) image through a lens. Though these devices are proving useful in their two-dimensional form, the fact that they are scale- and orientation-dependent has made them less than ideal for some applications. Though the new system is still orientation-dependent, scale becomes considerably less of a problem in the 3D case.
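The comparison a correlator performs can be sketched digitally using the Fourier cross-correlation theorem, with numpy's FFT standing in for the lens. This is only an illustrative 2D sketch, not the system described in this article; the images and shift values are invented for the example:

```python
import numpy as np

def correlate_2d(scene, reference):
    """Cross-correlate two equal-sized images via their Fourier
    transforms -- the operation a lens-based correlator performs."""
    S = np.fft.fft2(scene)
    R = np.fft.fft2(reference)
    # Multiplying one spectrum by the conjugate of the other and
    # inverting gives the (circular) cross-correlation.
    return np.fft.ifft2(S * np.conj(R)).real

# A small bright square, and a scene containing the same square shifted.
reference = np.zeros((64, 64))
reference[10:14, 10:14] = 1.0
scene = np.roll(np.roll(reference, 20, axis=0), 5, axis=1)

corr = correlate_2d(scene, reference)
peak = np.unravel_index(np.argmax(corr), corr.shape)
print(peak)  # the correlation peak sits at the relative shift: (20, 5)
```

The location of the peak gives the position of the reference object within the scene; its height measures how well the two match.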


Figure 1. Schematic for the proposed hybrid (electronic/optical) 3D correlator, with results from the existing digital-only proof-of-principle system.

The new correlator, developed at Ben-Gurion University of the Negev, Beer-Sheva, works as follows.[1] A charge-coupled device (CCD) camera (or a series of similar cameras) is first used to take pictures of both the scene to be examined and the reference object. If there are several cameras, each is aimed at the same point in the 3D scene from a different direction. With one camera, a series of images is taken sequentially, with the position and point of view changing in between. In either case, the resulting images are Fourier-transformed, at least in principle, using a lens. (In fact, in the experiments described below,[2] the Fourier transforms were performed digitally because no spatial light modulator was available to impart the images onto light.)

The next stage involves taking the 2D transforms and getting a 3D transform out of them. One way to think of this process is to imagine the most obvious way to get a 3D Fourier transform. This would involve first taking "slice" images through the depth of the scene in question and performing a 2D FT on each of those. After that, a one-dimensional FT would have to be carried out on the set of corresponding pixels from each image. Here you would take, for example, the top-right pixel from each depth slice, line them all up in a row, and perform the FT. Then you would proceed to the next pixel, and so on. When every "depth line" has been transformed in this way, the 3D transform is finished. The last step is necessary because the 2D slices by themselves contain no depth information (no information corresponding to that direction).
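The slice-based construction above can be checked numerically: because the Fourier transform is separable along its axes, 2D transforms of the slices followed by a 1D transform along each depth line reproduce a full 3D transform. A minimal numpy sketch (an illustration of the separability argument, not of Rosen's projection-based system):

```python
import numpy as np

# A stack of "depth slices": axis 0 is depth, axes 1-2 are the images.
rng = np.random.default_rng(0)
volume = rng.standard_normal((8, 16, 16))

# Step 1: 2D Fourier transform of every depth slice.
slices_ft = np.fft.fft2(volume, axes=(1, 2))

# Step 2: 1D transform along each "depth line" of corresponding pixels.
full_3d_ft = np.fft.fft(slices_ft, axis=0)

# The two-step result matches a direct 3D transform, because the
# Fourier transform is separable along its axes.
assert np.allclose(full_3d_ft, np.fft.fftn(volume))
```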

In the system demonstrated by Rosen, however, the situation is very different. Obviously the cameras cannot take images through the scene unless it is transparent. Instead the images captured are projections from the scene, projections that intersect. Between them, these projections contain information about all three dimensions: they simply have to be placed into the correct context in 3D space. This is done using a coordinate transformation, performed electronically.

Performing the FT is just the first half of correlating a reference image with a target scene. After the two have been jointly transformed -- hence the name "Joint Transform Correlator" -- they have to be transformed back in order to produce a correlation peak. Though Rosen has successfully demonstrated this technique using an entirely digital correlator, he believes that the system can be improved by using optics (see figure). In his new scheme, 2D FTs would be formed optically at both ends of the system, with the coordinate transformation and a 1D FT (to make 3D on the output end) performed electronically. He hopes to build such a hybrid demonstrator soon.
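The joint-transform idea itself can be sketched in a few lines: the reference and the target share one input plane, only the intensity of their joint transform is recorded (which is all a square-law detector such as a CCD sees), and a second transform of that intensity produces cross-correlation peaks at the lag separating the two inputs. This is a minimal 2D digital sketch with an invented test pattern, not Rosen's 3D system:

```python
import numpy as np

# Reference and an identical object in the "scene", side by side in
# one input plane, as in a joint-transform correlator.
plane = np.zeros((64, 128))
plane[30:34, 20:24] = 1.0   # reference: a 4x4 bright square
plane[30:34, 80:84] = 1.0   # same square, 60 pixels to the right

# First transform: record only the intensity, the joint power spectrum.
joint_power_spectrum = np.abs(np.fft.fft2(plane)) ** 2

# Second transform: inverting the intensity yields the correlation
# plane, with cross-correlation peaks at the lag that separates the
# two inputs (here, 60 pixels horizontally).
corr_plane = np.fft.ifft2(joint_power_spectrum).real

print(corr_plane[0, 0])   # central (autocorrelation) term: 32.0
print(corr_plane[0, 60])  # cross-correlation peak: 16.0
```

A strong off-center peak appears only when the two inputs match, which is what makes the joint transform usable as a correlator.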



Sunny Bains is a scientist and writer based in the San Francisco Bay area.

References

  1. Joseph Rosen, "Three-dimensional joint transform correlator," Applied Optics, to appear November 1998.
  2. Joseph Rosen, "Three-dimensional electro-optical correlation," JOSA A 15 (2), February 1998.



© 1998 SPIE - The International Society for Optical Engineering