
13.3.6 Correlation and Feature Detection

The correlation operation is defined mathematically as:

\begin{displaymath}
h(x) = f(x) \circ g(x) = \int^{+\infty}_{-\infty} f^{\ast}(\tau)\,g(x + \tau)\,d\tau
\end{displaymath} (16)

Here $f^{\ast}(\tau)$ is the complex conjugate of $f(\tau)$; since this section discusses correlation only for signals containing real values, $f^{\ast}(\tau)$ can simply be replaced by $f(\tau)$.

Correlation is useful for feature detection: correlating an image that may contain a target feature with an image of that feature produces local maxima, or pixel-value ``spikes'', at candidate positions. This is useful for detecting letters on a page or the positions of armaments on a battlefield. Correlation can also be used to detect motion, such as the velocity of hurricanes in a sequence of satellite images or the jittering of an unsteady camera.

For two-dimensional discrete images, you may use Equation 15 to evaluate correlation.
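For reference, the discrete two-dimensional form can be sketched as follows for a $W \times H$ feature image $g$ correlated against a candidate image $f$ (the indexing convention here is an assumption; see Equation 15 for the exact form):

\begin{displaymath}
h(x, y) = \sum_{j=0}^{H-1}\sum_{i=0}^{W-1} f(x + i, y + j)\,g(i, j)
\end{displaymath}

Each output value $h(x, y)$ is the sum of products of the feature image with the window of the candidate image anchored at $(x, y)$, so $h$ peaks where that window resembles the feature.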

The convolution extension (EXT_convolution) in OpenGL may be used to apply correlation to an image, but only for features no larger than the maximum convolution kernel size. For larger features, or on platforms which do not supply the convolution extension, use the accumulation buffer technique for convolution. (If your feature and candidate images are very large, it is worth the effort to consider an alternative method, such as multiplication in the frequency domain [35].)
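As a concrete illustration of the extension path, the following is a minimal sketch that loads a feature image as the filter and runs the candidate image through the pixel transfer pipeline. It assumes an implementation exporting EXT_convolution; featureImage, featureWidth, featureHeight, and the window dimensions are hypothetical application variables.

#include <GL/gl.h>
#include <GL/glext.h>

/* Hypothetical application data: a luminance feature image. */
extern const GLfloat featureImage[];
extern GLsizei featureWidth, featureHeight;

void correlateColorBuffer(GLsizei winWidth, GLsizei winHeight)
{
    /* Load the feature image as the convolution filter.  OpenGL's
       convolution is specified without mirroring the filter, so
       loading the feature image directly computes a correlation; a
       true (mirrored) convolution would require rotating the feature
       image 180 degrees first. */
    glConvolutionFilter2DEXT(GL_CONVOLUTION_2D_EXT, GL_LUMINANCE,
                             featureWidth, featureHeight,
                             GL_LUMINANCE, GL_FLOAT, featureImage);
    glEnable(GL_CONVOLUTION_2D_EXT);

    /* Copy the candidate image over itself; the enabled convolution
       stage filters the pixels in transit.  glDrawPixels() applies
       the same pipeline to an image supplied from host memory. */
    glRasterPos2i(0, 0);
    glCopyPixels(0, 0, winWidth, winHeight, GL_COLOR);

    glDisable(GL_CONVOLUTION_2D_EXT);
}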

Once you have applied the convolution, your application will need to find the ``spikes'' to determine where features have been detected. To aid this process, it may be useful to apply thresholding with a color table (SGI_color_table) to convert candidate pixels to one value and non-candidate pixels to another.
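A minimal sketch of such a threshold, using SGI_color_table in the post-convolution position (the 256-entry table size and the threshold parameter are arbitrary assumptions):

#include <GL/gl.h>
#include <GL/glext.h>

/* Build a luminance lookup table mapping values at or above the
   threshold to 1.0 and everything below it to 0.0, then install it
   so it is applied to pixels after the convolution stage. */
void installThreshold(GLfloat threshold)
{
    GLfloat table[256];
    int i;

    for (i = 0; i < 256; i++)
        table[i] = ((GLfloat)i / 255.0f >= threshold) ? 1.0f : 0.0f;

    glColorTableSGI(GL_POST_CONVOLUTION_COLOR_TABLE_SGI, GL_LUMINANCE,
                    256, GL_LUMINANCE, GL_FLOAT, table);
    glEnable(GL_POST_CONVOLUTION_COLOR_TABLE_SGI);
}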

One method used for finding features combines these operations into the following steps:

1. Transfer the candidate image through the pixel transfer pipeline into the color buffer.
2. Correlate it with the feature image, using the convolution extension or the accumulation buffer technique.
3. Threshold the result with a color table so that candidate pixels take one value and non-candidate pixels another.
4. Read back the thresholded image and search it for spikes (a sketch of this step follows the notes below).

If your candidate image comes from a source other than the OpenGL color buffer, use glDrawPixels() to apply the pixel transfer pipeline to your image.

If features in the candidate image are not pixel-exact, for example if they are rotated slightly or blurred, it may be necessary to create the feature image using jittering and blending, and then lower the acceptance threshold in the color table.
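To carry out the final search, the thresholded image can be read back and scanned on the host. This sketch assumes the threshold table above, so that surviving spike pixels read back as full-intensity luminance; the function name and the reporting are illustrative only:

#include <GL/gl.h>
#include <stdio.h>
#include <stdlib.h>

/* Read back the thresholded correlation result and report the
   position of each surviving spike pixel. */
void findSpikes(GLsizei width, GLsizei height)
{
    GLubyte *pixels = (GLubyte *) malloc((size_t)width * (size_t)height);
    GLsizei x, y;

    /* Pack rows tightly so the scan below can index row-by-row. */
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE,
                 pixels);

    for (y = 0; y < height; y++)
        for (x = 0; x < width; x++)
            if (pixels[y * width + x] == 255)  /* passed the threshold */
                printf("feature candidate at (%d, %d)\n", (int)x, (int)y);

    free(pixels);
}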

