Image fusion


With the development of new imaging sensors arises the need for a meaningful combination of all employed imaging sources. The actual fusion process can take place at different levels of information representation; a common categorization distinguishes, in ascending order of abstraction, the signal, pixel, feature and symbolic levels. This site focuses on so-called pixel-level fusion, where a composite image is built from several input images.
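As a minimal sketch of what "pixel level" means here, the composite can be formed by a per-pixel combination rule applied to two registered grayscale input images. The function name and the averaging/maximum rules below are illustrative assumptions, not a method prescribed by this site:

```python
import numpy as np

def fuse_pixel_level(img_a, img_b, rule="average"):
    """Combine two registered grayscale images into one composite
    by operating directly on corresponding pixel values."""
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    if rule == "average":
        return (a + b) / 2.0      # mean of the two inputs
    elif rule == "max":
        return np.maximum(a, b)   # keep the brighter pixel
    raise ValueError(f"unknown rule: {rule}")

# two toy 2x2 "images"
a = np.array([[0, 100], [200, 50]])
b = np.array([[100, 100], [0, 250]])
print(fuse_pixel_level(a, b, "average"))  # [[ 50. 100.] [100. 150.]]
print(fuse_pixel_level(a, b, "max"))      # [[100. 100.] [200. 250.]]
```

Real fusion schemes (e.g. multiresolution methods) use far more elaborate combination rules, but they share this structure: input pixels in, composite pixels out.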

To date, the result of pixel-level image fusion is intended primarily for presentation to a human observer, especially in image sequence fusion (where the input data consists of image sequences). A typical application is the fusion of forward-looking infrared (FLIR) and low-light visible (LLTV) images obtained from an airborne sensor platform, to aid a pilot navigating in poor weather conditions or darkness.

In pixel-level image fusion, some generic requirements can be imposed on the fusion result:

  • The fusion process should preserve all relevant information of the input imagery in the composite image (pattern conservation)
  • The fusion scheme should not introduce any artifacts or inconsistencies which would distract the human observer or subsequent processing stages
  • The fusion process should be shift and rotational invariant, i.e. the fusion result should not depend on the location or orientation of an object in the input imagery
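The shift-invariance requirement can be checked empirically: fusing shifted copies of the inputs should give the same result as shifting the fused image. The sketch below assumes a pointwise maximum-selection rule, for which the property holds exactly; the test setup (random toy images, `numpy.roll` for a circular shift) is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(8, 8))
b = rng.integers(0, 256, size=(8, 8))

def fuse(x, y):
    # pointwise maximum-selection rule; any per-pixel rule commutes
    # with a shift of the input images in the same way
    return np.maximum(x, y)

shift = (2, 3)  # shift both inputs by the same offset
fused_then_shifted = np.roll(fuse(a, b), shift, axis=(0, 1))
shifted_then_fused = fuse(np.roll(a, shift, axis=(0, 1)),
                          np.roll(b, shift, axis=(0, 1)))
print(np.array_equal(fused_then_shifted, shifted_then_fused))  # True
```

Fusion schemes with spatial support (e.g. decimated wavelet transforms) can violate this property, which is one motivation for shift-invariant multiresolution schemes.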

In the case of image sequence fusion, the additional problem of temporal stability and consistency of the fused image sequence arises. The human visual system is primarily sensitive to moving light stimuli, so moving artifacts or time-dependent contrast changes introduced by the fusion process are highly distracting to the human observer. Therefore, in the case of image sequence fusion, two additional requirements apply:

  • Temporal stability: The fused image sequence should be temporally stable, i.e. graylevel changes in the fused sequence must only be caused by graylevel changes in the input sequences; they must not be introduced by the fusion scheme itself;
  • Temporal consistency: Graylevel changes occurring in the input sequences must appear in the fused sequence without any delay or contrast change.
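The stability requirement can likewise be made concrete for a pointwise fusion rule: between consecutive frames, every pixel that changes in the fused sequence must correspond to a pixel that changed in at least one input sequence. The toy two-frame sequences below are an assumed setup for illustration:

```python
import numpy as np

# toy sequences of two frames; the inputs change only at pixel (0, 0)
seq_a = [np.zeros((4, 4)), np.zeros((4, 4))]
seq_b = [np.full((4, 4), 10.0), np.full((4, 4), 10.0)]
seq_a[1] = seq_a[1].copy()
seq_a[1][0, 0] = 50.0   # the only graylevel change in the inputs

# frame-by-frame pointwise maximum fusion
fused = [np.maximum(fa, fb) for fa, fb in zip(seq_a, seq_b)]

change_fused  = fused[1] != fused[0]
change_inputs = (seq_a[1] != seq_a[0]) | (seq_b[1] != seq_b[0])

# stability: a fused-pixel change implies an input-pixel change
print(bool(np.all(change_inputs | ~change_fused)))  # True
```

A fusion scheme with temporal support (e.g. one that filters across frames) would need the same check, and could also violate consistency by delaying or attenuating input changes.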