Image Fusion
Image fusion is the process of combining two or more input images into a single output image. The main goal is to produce an output image that is more informative than any of the inputs. In mobile dual-camera systems, image fusion comes into play in several ways. The first is a dual camera pairing one color sensor with one monochromatic sensor (with the Bayer filter removed). The monochromatic sensor captures about 2.5 times more light, yielding better resolution and SNR. By fusing the images from both cameras, the output image achieves better SNR and resolution, especially in low-light conditions. The second is a zoom dual camera: a wide field-of-view camera coupled with a telephoto narrow field-of-view camera. In this case, image fusion also improves SNR and resolution, from no zoom up to the point where the telephoto camera's field of view becomes the dominant one. In the following example images, the resolution improvement of the fused image over standard digital zoom is easy to see (the images were taken with a 3x optical zoom camera).
Image fusion methods can be broadly classified into two groups – spatial domain fusion and transform domain fusion.
Fusion methods such as averaging, the Brovey method, principal component analysis (PCA), and IHS-based methods fall under the spatial-domain approaches. Another important spatial-domain method is the high-pass filtering technique, in which the high-frequency details of a high-resolution image are injected into an upsampled version of the multispectral (MS) images. The disadvantage of spatial-domain approaches is that they can introduce spectral distortion in the fused image, which becomes a problem in further processing such as classification. Such distortion is handled much better by transform-domain (frequency-domain) approaches to image fusion. Multiresolution analysis has become a very useful tool for analyzing remote sensing images, and the discrete wavelet transform in particular is widely used for fusion. Other transform-domain methods include Laplacian pyramid-based and curvelet transform-based fusion. These methods show better spatial and spectral quality in the fused image than the spatial-domain methods.
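The averaging and high-pass injection techniques above can be sketched in a few lines of NumPy. This is a minimal illustration, not a reference implementation: the function names are invented for this example, and the box filter used as the low-pass smoother is one assumed choice among many.

```python
import numpy as np

def average_fusion(a, b):
    """Spatial-domain fusion by pixel-wise averaging of two
    registered images of the same shape."""
    return (a.astype(np.float64) + b.astype(np.float64)) / 2.0

def highpass_injection(ms_up, pan, kernel_size=3):
    """High-pass filtering technique: extract high-frequency detail
    from a high-resolution (panchromatic) image and inject it into an
    upsampled multispectral band of the same shape.

    The low-pass step here is a simple box filter (an illustrative
    choice); real systems typically use a better smoother.
    """
    pad = kernel_size // 2
    padded = np.pad(pan.astype(np.float64), pad, mode="edge")
    h, w = pan.shape
    low = np.zeros((h, w), dtype=np.float64)
    # Accumulate the box-filter neighborhood sums, then normalize.
    for dy in range(kernel_size):
        for dx in range(kernel_size):
            low += padded[dy:dy + h, dx:dx + w]
    low /= kernel_size ** 2
    detail = pan.astype(np.float64) - low  # high-frequency residue
    return ms_up.astype(np.float64) + detail
```

Averaging improves SNR but tends to blur detail, which is exactly the spectral/spatial trade-off the transform-domain methods aim to avoid.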
The images used in image fusion should already be registered. Misregistration is a major source of error in image fusion. Some well-known image fusion methods are: