Abstract: In remote sensing, image fusion is the process of combining two images so that the fused product retains finer detail from both. In this paper, an enhanced dictionary-based sparse representation (EDSR) is proposed for multitemporal image fusion. Multitemporal satellite images acquired over the same geographical area at different acquisition dates are merged into a single fused image for further analysis. Sparse representation is employed to approximate and represent the source images. To improve the performance of the fusion process, a locally adaptive dictionary is created whose atoms are patches extracted from both source images. The fused image is then reconstructed from the learned dictionary using the maximum-absolute-coefficient rule. The proposed EDSR technique is compared quantitatively and qualitatively with existing techniques, namely PCA, DWT, SWT, Ehlers, and sparse representation (SR). For dataset 1, EDSR achieves a mutual information (MI) of 3.4742 and a feature mutual information (FMI) of 0.4790, and it outperforms the existing fusion methods in terms of degree of distortion, UIQI, and ERGAS. Experimental results on LANDSAT images reveal that the proposed technique better preserves spectral information and color, reduces errors, and improves the visual quality of the fused product.
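The fusion pipeline summarized above can be illustrated with a minimal sketch. The patch size, stride, sparsity level, and the use of orthogonal matching pursuit (OMP) for sparse coding are illustrative assumptions rather than the paper's exact settings; the sketch only conveys the general idea of building a joint dictionary from patches of both source images and fusing their sparse codes with a maximum-absolute-coefficient rule.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: k-sparse code of y over dictionary D (unit-norm columns)."""
    residual = y.astype(float).copy()
    idx = []
    coef_sel = np.zeros(0)
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in idx:
            idx.append(j)
        # least-squares fit of y on the selected atoms, then update the residual
        coef_sel, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef_sel
    x = np.zeros(D.shape[1])
    x[idx] = coef_sel
    return x

def extract_patches(img, p=8, stride=4):
    """Sliding patches (flattened) and their top-left coordinates."""
    H, W = img.shape
    patches, coords = [], []
    for i in range(0, H - p + 1, stride):
        for j in range(0, W - p + 1, stride):
            patches.append(img[i:i + p, j:j + p].ravel().astype(float))
            coords.append((i, j))
    return np.array(patches), coords

def fuse(img1, img2, p=8, stride=4, k=4):
    """Sketch of patch-wise sparse-representation fusion with a max-absolute-coefficient rule."""
    P1, coords = extract_patches(img1, p, stride)
    P2, _ = extract_patches(img2, p, stride)
    # Joint dictionary: atoms are mean-removed, normalised patches from BOTH source images.
    atoms = np.concatenate([P1, P2], axis=0).T
    atoms = atoms - atoms.mean(axis=0, keepdims=True)
    norms = np.linalg.norm(atoms, axis=0)
    norms[norms == 0] = 1.0
    D = atoms / norms
    fused = np.zeros_like(img1, dtype=float)
    weight = np.zeros_like(img1, dtype=float)
    for (i, j), y1, y2 in zip(coords, P1, P2):
        m1, m2 = y1.mean(), y2.mean()
        a1 = omp(D, y1 - m1, k)           # sparse code of the patch from image 1
        a2 = omp(D, y2 - m2, k)           # sparse code of the patch from image 2
        # Max-absolute-coefficient fusion rule, applied coefficient-wise.
        af = np.where(np.abs(a1) >= np.abs(a2), a1, a2)
        mf = m1 if np.abs(a1).sum() >= np.abs(a2).sum() else m2
        patch = (D @ af + mf).reshape(p, p)
        fused[i:i + p, j:j + p] += patch   # accumulate overlapping reconstructions
        weight[i:i + p, j:j + p] += 1.0
    weight[weight == 0] = 1.0
    return fused / weight                  # average the overlaps
```

In this sketch the dictionary is simply the pool of raw patches from both sources; in a full implementation the "locally adaptive" aspect would presumably restrict each patch to a dictionary drawn from its local neighbourhood, and a dictionary-learning step could replace the raw patch atoms. These choices are assumptions for illustration, not the paper's specification.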