Abstract: To take full advantage of the global features of source images, we propose an image fusion method based on adaptive unit-linking pulse coupled neural networks (ULPCNNs) in the contourlet domain. Because each high-frequency subband produced by the contourlet decomposition contains rich directional information, we use the directional contrast of each coefficient as the external stimulus that excites the corresponding neuron. The linking range is also tied to this contrast so that the global coupling characteristics of the ULPCNN are improved adaptively. In this way, the output pulses of the ULPCNN simulate the response of the human visual system to the detailed information in the images. The first firing time of each neuron then determines the fusion rule for the corresponding detail coefficients. Experimental results on multifocus images, remote sensing images, and infrared and visible images demonstrate the superiority of the proposed algorithm in terms of both visual quality and objective evaluation.
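To make the firing-time-based fusion rule concrete, the following is a minimal Python sketch of how a simplified unit-linking PCNN could record each neuron's first firing time and select the detail coefficient whose neuron fires earlier. The function names (`first_firing_times`, `fuse_highpass`), the parameter values, and the simplified threshold dynamics are illustrative assumptions, not the paper's exact formulation; the contourlet decomposition and the directional contrast computation are assumed to be provided elsewhere.

```python
import numpy as np
from scipy.ndimage import maximum_filter


def first_firing_times(stimulus, beta=0.2, decay=0.7, v_theta=20.0, n_iter=200):
    """Run a simplified unit-linking PCNN and record each neuron's first firing time.

    `stimulus` plays the role of the directional contrast described in the
    abstract; beta, decay, and v_theta are illustrative constants, not the
    paper's values.
    """
    F = stimulus.astype(np.float64)       # feeding input (external stimulus)
    Y = np.zeros_like(F)                  # output pulses
    theta = np.ones_like(F)               # dynamic threshold
    t_fire = np.full(F.shape, np.inf)     # first firing time of each neuron

    for t in range(1, n_iter + 1):
        # unit-linking: L = 1 if any neuron in the 3x3 neighbourhood fired last step
        L = (maximum_filter(Y, size=3) > 0).astype(np.float64)
        U = F * (1.0 + beta * L)          # internal activity
        Y = (U > theta).astype(np.float64)
        newly_fired = (Y > 0) & np.isinf(t_fire)
        t_fire[newly_fired] = t
        theta = decay * theta + v_theta * Y   # threshold decays, then jumps after a pulse
        if not np.isinf(t_fire).any():
            break
    return t_fire


def fuse_highpass(coef_a, coef_b, contrast_a, contrast_b):
    """Per location, keep the detail coefficient whose neuron fires earlier."""
    ta = first_firing_times(contrast_a)
    tb = first_firing_times(contrast_b)
    return np.where(ta <= tb, coef_a, coef_b)
```

In this sketch, neurons stimulated by larger directional contrast reach the decaying threshold sooner and therefore fire earlier, so `fuse_highpass` tends to keep the coefficients carrying the more salient detail.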