Abstract: Many image saliency detection methods have been proposed to handle challenging conditions such as low illumination, cluttered backgrounds, and low contrast. Although these algorithms achieve good performance, detection results based on the RGB modality alone remain poor. Inspired by recent progress in multi-modality fusion, we propose a novel RGB-thermal saliency detection algorithm that learns static-adaptive graphs. Specifically, we first extract superpixels from the two modalities and compute their affinity matrix. We then learn the affinity matrix dynamically to construct a static-adaptive graph. Finally, saliency maps are obtained by a two-stage ranking algorithm. Our method is evaluated on the RGBT-Saliency Dataset, which contains eleven challenging subsets. Experimental results show that the proposed method has better generalization performance. The complementary benefits of RGB and thermal images, together with the more robust feature representation of the learned static-adaptive graphs, offer an effective way to improve saliency detection in complex scenes.
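The pipeline summarized above (superpixel affinity graph, then two-stage graph ranking) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy features, the Gaussian affinity standing in for the learned static-adaptive graph, and the boundary-seed heuristic are all assumptions for the sketch.

```python
# Minimal sketch of graph-based two-stage ranking for RGB-T saliency.
# Superpixel extraction (e.g. SLIC) and the learned static-adaptive
# affinity from the paper are NOT reproduced; a fixed Gaussian affinity
# over toy features stands in for the learned graph.
import numpy as np

def ranking(W, y, alpha=0.99):
    """Manifold ranking: f = (I - alpha*S)^(-1) y with S = D^(-1/2) W D^(-1/2)."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    return np.linalg.solve(np.eye(len(y)) - alpha * S, y)

rng = np.random.default_rng(0)
n = 8                                    # toy number of superpixels
feats = rng.random((n, 6))               # RGB (3) + thermal (3) mean features
d2 = ((feats[:, None] - feats[None]) ** 2).sum(-1)
W = np.exp(-d2 / 0.1)                    # Gaussian affinity between superpixels
np.fill_diagonal(W, 0)

# Stage 1: rank against boundary (background) seeds; invert to get saliency.
bg_seed = np.zeros(n)
bg_seed[:3] = 1.0                        # pretend the first 3 superpixels touch the border
f_bg = ranking(W, bg_seed)
sal1 = 1.0 - f_bg / f_bg.max()

# Stage 2: re-rank using the thresholded stage-1 map as foreground seeds.
fg_seed = (sal1 > sal1.mean()).astype(float)
sal2 = ranking(W, fg_seed)
saliency = (sal2 - sal2.min()) / (sal2.max() - sal2.min())  # normalize to [0, 1]
print(saliency.round(3))
```

In a full system each entry of `saliency` would be painted back onto its superpixel's pixels to form the saliency map.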