Journal: International Journal of Innovative Research in Science, Engineering and Technology
Print ISSN: 2347-6710
Online ISSN: 2319-8753
Year: 2015
Volume: 4
Issue: 3
Pages: 1489
DOI: 10.15680/IJIRSET.2015.0403096
Publisher: S&S Publications
Abstract: Image re-ranking is an effective way to improve the results of web-based image search and has been adopted by current commercial search engines such as Bing and Google. Given a query keyword, a list of images is first retrieved based on textual information supplied by the user. By asking the user to select a query image from this pool, the remaining images are re-ranked based on their visual similarity to the query image. A major challenge is that visual similarities do not always reflect the semantic meanings that interpret the user's search intention. Recently, many researchers have proposed matching images in a semantic space that uses attributes or reference classes closely related to the semantic meanings of images as its basis. In this paper, we propose a novel image re-ranking framework that automatically learns, offline, different semantic spaces for different query keywords and displays image details in the form of augmented images. Images are projected into their related semantic spaces to obtain semantic signatures with the help of one-click feedback from the user. At the online stage, images are re-ranked by comparing their semantic signatures obtained from the semantic space specified by the query keyword given by the user. The proposed query-specific semantic signatures significantly improve both the accuracy and efficiency of image re-ranking. Experimental results show a 25-40 percent relative improvement in re-ranking precision compared with state-of-the-art methods.
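To make the offline/online split described in the abstract concrete, the following is a minimal sketch of query-specific semantic-signature re-ranking. It is not the paper's implementation: the reference-class models (per-class feature centroids), the softmax projection, and the cosine-similarity comparison are all illustrative assumptions standing in for whatever classifiers and distance measure the framework actually uses.

```python
# Sketch: offline learning of a query-specific semantic space and
# online re-ranking by semantic-signature similarity.
# Assumptions (not from the paper): reference classes are modelled by
# their mean visual-feature vectors, signatures are softmax scores,
# and candidates are compared to the query image by cosine similarity.

import numpy as np
from numpy.linalg import norm


def learn_semantic_space(reference_class_features):
    """Offline: build one simple model (centroid) per reference class
    associated with the query keyword.

    reference_class_features: list of (n_i, d) arrays, one per class.
    Returns a (k, d) matrix of class models.
    """
    return np.stack([feats.mean(axis=0) for feats in reference_class_features])


def semantic_signature(image_feature, class_models):
    """Project a visual feature vector into the query-specific semantic space."""
    scores = class_models @ image_feature        # one score per reference class
    exp = np.exp(scores - scores.max())          # softmax keeps signatures comparable
    return exp / exp.sum()


def rerank(query_signature, candidate_signatures):
    """Online: order candidate images by signature similarity to the query image."""
    def cos(a, b):
        return float(a @ b / (norm(a) * norm(b) + 1e-12))

    sims = [cos(query_signature, sig) for sig in candidate_signatures]
    return sorted(range(len(sims)), key=lambda i: -sims[i])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical data: 3 reference classes, 64-dimensional visual features.
    ref = [rng.normal(loc=c, size=(20, 64)) for c in range(3)]
    models = learn_semantic_space(ref)                       # offline stage
    query_sig = semantic_signature(rng.normal(size=64), models)
    cand_sigs = [semantic_signature(rng.normal(size=64), models) for _ in range(10)]
    print(rerank(query_sig, cand_sigs))                      # re-ranked candidate indices
```

Because signatures live in a low-dimensional space defined only by the query keyword's reference classes, the online comparison is cheap, which is where the efficiency gain claimed in the abstract would come from.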