Journal: International Journal of Computer Science and Information Technologies
E-ISSN: 0975-9646
Year: 2016
Volume: 7
Issue: 5
Pages: 2256-2260
Publisher: TechScience Publications
Abstract: In the modern era, general image collections cannot be handled efficiently on a single device because of large image sizes and the high computational cost and performance overhead of modern image-processing algorithms. Image processing therefore often requires distributed computing. In practice, however, distributed computing is a complicated subject that demands strong technical knowledge and often cannot be used by the researchers who develop image-processing algorithms. A framework is needed that lets researchers concentrate on image-processing tasks while hiding the complicated details of distributed computing, and that provides them with familiar image-processing tools. This paper describes the use of the MapReduce paradigm, which partitions images across a cluster for distributed processing. This foundation allows image processing to be improved while still using traditional methods. Hadoop was selected because the technology, with its timely delivery and cost-effectiveness, is now a key constituent of success in many government endeavors, businesses, and scientific and engineering disciplines. The Hadoop software stack used here consists of an extensible MapReduce execution engine, pluggable distributed storage engines, and a range of procedures for declarative interfaces, and is a popular choice in many industries. Tests have shown that this method is scalable and efficient in handling multiple large images, used mostly for remote-sensing applications, and that the difference between the single-PC runtime and the Hadoop runtime is clearly appreciable.
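To make the abstract's approach concrete, the following is a minimal sketch, not the paper's actual code, of how a traditional per-image operation can be wrapped in a Hadoop MapReduce job. It assumes the image collection has been packed into a SequenceFile of (Text fileName, BytesWritable imageBytes) pairs, a common workaround for HDFS's small-file overhead; the class name GrayscaleJob and the grayscale conversion are illustrative stand-ins for whatever image-processing algorithm the researcher supplies.

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.ImageIO;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

public class GrayscaleJob {

    public static class GrayscaleMapper
            extends Mapper<Text, BytesWritable, Text, BytesWritable> {

        @Override
        protected void map(Text fileName, BytesWritable imageBytes, Context context)
                throws IOException, InterruptedException {
            // Decode the raw bytes into an image; skip records that fail to decode.
            BufferedImage src = ImageIO.read(
                    new ByteArrayInputStream(imageBytes.copyBytes()));
            if (src == null) {
                return;
            }
            // Apply the single-machine "traditional" operation to this image
            // (grayscale conversion here; any per-image algorithm fits).
            BufferedImage gray = new BufferedImage(
                    src.getWidth(), src.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
            gray.getGraphics().drawImage(src, 0, 0, null);

            // Re-encode the result and emit it under the original file name.
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            ImageIO.write(gray, "png", out);
            context.write(fileName, new BytesWritable(out.toByteArray()));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "grayscale");
        job.setJarByClass(GrayscaleJob.class);
        job.setMapperClass(GrayscaleMapper.class);
        job.setNumReduceTasks(0); // map-only: each image is processed independently
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(BytesWritable.class);
        job.setInputFormatClass(SequenceFileInputFormat.class);
        job.setOutputFormatClass(SequenceFileOutputFormat.class);
        SequenceFileInputFormat.addInputPath(job, new Path(args[0]));
        SequenceFileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

In this pattern the researcher writes only the body of map(), while Hadoop handles splitting the input across the cluster, scheduling, and fault tolerance, which is exactly the separation of concerns the abstract argues for. A map-only job is used because each image is independent; a reduce phase would be added only if results had to be aggregated across images.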