Journal: International Journal of Computer Science & Information Technology (IJCSIT)
Print ISSN: 0975-4660
Online ISSN: 0975-3826
Year: 2016
Volume: 8
Issue: 6
Pages: 49
Publisher: Academy & Industry Research Collaboration Center (AIRCC)
Abstract: The increasing number of threads inside the cores of a multicore processor, together with competitive access to the shared cache memory, are the main reasons for a growing number of competitive cache misses and for performance decline. Inevitably, the development of modern processor architectures leads to an increased number of cache misses. In this paper, we attempt to implement a technique for decreasing the number of competitive cache misses in the first level of cache memory. This technique allows competitive access to the entire cache memory on a hit; on a miss, the incoming memory data is placed (by the replacement policy) into a virtual partition assigned to the requesting thread, so that competitive cache misses are avoided. Results obtained with a simulator tool show a decrease in the number of cache misses and a performance increase of up to 15%. The conclusion of this research is that cache misses remain a real challenge for future processor designers seeking to hide memory latency.
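To make the described mechanism concrete, the following is a minimal sketch (not the authors' simulator) of a set-associative L1 cache shared by several hardware threads: lookups search every way of a set, so the whole cache is competitively accessible on a hit, while on a miss the victim line is chosen only from the ways virtually assigned to the requesting thread, so one thread's misses cannot evict another thread's lines. The class name, cache geometry, and LRU policy are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Line:
    tag: Optional[int] = None
    lru: int = 0  # timestamp of last access, used for LRU replacement


class PartitionedL1:
    """Set-associative cache with per-thread virtual partitions for replacement."""

    def __init__(self, num_sets=64, ways=8, line_size=64, num_threads=2):
        self.num_sets, self.ways, self.line_size = num_sets, ways, line_size
        self.ways_per_thread = ways // num_threads  # size of each virtual partition
        self.sets = [[Line() for _ in range(ways)] for _ in range(num_sets)]
        self.clock = 0
        self.hits = self.misses = 0

    def _index_tag(self, addr):
        block = addr // self.line_size
        return block % self.num_sets, block // self.num_sets

    def access(self, thread_id, addr):
        self.clock += 1
        idx, tag = self._index_tag(addr)
        cache_set = self.sets[idx]

        # Hit path: every way of the set is searched, regardless of which
        # thread installed the line (competitive access to the entire cache).
        for line in cache_set:
            if line.tag == tag:
                line.lru = self.clock
                self.hits += 1
                return True

        # Miss path: the victim is chosen only within the requesting thread's
        # virtual partition, so this miss cannot evict another thread's lines.
        self.misses += 1
        lo = thread_id * self.ways_per_thread
        hi = lo + self.ways_per_thread
        victim = min(cache_set[lo:hi], key=lambda l: l.lru)  # LRU within partition
        victim.tag, victim.lru = tag, self.clock
        return False
```

A short usage example under the same assumptions: two threads install lines into their own partitions on misses, yet a later hit succeeds no matter which partition holds the line.

```python
cache = PartitionedL1(num_threads=2)
cache.access(0, 0x1000)   # miss, installed in thread 0's partition
cache.access(1, 0x2000)   # miss, installed in thread 1's partition
cache.access(0, 0x1000)   # hit: lookup searches all ways of the set
print(cache.hits, cache.misses)  # 1 2
```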