
A Case for Fine-Grain Adaptive Cache Coherence

dc.date.accessioned: 2012-05-22T20:15:03Z
dc.date.accessioned: 2018-11-26T22:26:49Z
dc.date.available: 2012-05-22T20:15:03Z
dc.date.available: 2018-11-26T22:26:49Z
dc.date.issued: 2012-05-22
dc.identifier.uri: http://hdl.handle.net/1721.1/70909
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/1721.1/70909
dc.description.abstract (en_US): As transistor density continues to grow geometrically, processor manufacturers can already place a hundred cores on a chip (e.g., the Tilera TILE-Gx100), with massive multicore chips on the horizon. Programmers now need to invest more effort in designing software capable of exploiting multicore parallelism. The shared memory paradigm provides a convenient layer of abstraction to the programmer, but will current memory architectures scale to hundreds of cores? This paper directly addresses the question of how to enable scalable memory systems for future multicores. We develop a scalable, efficient shared memory architecture that enables seamless adaptation between private and logically shared caching at the fine granularity of cache lines. Our data-centric approach relies on in-hardware runtime profiling of the locality of each cache line and only allows private caching for data blocks with high spatio-temporal locality. This allows us to better exploit on-chip cache capacity and enable low-latency memory access in large-scale multicores.
dc.format.extent (en_US): 11 p.
dc.title (en_US): A Case for Fine-Grain Adaptive Cache Coherence


Files in this item

File: MIT-CSAIL-TR-2012-012.pdf (777.9 KB, application/pdf)

