Welcome to P K Kelkar Library, Online Public Access Catalogue (OPAC)


Multi-core cache hierarchies

By: Balasubramonian, Rajeev.
Contributor(s): Jouppi, Norman P. | Muralimanohar, Naveen.
Material type: Book
Series: Synthesis digital library of engineering and computer science; Synthesis lectures on computer architecture, # 17.
Publisher: San Rafael, Calif. (1537 Fourth Street, San Rafael, CA 94901 USA) : Morgan & Claypool, c2011.
Description: 1 electronic text (xiv, 137 p.) : ill., digital file.
ISBN: 9781598297546 (electronic bk.)
Subject(s): Cache memory | Computer architecture | Multi-core processors | Cache hierarchies | Shared and private caches | Non-uniform cache access (NUCA) | Quality-of-service | Cache partitions | Replacement policies | Memory prefetch | On-chip networks | Memory cells
DDC classification: 621.39732
Online resources: Abstract with links to resource
Also available in print.
Contents:
Preface -- Acknowledgments --
1. Basic elements of large cache design -- Shared vs. private caches -- Shared LLC -- Private LLC -- Workload analysis -- Centralized vs. distributed shared caches -- Non-uniform cache access -- Inclusion --
2. Organizing data in CMP last level caches -- Data management for a large shared NUCA cache -- Placement/migration/search policies for D-NUCA -- Replication policies in shared caches -- OS-based page placement -- Data management for a collection of private caches -- Discussion --
3. Policies impacting cache hit rates -- Cache partitioning for throughput and quality-of-service -- Introduction -- Throughput -- QoS policies -- Selecting a highly useful population for a large shared cache -- Replacement/insertion policies -- Novel organizations for associativity -- Block-level optimizations -- Summary --
4. Interconnection networks within large caches -- Basic large cache design -- Cache array design -- Cache interconnects -- Packet-switched routed networks -- The impact of interconnect design on NUCA and UCA caches -- NUCA caches -- UCA caches -- Innovative network architectures for large caches --
5. Technology -- Static-RAM limitations -- Parameter variation -- Modeling methodology -- Mitigating the effects of process variation -- Tolerating hard and soft errors -- Leveraging 3D stacking to resolve SRAM problems -- Emerging technologies -- 3T1D RAM -- Embedded DRAM -- Non-volatile memories --
6. Concluding remarks -- Bibliography -- Authors' biographies.
Abstract: A key determinant of overall system performance and power dissipation is the cache hierarchy, since access to off-chip memory consumes many more cycles and much more energy than on-chip accesses. In addition, multi-core processors are expected to place ever higher bandwidth demands on the memory system. All these issues make it important to avoid off-chip memory access by improving the efficiency of the on-chip cache. Future multi-core processors will have many large cache banks connected by a network and shared by many cores. Hence, many important problems must be solved: cache resources must be allocated across many cores, data must be placed in cache banks that are near the accessing core, and the most important data must be identified for retention. Finally, difficulties in scaling existing technologies require adapting to and exploiting new technology constraints. The book attempts a synthesis of recent cache research that has focused on innovations for multi-core processors. It is an excellent starting point for early-stage graduate students, researchers, and practitioners who wish to understand the landscape of recent cache research. The book is suitable as a reference for advanced computer architecture classes as well as for experienced researchers and VLSI engineers.
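Two of the abstract's themes, identifying the most important data for retention and the replacement policies surveyed in chapter 3, can be illustrated with a toy simulator. This is a minimal sketch of a set-associative cache with LRU replacement; the class name, parameters, and geometry are invented for illustration and are not taken from the book.

```python
from collections import OrderedDict

class SetAssociativeCache:
    """Toy set-associative cache model with LRU replacement (illustrative only)."""

    def __init__(self, num_sets=4, ways=2, block_size=64):
        self.num_sets = num_sets
        self.ways = ways
        self.block_size = block_size
        # One OrderedDict per set, ordered oldest-first; keys are tags.
        self.sets = [OrderedDict() for _ in range(num_sets)]
        self.hits = 0
        self.misses = 0

    def access(self, address):
        """Simulate one access; return True on hit, False on miss."""
        block = address // self.block_size
        index = block % self.num_sets       # which set the block maps to
        tag = block // self.num_sets        # identifies the block within the set
        cache_set = self.sets[index]
        if tag in cache_set:
            cache_set.move_to_end(tag)      # refresh LRU position on a hit
            self.hits += 1
            return True
        self.misses += 1
        if len(cache_set) >= self.ways:
            cache_set.popitem(last=False)   # evict the least-recently-used tag
        cache_set[tag] = None
        return False
```

Real LLC policies refine exactly this eviction decision (insertion position, re-reference prediction, partitioning across cores), which is what makes LRU a useful baseline for the policies the book discusses.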
Item type: E books
Current location: PK Kelkar Library, IIT Kanpur
Status: Available
Barcode: EBKE354
Total holds: 0

Mode of access: World Wide Web.

System requirements: Adobe Acrobat Reader.

Part of: Synthesis digital library of engineering and computer science.

Series from website.

Includes bibliographical references (p. 119-136).

Abstract freely available; full-text restricted to subscribers or individual document purchasers.

Also available in print.

Title from PDF t.p. (viewed on June 18, 2011).
