000 05957nam a2200721 i 4500
001 7154565
003 IEEE
005 20200413152918.0
006 m eo d
007 cr cn |||m|||a
008 150725s2015 caua foab 000 0 eng d
020 _a9781627057684
_qebook
020 _z9781627057677
_qprint
024 7 _a10.2200/S00650ED1V01Y201505CAC033
_2doi
035 _a(CaBNVSL)swl00405303
035 _a(OCoLC)914432293
040 _aCaBNVSL
_beng
_erda
_cCaBNVSL
_dCaBNVSL
050 4 _aQA76.9.A73
_bC443 2015
082 0 4 _a004.22
_223
100 1 _aChen, Yu-Ting,
_eauthor.
245 1 0 _aCustomizable computing /
_cYu-Ting Chen, Jason Cong, Michael Gill, Glenn Reinman, and Bingjun Xiao.
264 1 _aSan Rafael, California (1537 Fourth Street, San Rafael, CA 94901 USA) :
_bMorgan & Claypool,
_c2015.
300 _a1 PDF (xi, 106 pages) :
_billustrations.
336 _atext
_2rdacontent
337 _aelectronic
_2isbdmedia
338 _aonline resource
_2rdacarrier
490 1 _aSynthesis lectures on computer architecture,
_x1935-3243 ;
_v# 33
538 _aMode of access: World Wide Web.
538 _aSystem requirements: Adobe Acrobat Reader.
500 _aPart of: Synthesis digital library of engineering and computer science.
504 _aIncludes bibliographical references (pages 89-103).
505 0 _a1. Introduction --
505 8 _a2. Road map -- 2.1 Customizable system-on-chip design -- 2.1.1 Compute resources -- 2.1.2 On-chip memory hierarchy -- 2.1.3 Network-on-chip -- 2.2 Software layer --
505 8 _a3. Customization of cores -- 3.1 Introduction -- 3.2 Dynamic core scaling and defeaturing -- 3.3 Core fusion -- 3.4 Customized instruction set extensions -- 3.4.1 Vector instructions -- 3.4.2 Custom compute engines -- 3.4.3 Reconfigurable instruction sets -- 3.4.4 Compiler support for custom instructions --
505 8 _a4. Loosely coupled compute engines -- 4.1 Introduction -- 4.2 Loosely coupled accelerators -- 4.2.1 Wire-speed processor -- 4.2.2 Comparing hardware and software LCA management -- 4.2.3 Utilizing LCAs -- 4.3 Accelerators using field programmable gate arrays -- 4.4 Coarse-grain reconfigurable arrays -- 4.4.1 Static mapping -- 4.4.2 Run-time mapping -- 4.4.3 CHARM -- 4.4.4 Using composable accelerators --
505 8 _a5. On-chip memory customization -- 5.1 Introduction -- 5.1.1 Caches and buffers (scratchpads) -- 5.1.2 On-chip memory system customizations -- 5.2 CPU cache customizations -- 5.2.1 Coarse-grain customization strategies -- 5.2.2 Fine-grain customization strategies -- 5.3 Buffers for accelerator-rich architectures -- 5.3.1 Shared buffer system design for accelerators -- 5.3.2 Customization of buffers inside an accelerator -- 5.4 Providing buffers in caches for CPUs and accelerators -- 5.4.1 Providing software-managed scratchpads for CPUs -- 5.4.2 Providing buffers for accelerators -- 5.5 Caches with disparate memory technologies -- 5.5.1 Coarse-grain customization strategies -- 5.5.2 Fine-grain customization strategies --
505 8 _a6. Interconnect customization -- 6.1 Introduction -- 6.2 Topology customization -- 6.2.1 Application-specific topology synthesis -- 6.2.2 Reconfigurable shortcut insertion -- 6.2.3 Partial crossbar synthesis and reconfiguration -- 6.3 Routing customization -- 6.3.1 Application-aware deadlock-free routing -- 6.3.2 Data flow synthesis -- 6.4 Customization enabled by new device/circuit technologies -- 6.4.1 Optical interconnects -- 6.4.2 Radio-frequency interconnects -- 6.4.3 RRAM-based interconnects --
505 8 _a7. Concluding remarks -- Bibliography -- Authors' biographies.
506 1 _aAbstract freely available; full-text restricted to subscribers or individual document purchasers.
510 0 _aCompendex
510 0 _aINSPEC
510 0 _aGoogle Scholar
510 0 _aGoogle Book Search
520 3 _aSince the end of Dennard scaling in the early 2000s, improving the energy efficiency of computation has been the main concern of both the research community and industry. The large energy-efficiency gap between general-purpose processors and application-specific integrated circuits (ASICs) motivates the exploration of customizable architectures, in which the architecture can be adapted to the workload. In this Synthesis lecture, we present an overview of and introduction to recent developments in energy-efficient customizable architectures, including customizable cores and accelerators, on-chip memory customization, and interconnect optimization. In addition to discussing the general techniques and classifying the different approaches used in each area, we highlight and illustrate some of the most successful design examples in each category and discuss their impact on performance and energy efficiency. We hope that this work captures the state of the art in research and development on customizable architectures and serves as a useful reference for further research, design, and implementation toward large-scale deployment in future computing systems.
530 _aAlso available in print.
588 _aTitle from PDF title page (viewed on July 25, 2015).
650 0 _aComputer architecture.
653 _aaccelerator architectures
653 _amemory architecture
653 _amultiprocessor interconnection
653 _aparallel architectures
653 _areconfigurable architectures
653 _amemory
653 _agreen computing
700 1 _aCong, Jason,
_eauthor.
700 1 _aGill, Michael,
_eauthor.
700 1 _aReinman, Glenn,
_eauthor.
700 1 _aXiao, Bingjun,
_eauthor.
776 0 8 _iPrint version:
_z9781627057677
830 0 _aSynthesis digital library of engineering and computer science.
830 0 _aSynthesis lectures on computer architecture ;
_v# 33.
_x1935-3243
856 4 2 _3Abstract with links to resource
_uhttp://ieeexplore.ieee.org/servlet/opac?bknumber=7154565
999 _c562145
_d562145