Welcome to P K Kelkar Library, Online Public Access Catalogue (OPAC)

Deep learning for computer architects

By: Reagen, Brandon [author.].
Contributor(s): Adolf, Robert [author.] | Whatmough, Paul [author.] | Wei, Gu-Yeon [author.] | Brooks, David, 1975 May 23- [author.].
Material type: Book
Series: Synthesis digital library of engineering and computer science; Synthesis lectures on computer architecture, # 41
Publisher: [San Rafael, California] : Morgan & Claypool, 2017
Description: 1 PDF (xiv, 109 pages) : illustrations
Content type: text
Media type: electronic
Carrier type: online resource
ISBN: 9781627059855
Subject(s): Machine learning | Neural networks (Computer science) | Computer architecture | deep learning | neural network accelerators | hardware software co-design | DNN benchmarking and characterization | hardware support for machine learning
Genre/Form: Electronic books
DDC classification: 006.31
Online resources: Abstract with links to resource
Contents:
1. Introduction -- 1.1 The rises and falls of neural networks -- 1.2 The third wave -- 1.2.1 A virtuous cycle -- 1.3 The role of hardware in deep learning -- 1.3.1 State of the practice --
2. Foundations of deep learning -- 2.1 Neural networks -- 2.1.1 Biological neural networks -- 2.1.2 Artificial neural networks -- 2.1.3 Deep neural networks -- 2.2 Learning -- 2.2.1 Types of learning -- 2.2.2 How deep neural networks learn --
3. Methods and models -- 3.1 An overview of advanced neural network methods -- 3.1.1 Model architectures -- 3.1.2 Specialized layers -- 3.2 Reference workloads for modern deep learning -- 3.2.1 Criteria for a deep learning workload suite -- 3.2.2 The Fathom workloads -- 3.3 Computational intuition behind deep learning -- 3.3.1 Measurement and analysis in a deep learning framework -- 3.3.2 Operation type profiling -- 3.3.3 Performance similarity -- 3.3.4 Training and inference -- 3.3.5 Parallelism and operation balance --
4. Neural network accelerator optimization: a case study -- 4.1 Neural networks and the simplicity wall -- 4.1.1 Beyond the wall: bounding unsafe optimizations -- 4.2 Minerva: a three-pronged approach -- 4.3 Establishing a baseline: safe optimizations -- 4.3.1 Training space exploration -- 4.3.2 Accelerator design space -- 4.4 Low-power neural network accelerators: unsafe optimizations -- 4.4.1 Data type quantization -- 4.4.2 Selective operation pruning -- 4.4.3 SRAM fault mitigation -- 4.5 Discussion -- 4.6 Looking forward --
5. A literature survey and review -- 5.1 Introduction -- 5.2 Taxonomy -- 5.3 Algorithms -- 5.3.1 Data types -- 5.3.2 Model sparsity -- 5.4 Architecture -- 5.4.1 Model sparsity -- 5.4.2 Model support -- 5.4.3 Data movement -- 5.5 Circuits -- 5.5.1 Data movement -- 5.5.2 Fault tolerance --
6. Conclusion -- Bibliography -- Authors' biographies.
Abstract: Machine learning, and specifically deep learning, has been hugely disruptive in many fields of computer science. The success of deep learning techniques in solving notoriously difficult classification and regression problems has resulted in their rapid adoption in solving real-world problems. The emergence of deep learning is widely attributed to a virtuous cycle whereby fundamental advancements in training deeper models were enabled by the availability of massive datasets and high-performance computer hardware. This text serves as a primer for computer architects in a new and rapidly evolving field. We review how machine learning has evolved since its inception in the 1960s and track the key developments leading up to the powerful deep learning techniques that emerged in the last decade. Next, we review representative workloads, including the most commonly used datasets and seminal networks across a variety of domains. In addition to discussing the workloads themselves, we also detail the most popular deep learning tools and show how aspiring practitioners can use the tools with the workloads to characterize and optimize DNNs. The remainder of the book is dedicated to the design and optimization of hardware and architectures for machine learning. Because high-performance hardware was so instrumental in making machine learning a practical solution, we recount a variety of recently proposed optimizations to further improve future designs. Finally, we present a review of recent research published in the area as well as a taxonomy to help readers understand how the various contributions fit into context.
Item type: E books
Current location: PK Kelkar Library, IIT Kanpur
Status: Available
Barcode: EBKE780
Total holds: 0

Mode of access: World Wide Web.

System requirements: Adobe Acrobat Reader.

Part of: Synthesis digital library of engineering and computer science.

Includes bibliographical references (pages 91-106).

Abstract freely available; full-text restricted to subscribers or individual document purchasers.

Indexed by: Compendex | INSPEC | Google Scholar | Google Book Search.

Also available in print.

Title from PDF title page (viewed on August 23, 2017).
