Welcome to P K Kelkar Library, Online Public Access Catalogue (OPAC)


Single-instruction multiple-data execution

By: Hughes, Christopher J. (author).
Material type: Book
Series: Synthesis digital library of engineering and computer science; Synthesis lectures in computer architecture, #32
Publisher: San Rafael, California (1537 Fourth Street, San Rafael, CA 94901 USA): Morgan & Claypool, 2015
Description: 1 PDF (xv, 105 pages): illustrations
Content type: text
Media type: electronic
Carrier type: online resource
ISBN: 9781627057646
Subject(s): SIMD (Computer architecture) | Parallel file systems (Computer science) | SIMD | vector processor | data parallelism | autovectorization | control divergence | vector masks | unaligned accesses | non-contiguous accesses | gather/scatter | horizontal operations | vector reductions | shuffle | permute | conflict detection
DDC classification: 004.22
Online resources: Abstract with links to resource
Contents:
1. Data parallelism -- 1.1 Data parallelism -- 1.2 Data parallelism in applications -- 1.2.1 Physical simulation -- 1.2.2 Computer vision -- 1.2.3 Speech recognition -- 1.2.4 Database management systems -- 1.2.5 Financial analytics -- 1.2.6 Medical imaging --
2. Exploiting data parallelism with SIMD execution -- 2.1 Exploiting data parallelism -- 2.2 SIMD execution -- 2.3 SIMD performance and energy benefits -- 2.4 Limits to SIMD scaling -- 2.5 Programming and compilation -- 2.5.1 Programming for SIMD execution -- 2.5.2 Challenges of static analysis --
3. Computation and control flow -- 3.1 SIMD registers -- 3.2 SIMD computation -- 3.2.1 Basic arithmetic and logic -- 3.2.2 Data element size and overflow -- 3.2.3 Advanced arithmetic -- 3.3 Control flow -- 3.3.1 SIMD execution with control flow -- 3.3.2 Conditional SIMD execution -- 3.3.3 Efficiency implications of control divergence --
4. Memory operations -- 4.1 Contiguous patterns -- 4.1.1 Unaligned accesses -- 4.1.2 Throughput implications -- 4.2 Non-contiguous patterns -- 4.2.1 Programming model issues -- 4.2.2 Implementing gather and scatter instructions -- 4.2.3 Locality in gathers and scatters --
5. Horizontal operations -- 5.1 Limits to horizontal operations -- 5.2 Data movement -- 5.3 Reductions -- 5.4 Reducing control divergence -- 5.5 Potential dependences -- 5.5.1 Single-index case -- 5.5.2 Multi-index case --
6. Conclusions -- 6.1 Future directions -- Bibliography -- Author's biography.
Abstract: Having hit power limitations to even more aggressive out-of-order execution in processor cores, many architects in the past decade have turned to single-instruction-multiple-data (SIMD) execution to increase single-threaded performance. SIMD execution, or having a single instruction drive execution of an identical operation on multiple data items, was already well established as a technique to efficiently exploit data parallelism. Furthermore, support for it was already included in many commodity processors. However, in the past decade, SIMD execution has seen a dramatic increase in the set of applications using it, which has motivated big improvements in hardware support in mainstream microprocessors. The easiest way to provide a big performance boost to SIMD hardware is to make it wider, i.e., to increase the number of data items the hardware operates on simultaneously, and microprocessor vendors have indeed done this. However, as we exploit more data parallelism in applications, certain challenges can negatively impact performance. In particular, conditional execution, non-contiguous memory accesses, and the presence of some dependences across data items are key roadblocks to achieving peak performance with SIMD execution. This book first describes data parallelism and why it is so common in popular applications. We then describe SIMD execution and explain where its performance and energy benefits come from compared to other techniques for exploiting parallelism. Finally, we describe SIMD hardware support in current commodity microprocessors, including both expected and unexpected design tradeoffs, as we work to overcome challenges encountered when trying to map real software to SIMD execution.
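
The sketch below is not from the book; it is a minimal illustration in C, using standard AVX intrinsics, of the idea the abstract describes: one instruction driving the same operation across multiple data items, with conditional execution handled by per-lane masks. The function and array names are illustrative assumptions.

    /* Compile with, e.g., gcc -mavx. */
    #include <immintrin.h>
    #include <stddef.h>

    /* Scalar baseline: one add per loop iteration. */
    void add_scalar(float *c, const float *a, const float *b, size_t n) {
        for (size_t i = 0; i < n; i++)
            c[i] = a[i] + b[i];
    }

    /* SIMD version: one 256-bit vector add handles 8 floats at a time.
       Assumes n is a multiple of 8 to keep the sketch short. */
    void add_simd(float *c, const float *a, const float *b, size_t n) {
        for (size_t i = 0; i < n; i += 8) {
            __m256 va = _mm256_loadu_ps(a + i);   /* unaligned vector load */
            __m256 vb = _mm256_loadu_ps(b + i);
            __m256 vc = _mm256_add_ps(va, vb);    /* 8 additions in one instruction */
            _mm256_storeu_ps(c + i, vc);
        }
    }

    /* Conditional execution via masking: c[i] = (a[i] > 0) ? a[i] + b[i] : a[i].
       Both outcomes are computed; a per-lane mask selects the result. */
    void add_if_positive(float *c, const float *a, const float *b, size_t n) {
        const __m256 zero = _mm256_setzero_ps();
        for (size_t i = 0; i < n; i += 8) {
            __m256 va   = _mm256_loadu_ps(a + i);
            __m256 vb   = _mm256_loadu_ps(b + i);
            __m256 sum  = _mm256_add_ps(va, vb);
            __m256 mask = _mm256_cmp_ps(va, zero, _CMP_GT_OQ); /* per-lane predicate */
            __m256 res  = _mm256_blendv_ps(va, sum, mask);     /* blend by mask */
            _mm256_storeu_ps(c + i, res);
        }
    }

In the masked variant both sides of the branch are evaluated and a blend selects the result in each lane; the efficiency cost of such control divergence is the subject of section 3.3.3 of the book.
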
Item type: E books
Current location: PK Kelkar Library, IIT Kanpur
Status: Available
Barcode: EBKE640
Total holds: 0

Mode of access: World Wide Web.

System requirements: Adobe Acrobat Reader.

Part of: Synthesis digital library of engineering and computer science.

Includes bibliographical references (pages 95-103).

Abstract freely available; full-text restricted to subscribers or individual document purchasers.

Also available in print.

Title from PDF title page (viewed on June 20, 2015).
