Welcome to P K Kelkar Library, Online Public Access Catalogue (OPAC)

Automatic parallelization : an overview of fundamental compiler techniques /

By: Midkiff, Samuel P.
Material type: Book
Series: Synthesis digital library of engineering and computer science; Synthesis lectures in computer architecture, # 19
Publisher: San Rafael, Calif. (1537 Fourth Street, San Rafael, CA 94901 USA) : Morgan & Claypool, c2012
Description: 1 electronic text (xii, 157 p.) : ill., digital file
ISBN: 9781608458424 (electronic bk.)
Subject(s): Parallelizing compilers | compilers | automatic parallelization | data dependence analysis | data flow analysis | intermediate representations | transformations | optimization | shared memory | distributed memory
DDC classification: 004.35
Online resources: Abstract with links to resource | Also available in print.
Contents:
1. Introduction and overview -- 1.1 Parallelism and independence -- 1.2 Parallel execution -- 1.2.1 Shared and distributed memory parallelism -- 1.2.2 Structures of parallel computation -- 1.3 Compiler fundamentals -- 1.3.1 Compiler phases -- 1.3.2 Parsing -- 1.3.3 Intermediate representations -- 1.4 Compiler support for parallel machines --
2. Dependence analysis, dependence graphs and alias analysis -- 2.1 Dataflow analysis -- 2.1.1 Constant propagation -- 2.1.2 Alias analysis -- 2.2 Abstract interpretation -- 2.3 Data dependence analysis -- 2.3.1 Determining references to test for dependence -- 2.3.2 Testing for dependence -- 2.3.3 Arrays of arrays and dependence analysis -- 2.4 Control dependence -- 2.5 Use-def chains and dependence -- 2.6 Dependence analysis in parallel programs --
3. Program parallelization -- 3.1 Simple loop parallelization -- 3.2 Parallelizing loops with acyclic and cyclic dependence graphs -- 3.3 Targeting vector hardware -- 3.4 Parallelizing loops using producer/consumer synchronization -- 3.4.1 Producer and consumer synchronization -- 3.4.2 Optimizing producer/consumer synchronization -- 3.5 Parallelizing recursive constructs -- 3.6 Parallelization of while loops -- 3.6.1 Determining iterations to be executed by a thread -- 3.6.2 Dealing with the effects of speculation -- 3.7 Software pipelining for instruction level parallelism --
4. Transformations to modify and eliminate dependences -- 4.1 Loop peeling and splitting -- 4.2 Loop skewing -- 4.3 Induction variable substitution -- 4.4 Forward substitution -- 4.5 Scalar expansion and privatization -- 4.6 Array privatization -- 4.7 Node splitting -- 4.8 Reduction recognition -- 4.9 Which transformations are most important? --
5. Transformation of iterative and recursive constructs -- 5.1 Loop blocking or strip mining -- 5.2 Loop unrolling -- 5.3 Loop fusion and fission -- 5.4 Loop reversal -- 5.5 Loop interchange -- 5.6 Tiling -- 5.7 Unimodular transformations --
6. Compiling for distributed memory machines -- 6.1 Data distribution -- 6.1.1 Replicated distribution -- 6.1.2 Block distribution -- 6.1.3 Cyclic distribution -- 6.1.4 Block-cyclic distribution -- 6.2 Computing the reference set -- 6.3 Computation partitioning -- 6.3.1 Bounds of replicated array dimensions -- 6.3.2 Bounds of block distributed array dimensions -- 6.3.3 Bounds of cyclically distributed array dimensions -- 6.3.4 Bounds of block-cyclically distributed array dimensions -- 6.3.5 Subscripts with multiple loop indices -- 6.3.6 Generating loop bounds for multi-dimensional arrays -- 6.3.7 Generating loop bounds with multiple references -- 6.4 Generating communication -- 6.4.1 The shift communication pattern -- 6.4.2 The broadcast communication pattern -- 6.5 Distributed memory programming languages -- 6.5.1 High performance Fortran (HPF) -- 6.5.2 Co-array Fortran -- 6.5.3 Unified parallel C (UPC) --
7. Solving Diophantine equations -- 7.1 Solving single Diophantine equations -- 7.2 Solving multiple Diophantine equations -- 7.3 Extreme values of integer functions --
8. A guide to further reading -- 8.1 Compiler fundamentals -- 8.2 Dependence analysis, dependence graphs and alias analysis -- 8.3 Program parallelization -- 8.4 Transformations to modify and eliminate dependences -- 8.5 Reduction recognition -- 8.6 Transformation of iterative and recursive constructs -- 8.7 Tiling -- 8.8 Compiling for distributed memory machines -- 8.9 Current and future directions in parallelizing compilers -- Author's biography.
Abstract: Compiling for parallelism is a longstanding topic of compiler research. This book describes the fundamental principles of compiling "regular" numerical programs for parallelism. We begin with an explanation of analyses that allow a compiler to understand the interaction of data reads and writes in different statements and loop iterations during program execution. These analyses include dependence analysis, use-def analysis and pointer analysis. Next, we describe how the results of these analyses are used to enable transformations that make loops more amenable to parallelization, and discuss transformations that expose parallelism to target shared memory multicore and vector processors. We then discuss some problems that arise when parallelizing programs for execution on distributed memory machines. Finally, we conclude with an overview of solving Diophantine equations and suggestions for further reading on the topics of this book to enable the interested reader to delve deeper into the field.
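
As an illustration of the kind of loop-level dependence reasoning the abstract describes, the following minimal C sketch (not taken from the book; the array names, sizes, and the OpenMP pragma are illustrative assumptions) contrasts a loop whose iterations are independent, and may therefore be run in parallel, with a loop that carries a flow dependence between iterations.

    /* Minimal sketch (not from the book): independent vs. dependent loop iterations. */
    #include <stdio.h>
    #define N 1000

    int main(void) {
        static double a[N], b[N];
        for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0 * i; }

        /* Independent iterations: no iteration reads data written by another,
           so a parallelizing compiler (or an OpenMP directive) may run them concurrently. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            a[i] = a[i] + b[i];

        /* Loop-carried flow dependence: iteration i reads a[i-1], written by
           iteration i-1, so this loop cannot be parallelized as written
           without a transformation. */
        for (int i = 1; i < N; i++)
            a[i] = a[i-1] + b[i];

        printf("a[N-1] = %f\n", a[N-1]);
        return 0;
    }

With an OpenMP-aware compiler (e.g., gcc -fopenmp) the pragma parallelizes the first loop; without OpenMP support the pragma is ignored and the program still runs correctly, which is the sequential semantics a parallelizing compiler must preserve.
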
Holdings:
Item type: E books
Current location: PK Kelkar Library, IIT Kanpur
Status: Available
Barcode: EBKE394
Total holds: 0

Mode of access: World Wide Web.

System requirements: Adobe Acrobat Reader.

Part of: Synthesis digital library of engineering and computer science.

Series from website.

Includes bibliographical references (p. 133-156).

Abstract freely available; full-text restricted to subscribers or individual document purchasers.

Indexed by: Compendex; INSPEC; Google scholar; Google book search.

Also available in print.

Title from PDF t.p. (viewed on February 15, 2012).
