Welcome to P K Kelkar Library, Online Public Access Catalogue (OPAC)

Shared-memory synchronization

By: Scott, Michael Lee, 1959-.
Material type: Book.
Series: Synthesis digital library of engineering and computer science; Synthesis lectures on computer architecture, #23.
Publisher: San Rafael, Calif. (1537 Fourth Street, San Rafael, CA 94901 USA): Morgan & Claypool, c2013.
Description: 1 electronic text (xvii, 203 p.): ill., digital file.
ISBN: 9781608459575 (electronic bk.).
Subject(s): Memory management (Computer science) | Distributed shared memory | atomicity | barriers | busy-waiting | conditions | locality | locking | memory models | monitors | multiprocessor architecture | nonblocking algorithms | scheduling | semaphores | synchronization | transactional memory.
DDC classification: 005.43.
Online resources: Abstract with links to resource | Abstract with links to full text.
Contents:
1. Introduction -- 1.1 Atomicity -- 1.2 Condition synchronization -- 1.3 Spinning vs. blocking -- 1.4 Safety and liveness --
2. Architectural background -- 2.1 Cores and caches: basic shared-memory architecture -- 2.1.1 Temporal and spatial locality -- 2.1.2 Cache coherence -- 2.1.3 Processor (core) locality -- 2.2 Memory consistency -- 2.2.1 Sources of inconsistency -- 2.2.2 Special instructions to order memory access -- 2.2.3 Example architectures -- 2.3 Atomic primitives -- 2.3.1 The ABA problem -- 2.3.2 Other synchronization hardware --
3. Essential theory -- 3.1 Safety -- 3.1.1 Deadlock freedom -- 3.1.2 Atomicity -- 3.2 Liveness -- 3.2.1 Nonblocking progress -- 3.2.2 Fairness -- 3.3 The consensus hierarchy -- 3.4 Memory models -- 3.4.1 Formal framework -- 3.4.2 Data races -- 3.4.3 Real-world models --
4. Practical spin locks -- 4.1 Classical load-store only algorithms -- 4.2 Centralized algorithms -- 4.2.1 Test and set locks -- 4.2.2 The ticket lock -- 4.3 Queued spin locks -- 4.3.1 The MCS lock -- 4.3.2 The CLH lock -- 4.3.3 Which spin lock should I use? -- 4.4 Interface extensions -- 4.5 Special-case optimizations -- 4.5.1 Locality-conscious locking -- 4.5.2 Double-checked locking -- 4.5.3 Asymmetric locking --
5. Busy-wait synchronization with conditions -- 5.1 Flags -- 5.2 Barrier algorithms -- 5.2.1 The sense-reversing centralized barrier -- 5.2.2 Software combining -- 5.2.3 The dissemination barrier -- 5.2.4 Non-combining tree barriers -- 5.2.5 Which barrier should I use? -- 5.3 Barrier extensions -- 5.3.1 Fuzzy barriers -- 5.3.2 Adaptive barriers -- 5.3.3 Barrier-like constructs -- 5.4 Combining as a general technique --
6. Read-mostly atomicity -- 6.1 Reader-writer locks -- 6.1.1 Centralized algorithms -- 6.1.2 Queued reader-writer locks -- 6.2 Sequence locks -- 6.3 Read-copy update --
7. Synchronization and scheduling -- 7.1 Scheduling -- 7.2 Semaphores -- 7.3 Monitors -- 7.3.1 Hoare monitors -- 7.3.2 Signal semantics -- 7.3.3 Nested monitor calls -- 7.3.4 Java monitors -- 7.4 Other language mechanisms -- 7.4.1 Conditional critical regions -- 7.4.2 Futures -- 7.4.3 Series-parallel execution -- 7.5 Kernel/user interactions -- 7.5.1 Context switching overhead -- 7.5.2 Preemption and convoys -- 7.5.3 Resource minimization --
8. Nonblocking algorithms -- 8.1 Single-location structures -- 8.2 The Michael and Scott (M&S) queue -- 8.3 Harris and Michael (H&M) lists -- 8.4 Hash tables -- 8.5 Skip lists -- 8.6 Double-ended queues -- 8.6.1 Unbounded lock-free deques -- 8.6.2 Obstruction-free bounded deques -- 8.6.3 Work-stealing queues -- 8.7 Dual data structures -- 8.8 Nonblocking elimination -- 8.9 Universal constructions --
9. Transactional memory -- 9.1 Software TM -- 9.1.1 Dimensions of the STM design space -- 9.1.2 Buffering of speculative state -- 9.1.3 Access tracking and conflict resolution -- 9.1.4 Validation -- 9.1.5 Contention management -- 9.2 Hardware TM -- 9.2.1 Dimensions of the HTM design space -- 9.2.2 Speculative lock elision -- 9.2.3 Hybrid TM -- 9.3 Challenges -- 9.3.1 Semantics -- 9.3.2 Extensions -- 9.3.3 Implementation -- 9.3.4 Debugging and performance tuning --
Bibliography -- Author's biography.
Abstract: Since the advent of time sharing in the 1960s, designers of concurrent and parallel systems have needed to synchronize the activities of threads of control that share data structures in memory. In recent years, the study of synchronization has gained new urgency with the proliferation of multicore processors, on which even relatively simple user-level programs must frequently run in parallel. This lecture offers a comprehensive survey of shared-memory synchronization, with an emphasis on "systems-level" issues. It includes sufficient coverage of architectural details to understand correctness and performance on modern multicore machines, and sufficient coverage of higher-level issues to understand how synchronization is embedded in modern programming languages. The primary intended audience is "systems programmers" - the authors of operating systems, library packages, language run-time systems, concurrent data structures, and server and utility programs. Much of the discussion should also be of interest to application programmers who want to make good use of the synchronization mechanisms available to them, and to computer architects who want to understand the ramifications of their design decisions on systems-level code.
Item type: E books
Current location: PK Kelkar Library, IIT Kanpur
Status: Available
Barcode: EBKE498
Total holds: 0

Mode of access: World Wide Web.

System requirements: Adobe Acrobat Reader.

Part of: Synthesis digital library of engineering and computer science.

Series from website.

Includes bibliographical references (p. 173-202).

Abstract freely available; full-text restricted to subscribers or individual document purchasers.

Also available in print.

Title from PDF t.p. (viewed on June 15, 2013).
