Hierarchical Scheduling in Parallel and Cluster Systems

Front Cover
Springer Science & Business Media, Jun 30, 2003 - 251 pages
Multiple processor systems are an important class of parallel systems. Over the years, several architectures have been proposed to build such systems to satisfy the requirements of high-performance computing. These architectures span a wide variety of system types. At the low end of the spectrum, we can build a small, shared-memory parallel system with tens of processors. These systems typically use a bus to interconnect the processors and memory. Such systems, for example, are becoming commonplace in high-performance graphics workstations. These systems are called uniform memory access (UMA) multiprocessors because they provide uniform memory access to all processors. These systems provide a single address space, which is preferred by programmers. This architecture, however, cannot be extended even to medium systems with hundreds of processors due to bus bandwidth limitations. To scale systems to the medium range, i.e., to hundreds of processors, non-bus interconnection networks have been proposed. These systems, for example, use a multistage dynamic interconnection network. Such systems also provide global, shared memory like the UMA systems. However, they introduce local and remote memories, which lead to a non-uniform memory access (NUMA) architecture. Distributed-memory architecture is used for systems with thousands of processors. These systems differ from the shared-memory architectures in that there is no globally accessible shared memory. Instead, they use message passing to facilitate communication among the processors. As a result, they do not provide a single address space.
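The distinction the description draws between shared-memory and distributed-memory communication can be illustrated with a minimal MPI program (MPI itself is covered in Section 2.5.2 of the monograph). The sketch below is not taken from the book; it is only a generic example, showing two processes exchanging an integer through explicit MPI_Send/MPI_Recv calls, since a distributed-memory system has no common address space to share the value through.

    /* Minimal message-passing sketch (illustrative, not from the book):
     * process 0 sends an integer to process 1, which must receive it
     * explicitly because there is no shared address space.
     * Compile with mpicc and run with, e.g., "mpirun -np 2 ./a.out". */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        int rank, value;
        MPI_Init(&argc, &argv);                  /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* which process am I?   */

        if (rank == 0) {
            value = 42;                          /* data exists only here */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("process 1 received %d from process 0\n", value);
        }

        MPI_Finalize();                          /* shut down the runtime */
        return 0;
    }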

From inside the book

Contents

INTRODUCTION  3
1.2 Parallel Architectures  4
1.2.1 SIMD Systems  5
1.2.2 MIMD Systems  6
1.3 Job Scheduling  8
1.4 Software Architectures  10
1.5 Overview of the Monograph  11
PARALLEL AND CLUSTER SYSTEMS  13
2.2 Parallel Architectures  15
2.2.2 NUMA Systems  16
2.2.4 Distributed Shared Memory  18
2.3 Example Parallel Systems  19
2.3.2 Stanford DASH System  21
2.3.3 ASCI Systems  22
2.4 Interconnection Networks  24
2.4.1 Dynamic Interconnection Networks  26
2.4.2 Static Interconnection Networks  29
2.5 Interprocess Communication  36
2.5.2 MPI  40
2.5.3 TreadMarks  43
2.6 Cluster Systems  45
2.6.1 Beowulf  46
2.7 Summary  48
PARALLEL JOB SCHEDULING  49
3.2 Parallel Program Structures  51
3.2.2 Divide-and-Conquer Programs  52
3.2.3 Matrix Factorization Programs  53
3.3 Task Queue Organizations  55
3.3.1.1 Improving Centralized Organization  57
3.3.1.2 Improving Distributed Organization  59
3.4 Scheduling Policies  63
3.4.1.2 Dynamic Policies  64
3.4.2 An Example Space-Sharing Policy  65
3.4.2.1 Adaptive Space-Sharing Policy  66
3.4.2.2 A Modification  67
3.4.2.4 Performance Comparison  68
3.4.2.5 Performance Comparison  69
3.4.2.6 Handling Heterogeneity  75
3.4.3 Time-Sharing Policies  78
3.4.4 Hybrid Policies  80
3.5 Example Policies  81
3.5.2 ASCI Blue Pacific  82
3.5.3 Portable Batch System  83
3.6 Summary  84
HIERARCHICAL TASK QUEUE ORGANIZATION  85
HIERARCHICAL TASK QUEUE ORGANIZATION  87
4.2 Hierarchical Organization  89
4.3 Workload and System Models  93
4.4 Performance Analysis  96
4.4.2 Utilization Analysis  97
4.4.2.1 Centralized Organization  98
4.4.2.3 Hierarchical Organization  99
4.4.3.2 Distributed Organization  100
4.5 Performance Comparison  101
4.5.1 Impact of Access Contention  102
4.5.2 Effect of Number of Tasks  104
4.5.3 Sensitivity to Service Time Variance  107
4.5.4 Impact of System Size  109
4.5.5 Influence of Branching and Transfer Factors  111
4.6 Performance of Dynamic Task Removal Policies  114
4.7 Summary  117
PERFORMANCE OF SCHEDULING POLICIES  121
5.2 Performance of Job Scheduling Policies  122
5.2.2 Results  123
5.2.2.2 Sensitivity to Task Service Time Variance  124
5.2.2.3 Sensitivity to Variance in Task Distribution  125
5.3 Performance of Task Scheduling Policies  126
5.3.2 Results and Discussion  131
5.3.2.1 Principal Comparison  132
5.3.2.2 Impact of Variance in Task Service Time  133
5.3.2.3 Impact of Variance in Task Distribution  134
5.3.2.4 Effect of Window Size  135
5.3.2.5 Sensitivity to Other Parameters  137
5.4 Conclusions  138
PERFORMANCE WITH SYNCHRONIZATION WORKLOADS  141
6.2 Related Work  142
6.3 System and Workload Models  145
6.4 Spinning and Blocking Policies  147
6.4.2 Blocking Policies  148
6.5.1 Workload Model  149
6.5.2.1 Principal Comparison  150
6.5.2.2 Sensitivity to Service Time Variance  153
6.5.2.3 Impact of Granularity  154
6.5.2.4 Impact of Queue Access Time  155
6.6 Barrier Synchronization Workload Results  156
6.6.2 Simulation Results  157
6.6.2.2 Sensitivity to Service Time Variance  160
6.6.2.4 Impact of Queue Access Time  161
6.7 Cache Effects  162
6.8 Summary  163
HIERARCHICAL SCHEDULING POLICIES  165
SCHEDULING IN SHARED-MEMORY MULTIPROCESSORS  167
7.2 Space-Sharing and Time-Sharing Policies  168
7.2.2 Modified RRJob  170
7.4 Performance Evaluation  174
7.4.2 Performance Analysis  176
7.4.2.1 Effect of Scheduling Overhead  178
7.4.2.2 Impact of Variance in Service Demand  181
7.4.2.3 Effect of Task Granularity  183
7.4.2.4 Effect of the ERF Factor  184
7.4.2.5 Effect of Quantum Size  186
7.5 Performance with Lock Accessing Workload  187
7.5.2 Results  188
7.6 Conclusions  190
SCHEDULING IN DISTRIBUTED-MEMORY MULTICOMPUTERS  193
8.2 Hierarchical Scheduling Policy  195
8.3 Scheduling Policies for Performance Comparison  200
8.4 Workload Model  201
8.5 Performance Comparison  203
8.5.2 Performance with Non-Uniform Workload  204
8.5.2.1 Performance with 50-50 distribution  205
8.5.2.2 Sensitivity to variance in job service demand  206
8.5.2.3 Performance under 50-25 distribution  208
8.5.2.4 Performance under 50-75 distribution  209
8.5.3 Discussion  210
8.6 Conclusions  211
SCHEDULING IN CLUSTER SYSTEMS  213
9.2 Hierarchical Scheduling Policy  215
9.2.1 Job Placement Policy  216
9.2.2 Dynamic Load Balancing Algorithm  218
9.3 Space-Sharing and Time-Sharing Policies  220
9.3.1 Space-Sharing Policy  221
9.4 Performance Comparison  222
9.4.1 Workload Model  224
9.4.3 Non-Uniform Workload Results  227
9.5 Summary  229
EPILOG  231
CONCLUSIONS  233
10.2 Concluding Remarks  236
References  239
Index  249
Copyright

Popular passages

Page 239 - "Hierarchical Interconnection Networks for Multicomputer Systems," IEEE Transactions on Computers, Vol.
