Hierarchical Scheduling in Parallel and Cluster Systems
Springer Science & Business Media, 6 Dec 2012 - 251 pages

Multiple processor systems are an important class of parallel systems. Over the years, several architectures have been proposed to build such systems to satisfy the requirements of high-performance computing. These architectures span a wide variety of system types. At the low end of the spectrum, we can build a small, shared-memory parallel system with tens of processors. These systems typically use a bus to interconnect the processors and memory. Such systems, for example, are becoming commonplace in high-performance graphics workstations. These systems are called uniform memory access (UMA) multiprocessors because they provide uniform access to memory for all processors. They provide a single address space, which is preferred by programmers. This architecture, however, cannot be extended even to medium systems with hundreds of processors due to bus bandwidth limitations. To scale systems to the medium range, i.e., to hundreds of processors, non-bus interconnection networks have been proposed. These systems, for example, use a multistage dynamic interconnection network. Such systems also provide global, shared memory like the UMA systems. However, they introduce local and remote memories, which lead to a non-uniform memory access (NUMA) architecture. Distributed-memory architecture is used for systems with thousands of processors. These systems differ from the shared-memory architectures in that there is no globally accessible shared memory. Instead, they use message passing to facilitate communication among the processors. As a result, they do not provide a single address space.
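The two programming models the overview contrasts can be sketched in a few lines. This is an illustrative example, not code from the book: threads stand in for processors sharing one address space (the UMA/NUMA case), while a queue stands in for explicit message passing between nodes that share no memory (the distributed-memory case).

```python
# Sketch: shared address space vs. message passing (illustrative only).
import threading
import queue

# --- Shared address space: all workers update the same variable.
counter = 0
lock = threading.Lock()

def shared_worker():
    global counter
    with lock:                 # synchronization is the programmer's burden
        counter += 1

threads = [threading.Thread(target=shared_worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# --- Message passing: no shared variable; data is sent explicitly.
inbox, outbox = queue.Queue(), queue.Queue()

def node():
    n = inbox.get()            # receive a message
    outbox.put(n + 1)          # send the result back as a message

t = threading.Thread(target=node)
t.start()
inbox.put(41)
result = outbox.get()
t.join()

print(counter, result)  # 4 42
```

In the shared-memory model every worker can reach `counter` directly but must coordinate access; in the message-passing model the data itself moves, which is why distributed-memory systems cannot offer a single address space.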
Contents

PARALLEL AND CLUSTER SYSTEMS  13
PARALLEL JOB SCHEDULING  49
HIERARCHICAL TASK QUEUE ORGANIZATION  87
PERFORMANCE OF SCHEDULING POLICIES  121
PERFORMANCE WITH SYNCHRONIZATION  141
SCHEDULING IN SHARED-MEMORY  167
SCHEDULING IN DISTRIBUTED-MEMORY  193
SCHEDULING IN CLUSTER SYSTEMS  213
Other editions - View all

Hierarchical Scheduling in Parallel and Cluster Systems, Sivarama Dandamudi, Limited preview - 2003
Hierarchical Scheduling in Parallel and Cluster Systems, Sivarama Dandamudi, No preview available - 2012