Parallel Processing of Discrete Optimization Problems: DIMACS Workshop, April 28-29, 1994
Panos M. Pardalos, Mauricio G. C. Resende, K. G. Ramakrishnan
American Mathematical Society, January 1, 1995 - 374 pages

This book contains papers presented at the Workshop on Parallel Processing of Discrete Optimization Problems, held at DIMACS in April 1994. The contents cover a wide spectrum of the most recent algorithms and applications in parallel processing of discrete optimization and related problems. Topics include parallel branch-and-bound algorithms, scalability, load balancing, parallelism with irregular data structures, and scheduling task graphs on parallel machines. Applications include parallel algorithms for solving satisfiability problems, location problems, linear programming, and quadratic and linear assignment problems. This book would be suitable as a textbook in advanced courses on parallel algorithms and combinatorial optimization.
Contents
A Note on Parallel Randomized Algorithms for Searching Problems | 33
Modeling Parallel Branch-and-Bound for Asynchronous Implementations | 45
A Data Parallel Space Dilation Algorithm for the Concentrator Location | 57
A Multistage Approach for Scheduling Task Graphs on Parallel Machines | 81
Parallel Algorithms for Satisfiability (SAT) Problem | 105
Experiences with a Parallel Formulation of an Interior Point Algorithm | 163
Experiences with a Synchronous Parallel Branch and Bound Algorithm | 181
New Anticipatory Load Balancing Strategies for Parallel A* Algorithms | 197
A Parallel Algorithm for Computing all Homomorphisms of Deterministic | 233
Query Optimization and Processing in Parallel Databases | 259
Scheduling Acyclic Task Graphs on Distributed Memory Parallel | 289
Scalability of Massively Parallel Depth-First Search | 305
On Irregular Data Structures and Asynchronous Parallel Branch and Bound | 323
Parallel Algorithms for the Assignment Problem: An Experimental | 337
Serial and Parallel Algorithms for QSP Problems | 353
Popular passages
Page 147 - The solution to the SAT problem corresponds to a set of global minimum points of the objective function.
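The passage above states the core encoding used in optimization-based SAT solving: count the unsatisfied clauses, and a satisfying assignment is exactly a global minimum with value 0. A minimal sketch of that idea (my own illustration, not code from the book; the clause representation and names are assumptions):

```python
def unsat_count(clauses, assignment):
    """Objective function: number of clauses NOT satisfied by `assignment`.

    `clauses` is a list of clauses, each a list of signed integers in
    DIMACS style: 3 means x3 must be true, -3 means x3 must be false.
    `assignment` maps variable index -> bool.
    A satisfying assignment is exactly a global minimum with value 0.
    """
    count = 0
    for clause in clauses:
        # A clause is satisfied if any of its literals holds.
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            count += 1
    return count

# Example instance: (x1 OR NOT x2) AND (x2 OR x3)
clauses = [[1, -2], [2, 3]]
satisfying = {1: True, 2: True, 3: False}
all_false = {1: False, 2: False, 3: False}

print(unsat_count(clauses, satisfying))  # 0 -> a global minimum
print(unsat_count(clauses, all_false))   # 1 -> clause (x2 OR x3) fails
```

Any local-search or continuous-optimization method driving this count to zero has solved the instance, which is what makes the formulation attractive for parallel search.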
Page 46 - Conceptually, each operation locks the entire object it is applied to and releases the lock only when it is finished. To be more precise, the model guarantees serializability [Eswaran et al. 1976] of operation invocations: if two operations are applied simultaneously to the same data-object, then the result is as if one of them is executed before the other; the order of invocation, however, is nondeterministic. An implementation of the model need not actually execute all operations one by one. To...
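The model quoted above can be illustrated with an object-wide lock: every operation holds the object's single lock for its full duration, so concurrent invocations are serializable (the result equals *some* serial order, though which order is nondeterministic). A minimal sketch, my own and not from the book:

```python
import threading

class LockedCounter:
    """Each operation conceptually locks the entire object while it runs."""

    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def increment(self):
        with self._lock:       # acquire object-wide lock on entry
            self._value += 1   # operation body executes atomically
                               # lock released on exit

    def get(self):
        with self._lock:
            return self._value

counter = LockedCounter()
threads = [threading.Thread(target=counter.increment) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.get())  # 100 -- as if the increments ran one at a time
```

As the passage notes, a real implementation need not serialize everything; finer-grained locking that preserves the same serializable outcomes is equally valid.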
Page 32 - T. E. Anderson. The performance of spin lock alternatives for shared-memory multiprocessors. IEEE Transactions on Parallel and Distributed Systems, 1(1):6-16, Jan.
Page 153 - A_r represents the set of output neurons. The connection weights and coupling coefficients are updated step by step, proportional to the opposite direction of the error gradient, respectively (Eqs. 13.5 and 13.6), where η₁ and η₂ are the learning rates.
Page 81 - The content of the information herein does not necessarily reflect the position of the Government and official endorsement should not be inferred.
Page 92 - Fig. 16(c) an ordering algorithm is needed to order the tasks in the nonlinear cluster and then the parallel time must be computed to get the new DS. One of the key ideas in the DSC algorithm is that it computes the schedule and parallel time incrementally from one step to the next in O(log v) time. Thus the total complexity is O((v + e) log v).
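The "parallel time" that DSC maintains incrementally is, for a task graph with computation weights on tasks and communication costs on edges, the longest weighted path through the DAG (the makespan with unboundedly many processors). A toy sketch of computing it from scratch (my own illustration, not the DSC algorithm itself; the graph and weights are made up):

```python
from collections import defaultdict

def parallel_time(tasks, edges):
    """tasks: {name: compute cost}; edges: {(u, v): communication cost}.

    Returns the longest weighted path length, i.e. the makespan when
    every task runs on its own processor as soon as its inputs arrive.
    """
    succs = defaultdict(list)
    indeg = defaultdict(int)
    for (u, v), comm in edges.items():
        succs[u].append((v, comm))
        indeg[v] += 1

    finish = {}                       # earliest finish time per task
    earliest_start = defaultdict(int) # latest data-arrival time seen so far
    ready = [t for t in tasks if indeg[t] == 0]
    while ready:
        u = ready.pop()
        finish[u] = earliest_start[u] + tasks[u]
        for v, comm in succs[u]:
            earliest_start[v] = max(earliest_start[v], finish[u] + comm)
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return max(finish.values())

# Diamond graph: a -> b, a -> c, b -> d, c -> d
tasks = {"a": 2, "b": 3, "c": 1, "d": 2}
edges = {("a", "b"): 1, ("a", "c"): 1, ("b", "d"): 1, ("c", "d"): 1}
print(parallel_time(tasks, edges))  # 9 = a(2) + comm(1) + b(3) + comm(1) + d(2)
```

DSC's contribution, per the passage, is avoiding this full O(v + e) recomputation: after each clustering step (which zeroes the communication cost of one edge), it updates the schedule and parallel time in O(log v), giving the O((v + e) log v) total.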