Handbook of Combinatorial Optimization: Supplement Volume B, Volume 2
Ding-Zhu Du, Panos M. Pardalos
Springer Science & Business Media, Aug 18, 2006 - 394 pages

Combinatorial (or discrete) optimization is one of the most active fields at the interface of operations research, computer science, and applied mathematics. Combinatorial optimization problems arise in various applications, including communications network design, VLSI design, machine vision, airline crew scheduling, corporate planning, computer-aided design and manufacturing, database query design, cellular telephone frequency assignment, constraint-directed reasoning, and computational biology. Furthermore, combinatorial optimization problems occur in many diverse areas such as linear and integer programming, graph theory, artificial intelligence, and number theory. All these problems, when formulated mathematically as the minimization or maximization of a certain function defined on some domain, share the common property of discreteness.

Historically, combinatorial optimization starts with linear programming. Linear programming has an entire range of important applications, including production planning and distribution, personnel assignment, finance, allocation of economic resources, circuit simulation, and control systems. Leonid Kantorovich and Tjalling Koopmans received the Nobel Prize (1975) for their work on the optimal allocation of resources. Two important discoveries, the ellipsoid method (1979) and interior point approaches (1984), both provide polynomial-time algorithms for linear programming. These algorithms have had a profound effect on combinatorial optimization. Many polynomial-time solvable combinatorial optimization problems are special cases of linear programming (e.g. matching and maximum flow). In addition, linear programming relaxations are often the basis for many approximation algorithms for solving NP-hard problems (e.g. dual heuristics).
From inside the book
Results 1-5 of 49
Page 2
... parameter, which is an upper bound on the difference between the objective value of an optimal solution to the instance and that of a solution returned by the data correcting algorithm. Note that this is 2 D. Ghosh, B. Goldengorin, and ...
Page 3
... parameter. The discussion here is for a minimization problem; the maximization problem can be dealt with in a similar manner. Let us assume a partition of the domain D. Let us further assume that for each of the sub-domains of D, we are ...
Page 8
... parameter can be compared with a suitably defined distance measure between two cost vectors, (or equivalently, two instances). Consider a subproblem in the tree obtained by normal implicit enumeration. The problem instance that is being ...
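The snippet above compares the accuracy parameter with a distance measure between two cost vectors. A minimal sketch of that idea, under assumptions not stated in the snippet: for a 0-1 linear objective minimized over a common feasible set, the optimal values of two instances can differ by at most the componentwise (l1) distance between their cost vectors, so "correcting" an instance to a nearby, easier one is acceptable whenever that distance stays within the accuracy parameter. The function names and the toy instance are illustrative, not the book's actual notation.

```python
from itertools import product

def opt_value(c, feasible):
    # Brute-force minimum of the 0-1 linear objective c.x over the feasible set.
    return min(sum(ci * xi for ci, xi in zip(c, x)) for x in feasible)

def l1_distance(c, c_easy):
    # Componentwise distance between two cost vectors; for 0-1 variables this
    # bounds how far apart the two optimal objective values can be.
    return sum(abs(ci - ei) for ci, ei in zip(c, c_easy))

# Toy common feasible set: all nonempty 0-1 vectors of length 3.
feasible = [x for x in product((0, 1), repeat=3) if any(x)]
c, c_easy = [3, 1, 4], [3, 1, 2]

gap = abs(opt_value(c, feasible) - opt_value(c_easy, feasible))
assert gap <= l1_distance(c, c_easy)  # the correction bound holds
```

If `l1_distance(c, c_easy)` does not exceed the accuracy parameter, solving the corrected instance yields a solution whose value is within that parameter of the original optimum.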
Page 15
... parameter is set to 0, then the enumeration tree constructed by DC will be the one shown in Figure 3 and evaluates 8 subproblems. However if the value of is set to 1, then enumeration stops after node 4, since the lower bound obtained ...
Page 27
... parameter reduced to If such a correction is not possible, i.e. if exceeds the accuracy parameter, then we branch on a variable to partition the interval [S, T] into two intervals and This branching rule was proposed in Goldengorin [10] ...
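The snippets above describe the data-correcting algorithm's pruning rule: enumeration of a subproblem stops as soon as its lower bound comes within the accuracy parameter of the best value found, which guarantees the returned solution is within that parameter of optimal. A minimal sketch of this rule for a generic minimization tree, assuming user-supplied `lower_bound`, `branch`, `evaluate`, and `is_leaf` callbacks; all names and the toy instance are illustrative placeholders, not the book's actual interface.

```python
def epsilon_branch_and_bound(root, epsilon, lower_bound, branch, evaluate, is_leaf):
    """Minimize over the tree rooted at `root`; the returned value exceeds
    the true minimum by at most `epsilon` (the accuracy parameter)."""
    best_value, best_solution = float("inf"), None
    stack = [root]
    while stack:
        node = stack.pop()
        if is_leaf(node):
            value = evaluate(node)
            if value < best_value:
                best_value, best_solution = value, node
            continue
        # Prune: no descendant can improve on best_value by more than epsilon.
        if lower_bound(node) >= best_value - epsilon:
            continue
        stack.extend(branch(node))
    return best_solution, best_value

# Toy instance: for each index i choose bit 1 (cost a[i]) or bit 0 (cost b[i]);
# a node is the tuple of bits fixed so far.
a, b = [3, 1, 4], [2, 5, 1]
n = 3

def cost(i, bit):
    return a[i] if bit else b[i]

def lower_bound(node):
    fixed = sum(cost(i, bit) for i, bit in enumerate(node))
    free = sum(min(a[i], b[i]) for i in range(len(node), n))
    return fixed + free

def branch(node):
    return [node + (0,), node + (1,)]

def evaluate(node):
    return sum(cost(i, bit) for i, bit in enumerate(node))

def is_leaf(node):
    return len(node) == n

sol, val = epsilon_branch_and_bound((), 0, lower_bound, branch, evaluate, is_leaf)
# With epsilon = 0 the search is exact: val equals sum(min(a[i], b[i])) = 4.
```

Raising `epsilon` above 0 prunes more subproblems at the cost of a (bounded) loss in solution quality, which is exactly the trade-off the snippets describe for the DC enumeration tree.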
Contents
The Steiner Ratio of Banach-Minkowski Space: A Survey | 55 |
Probabilistic Verification and Non-Approximability | 82 |
Steiner Trees in Industry, Xiuzhen Cheng, Yingshu Li, Ding-Zhu Du, and Hung Q. Ngo | 193 |
Network-based Model and Algorithms in Data Mining | 217 |
The Generalized Assignment Problem and Extensions, Dolores Romero Morales and H. Edwin Romeijn | 259 |
Additional Approaches to the | 297 |
Concluding Remarks | 304 |
Optimal Rectangular Partitions, Xiuzhen Cheng, Ding-Zhu Du, Joon-Mo Kim, and Lu Ruan | 329 |
Introduction | 330 |
Author Index | 370 |
Subject Index | 381 |
Other editions - View all
Handbook of Combinatorial Optimization, Volume 2 Dingzhu Du, Panos M. Pardalos Limited preview - 1998
Handbook of Combinatorial Optimization: Supplement Volume B Ding-Zhu Du, Panos M. Pardalos No preview available - 2011
Common terms and phrases
agent applied approximation algorithms approximation scheme Arora assignment problem Banach-Minkowski capacity constraints CDS construction checkable cluster clusterhead combinatorial optimization complexity Computer Science conjecture connected dominating set consider convex corresponding cost data correcting algorithm dataset decoding defined denote distribution edges elements encoding Feige graph product greedy heuristic guillotine Hadamard code Håstad holographic codes independent sets input Journal Lemma length linear lower bound market graph matrix maximum clique minimal minimum spanning tree multi-degree neighbors Neural Networks nodes non-approximability NP complete NP-hard obtain Operations Research optimal solution optimization problems parameters performance ratio polynomial polynomial-time approximation polynomial-time approximation scheme probabilistic problem instances procedure proof prove random rectangular partition rectilinear reduce Romero Morales Section segment solve space SPLP Steiner minimum tree Steiner points Steiner ratio Steiner tree problem subproblems subset tasks techniques Theorem Theory upper bound variables vector verifier vertex vertices WCDS