Set of related approximation algorithms for the bin packing problem. From Wikipedia, the free encyclopedia.
The Karmarkar–Karp (KK) bin packing algorithms are several related approximation algorithms for the bin packing problem.[1] In the bin packing problem, items of different sizes must be packed into bins of identical capacity, such that the total number of bins is as small as possible. Finding the optimal solution is computationally hard. Karmarkar and Karp devised an algorithm that runs in polynomial time and finds a solution with at most OPT + O(log²(OPT)) bins, where OPT is the number of bins in the optimal solution. They also devised several other algorithms with slightly different approximation guarantees and run-time bounds.
The KK algorithms were considered a breakthrough in the study of bin packing: the previously-known algorithms found multiplicative approximations, where the number of bins was at most r·OPT + s for some constants r > 1 and s, or at most (1+ε)·OPT + 1 for some fixed ε > 0.[2] The KK algorithms were the first ones to attain a sublinear additive approximation.
The input to a bin-packing problem is a set of items of different sizes, a1,...,an. The following notation is used:
- n - the number of items.
- m - the number of different item sizes; for each size s, ns denotes the number of items of size s.
- B - the bin capacity.
Given an instance I, we denote:
- OPT(I) - the minimal number of bins needed to pack all items of I.
- FOPT(I) - the sum of all item sizes in I, divided by B; this is the "fractional optimum", the number of bins needed if items could be cut and split between bins.
Obviously, FOPT(I) ≤ OPT(I).
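As a concrete illustration, here is a minimal Python sketch of the fractional lower bound FOPT; the function name `fopt` is ours, not from the source:

```python
def fopt(items, B):
    """Fractional lower bound: total item size divided by bin capacity.
    Any packing needs at least ceil(fopt(items, B)) bins."""
    return sum(items) / B

# Example: 7 items, bin capacity 10.
items = [6, 5, 5, 4, 3, 2, 1]
lower_bound = fopt(items, 10)   # 26 / 10 = 2.6, so at least 3 bins are needed
```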
The KK algorithms essentially solve the following configuration integer linear program:

minimize 1·x   subject to   A·x ≥ n,  x ≥ 0,  x integral.

Here, A is a matrix with m rows. Each column of A represents a feasible configuration - a multiset of item sizes, such that the sum of all these sizes is at most B. The set of configurations is C. n is the vector whose coordinate for each size s is ns, the number of items of that size. x is a vector of size |C|. Each element xc of x represents the number of times configuration c is used.
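For small instances, the set of configurations can be enumerated explicitly. The following Python sketch (helper name `configurations` is ours; integer item sizes are assumed) lists every feasible configuration as a vector of counts, one count per size - i.e., the columns of A:

```python
def configurations(sizes, B):
    """All feasible configurations for the given distinct item sizes:
    tuples a where a[i] is the number of items of size sizes[i],
    with total size at most B.  The count is exponential in general --
    the whole point of the KK machinery is to avoid listing them all."""
    def rec(i, remaining):
        if i == len(sizes):
            yield ()
            return
        for cnt in range(remaining // sizes[i] + 1):
            for rest in rec(i + 1, remaining - cnt * sizes[i]):
                yield (cnt,) + rest
    # drop the empty configuration (a bin with no items)
    return [c for c in rec(0, B) if any(c)]

# Example: sizes 5 and 4, bin capacity 10.
cfgs = configurations([5, 4], 10)
# yields (0,1), (0,2), (1,0), (1,1), (2,0)
```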
There are two main difficulties in solving this problem. First, it is an integer linear program, which is computationally hard to solve. Second, the number of variables is |C| - the number of configurations, which may be enormous. The KK algorithms cope with these difficulties using several techniques, some of which were already introduced by de la Vega and Lueker.[2] Here is a high-level description of the algorithm (where I is the original instance):
1. Remove from I all items smaller than a threshold; call the resulting instance J.
2. Group the items of J into a bounded number of size classes, rounding sizes up within each group; call the rounded instance K.
3. Construct the configuration linear program of K, without the integrality constraints, and solve it approximately.
4. Round the fractional solution into an integral packing of K, which yields a packing of J.
5. Add the removed small items greedily into the packing of J.
Below, we describe each of these steps in turn.
The motivation for removing small items is that, when all items are large, the number of items in each bin must be small, so the number of possible configurations is (relatively) small. We pick some constant g ∈ (0,1), and remove from the original instance all items smaller than g·B. Let J be the resulting instance. Note that in J, each bin can contain at most 1/g items. We pack J and get a packing with some bJ bins.
Now, we add the small items into the existing bins in an arbitrary order, as long as there is room. When there is no more room in the existing bins, we open a new bin (as in next-fit bin packing). Let bI be the number of bins in the final packing. Then:

bI ≤ max(bJ, (1+2g)·OPT(I) + 1).
Proof. If no new bins are opened, then the number of bins remains bJ. If a new bin is opened, then all bins except maybe the last one contain a total size of at least (1−g)·B, so the total instance size is at least (bI − 1)·(1−g)·B. Therefore, FOPT(I) ≥ (bI − 1)·(1−g), so the optimal solution needs at least (bI − 1)·(1−g) bins. So bI ≤ OPT(I)/(1−g) + 1 ≤ (1+2g)·OPT(I) + 1, using 1/(1−g) ≤ 1+2g for g ≤ 1/2. In particular, by taking g = 1/n, we get:

bI ≤ max(bJ, OPT + 2·OPT/n + 1) ≤ max(bJ, OPT + 3),

since OPT ≤ n. Therefore, it is common to assume that all items are larger than 1/n.[4]
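The two steps above (packing the large items, then adding the small items greedily) can be sketched in Python as follows. The subroutine that packs the large items is passed in as a parameter; plain first-fit decreasing is used below only as a hypothetical stand-in for the LP-based packing of the later sections, and the helper names are ours:

```python
def pack_large_then_small(items, B, g, pack_large):
    """Split the items at threshold g*B, pack the large ones with the
    given subroutine, then add the small ones greedily into any bin
    with room, opening a new bin only when none has room."""
    large = [s for s in items if s >= g * B]
    small = [s for s in items if s < g * B]
    bins = pack_large(large, B)           # a list of lists of item sizes
    for s in small:
        for b in bins:                    # any existing bin with room will do
            if sum(b) + s <= B:
                b.append(s)
                break
        else:
            bins.append([s])              # no room anywhere: open a new bin
    return bins

def first_fit_decreasing(items, B):
    """Simple stand-in packer for the large items (not the KK method)."""
    bins = []
    for s in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + s <= B:
                b.append(s)
                break
        else:
            bins.append([s])
    return bins

bins = pack_large_then_small([6, 5, 5, 4, 3, 2, 1], 10, 0.3, first_fit_decreasing)
```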
The motivation for grouping items is to reduce the number of different item sizes, and thereby the number of constraints in the configuration LP. The general grouping process is:
1. Order the items by descending size.
2. Partition them into groups.
3. In each group, round the sizes up to the largest size in the group.
There are several different grouping methods.
Let k ≥ 1 be an integer parameter. Put the largest k items in group 1; the next-largest k items in group 2; and so on (the last group might have fewer than k items). Let J be the original instance. Let K′ be the first group (the group of the k largest items), and K the grouped instance without the first group. Then:
- In K, the number of different sizes is at most n/k.
- OPT(K) ≤ OPT(J), since each item of K, after rounding, is no larger than the items of the preceding group of J.
Therefore, OPT(J) ≤ OPT(K) + k. Indeed, given a solution to K with bK bins, we can get a solution to J with at most bK + k bins: each original item of groups 2, 3, ... fits into the place of its rounded copy, and the k items of K′ are put into at most k additional bins.
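A minimal Python sketch of linear grouping (the helper name is ours): the items are sorted in descending order, the first group of k items is set aside, and every later group is rounded up to its own largest size.

```python
def linear_grouping(sizes, k):
    """Return (first_group, rounded): the k largest items, set aside to be
    packed separately, and the remaining items with each group of k
    rounded up to the largest size in that group."""
    s = sorted(sizes, reverse=True)
    first_group = s[:k]                      # packed on its own: <= k extra bins
    rounded = []
    for i in range(k, len(s), k):
        group = s[i:i + k]
        rounded += [group[0]] * len(group)   # group[0] is the group's largest size
    return first_group, rounded

fg, rounded = linear_grouping([9, 8, 7, 6, 5, 4, 3], 2)
# fg = [9, 8]; rounded = [7, 7, 5, 5, 3] -- only 3 distinct sizes remain
```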
Let k ≥ 1 be an integer parameter. Geometric grouping proceeds in two steps:
1. Partition the instance J into several instances J0, J1, ..., such that all sizes in instance Jr are in the interval (B/2^(r+1), B/2^r]. If all items are larger than g·B, the number of instances is at most log₂(1/g).
2. On each instance Jr, perform linear grouping with parameter k·2^r. Let K′r and Kr be the resulting instances; let K′ be the union of the K′r and K the union of the Kr.
Then, the number of different sizes is bounded as follows:
- For each r, the number of different sizes in Kr is at most nr/(k·2^r), where nr is the number of items in Jr. Since each item in Jr is larger than B/2^(r+1), we have nr < 2^(r+1)·FOPT(Jr), so the number of sizes in Kr is less than 2·FOPT(Jr)/k. Summing over all r, the number of different sizes in K is less than 2·FOPT(J)/k.
The number of bins is bounded as follows:
- For each r, the set-aside group K′r contains at most k·2^r items, each of size at most B/2^r, so 2^r of them fit in a bin and K′r fits into at most k bins. Summing over the at most log₂(1/g) instances gives OPT(J) ≤ OPT(K) + k·log₂(1/g).
Let k ≥ 1 be an integer parameter. Order the items by descending size. Partition them into groups such that the total size in each group is at least k·B. Since the size of each item is less than B, the number of items in each group is at least k+1. The number of items in each group is weakly increasing. If all items are larger than g·B, then (as each group's total size is less than (k+1)·B) the number of items in each group is at most (k+1)/g. In each group, only the larger items are rounded up. This can be done such that both the number of different sizes in the rounded instance and the number of extra bins needed are kept small, similarly to geometric grouping.
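The partition step described above (descending order, consecutive groups of total size at least k·B) can be sketched in Python as follows; the within-group rounding is omitted, and the helper name is ours:

```python
def partition_by_total_size(sizes, B, k):
    """Partition the sizes, in descending order, into consecutive groups
    whose total size is at least k*B (the last group may fall short)."""
    s = sorted(sizes, reverse=True)
    groups, current, total = [], [], 0
    for x in s:
        current.append(x)
        total += x
        if total >= k * B:          # group is full: close it
            groups.append(current)
            current, total = [], 0
    if current:                     # leftover items form the last group
        groups.append(current)
    return groups

groups = partition_by_total_size([6, 5, 5, 4, 3, 2, 1], 10, 1)
# groups: [[6, 5], [5, 4, 3], [2, 1]]
```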
We consider the configuration linear program without the integrality constraints:

minimize 1·x   subject to   A·x ≥ n,  x ≥ 0.

Here, we are allowed to use a fractional number of each configuration.
Denote the optimal solution of the linear program by LOPT. The following relations are obvious:

FOPT(I) ≤ LOPT(I) ≤ OPT(I),

since the fractional LP is a relaxation of the integral ILP, and every fractional packing uses at least FOPT(I) bins.
A solution to the fractional LP can be rounded to an integral solution as follows. Suppose we have a solution x to the fractional LP. We round x into a solution for the integral ILP as follows.
- Take a basic feasible solution of the LP; it has at most m nonzero variables xc.
- Round each xc down to the nearest integer. This packs all items except those covered only by the fractional parts, which span at most m configurations.
- Pack the remaining items greedily; the items of each fractional configuration fit into one new bin, so at most m new bins are added.
This also implies that OPT(I) ≤ LOPT(I) + m.
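A simplified version of this rounding can be sketched in Python (helper names are ours): configurations are count vectors over the size classes, the fractional solution is rounded down, and the leftover items are packed greedily.

```python
def round_lp_solution(x, sizes, demand, B):
    """Round a fractional configuration-LP solution down, then pack the
    leftover items greedily.  x maps configurations (count tuples) to
    fractional usage; demand[i] is the number of items of sizes[i]."""
    bins = []
    covered = [0] * len(sizes)
    for config, val in x.items():
        for _ in range(int(val)):            # use floor(val) whole copies
            bins.append(config)
            covered = [c + a for c, a in zip(covered, config)]
    # items not covered by the rounded-down part:
    residual = []
    for i, (need, have) in enumerate(zip(demand, covered)):
        residual += [sizes[i]] * max(0, need - have)
    # pack the residual items first-fit
    extra = []
    for s in sorted(residual, reverse=True):
        for b in extra:
            if sum(b) + s <= B:
                b.append(s)
                break
        else:
            extra.append([s])
    return bins, extra

# Two sizes (5 and 4), two items of each, capacity 10.
bins, extra = round_lp_solution(
    {(2, 0): 1.0, (0, 2): 0.5, (0, 1): 1.0}, [5, 4], [2, 2], 10)
# bins = [(2, 0), (0, 1)]; one item of size 4 is left over -> extra = [[4]]
```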
The main challenge in solving the fractional LP is that it may have a huge number of variables - a variable for each possible configuration.
The dual linear program of the fractional LP is:

maximize n·y   subject to   Aᵀ·y ≤ 1,  y ≥ 0.

It has m variables y1,...,ym, and |C| constraints - a constraint for each configuration. It has the following economic interpretation. For each size s, we should determine a nonnegative price ys. Our profit is the total price of all items. We want to maximize the profit n·y subject to the constraints that the total price of the items in each configuration is at most 1. This LP now has only m variables, but a huge number of constraints. Even listing all the constraints is infeasible.
Fortunately, it is possible to solve the problem up to any given precision without listing all the constraints, by using a variant of the ellipsoid method. This variant gets as input a separation oracle: a function that, given a vector y ≥ 0, returns one of the following two options:
- Assert that y is feasible, that is, a·y ≤ 1 for every configuration a; or
- Assert that y is infeasible, and return a violated constraint, that is, a configuration a with a·y > 1.
The ellipsoid method starts with a large ellipsoid that contains the entire feasible domain. At each step t, it takes the center yt of the current ellipsoid, and sends it to the separation oracle:
- If the oracle says that yt is feasible, then we make an "optimality cut": we cut off all points y with n·y < n·yt, since we look for points with a larger objective value.
- If the oracle says that yt is infeasible and returns a violated configuration a, then we make a "feasibility cut": we cut off all points that violate this constraint.
After making a cut, we construct a new, smaller ellipsoid. It can be shown that this process converges to an approximate solution, in a number of iterations polynomial in m and in the logarithm of the required accuracy.
We are given some m non-negative numbers y1,...,ym. We have to decide between the following two options:
- every configuration a satisfies a·y ≤ 1 (y is feasible); or
- some configuration a satisfies a·y > 1 (a violated constraint).
This problem can be solved by solving a knapsack problem, where the item values are y1,...,ym, the item weights are the corresponding sizes s1,...,sm, and the weight capacity is B (the bin size): y is feasible if and only if the maximum knapsack value is at most 1, and otherwise the maximizing configuration is a violated constraint.
The knapsack problem can be solved by dynamic programming in pseudo-polynomial time O(m·V), where m is the number of inputs and V is the number of different possible values. To get a polynomial-time algorithm, we can solve the knapsack problem approximately, using input rounding. Suppose we want a solution with tolerance δ. We can round each of y1,...,ym down to the nearest multiple of δ/n. Then, the number of possible values between 0 and 1 is n/δ, and the run-time is O(m·n/δ). Each value loses less than δ/n, and a configuration contains at most n items, so the solution is at least the optimal solution minus δ.
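The exact separation oracle can be sketched as an unbounded-knapsack dynamic program over integer capacities. Integer sizes are assumed, the multiplicity caps ns are ignored for simplicity, and the helper name is ours:

```python
def separation_oracle(sizes, y, B):
    """Decide whether some configuration violates a.y <= 1, by solving a
    knapsack with values y[i], integer weights sizes[i], capacity B.
    Returns a violated configuration (as a list of size-indices, with
    repetition) or None if y is feasible for the dual LP."""
    best = [0.0] * (B + 1)                 # best[w]: max value with weight <= w
    choice = [[] for _ in range(B + 1)]    # choice[w]: indices achieving best[w]
    for w in range(1, B + 1):
        best[w] = best[w - 1]              # allow slack capacity
        choice[w] = choice[w - 1]
        for i, s in enumerate(sizes):
            if s <= w and best[w - s] + y[i] > best[w]:
                best[w] = best[w - s] + y[i]
                choice[w] = choice[w - s] + [i]
    return choice[B] if best[B] > 1 else None

# Prices 0.6 and 0.5 for sizes 5 and 4: two items of size 5 cost 1.2 > 1,
# so the configuration {5, 5} is a violated constraint.
violated = separation_oracle([5, 4], [0.6, 0.5], 10)
```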
The ellipsoid method should be adapted to use an approximate separation oracle. Given the current ellipsoid center yt:
- Run the approximate separation oracle with tolerance δ. If it finds a configuration a with a·yt > 1, make a feasibility cut as before.
- Otherwise, every configuration satisfies a·yt ≤ 1 + δ, so yt/(1+δ) is feasible; make an optimality cut.
Using the approximate separation oracle gives a feasible solution y* to the dual LP, with n·y* ≥ LOPT − δ, after at most Q iterations, where Q is polynomial in m and in the logarithm of n/δ. Since each iteration runs the approximate oracle once, the total run-time of the ellipsoid method with the approximate separation oracle is O(Q·m·n/δ).
During the ellipsoid method, we use at most Q constraints of the form a·y ≤ 1. All the other constraints can be eliminated, since they have no effect on the outcome y* of the ellipsoid method. We can eliminate even more constraints. It is known that, in any LP with m variables, there is a set of m constraints that is sufficient for determining the optimal solution (that is, the optimal value is the same even if only these m constraints are used). We can repeatedly run the ellipsoid method as above, each time trying to remove a specific set of constraints. If the resulting error is at most δ, then we remove these constraints permanently. It can be shown that the number of eliminations needed is polynomial in m, so the accumulated error remains proportional to δ. If we try sets of constraints deterministically, then in the worst case only one out of m trials succeeds, so the ellipsoid method must be re-run up to m times per elimination. If we choose the constraints to remove at random, then the expected number of re-runs is smaller.
Finally, we have a reduced dual LP, with only m variables and m constraints. The optimal value of the reduced LP is at least LOPT − h, where h is the total accumulated tolerance.
By the LP duality theorem, the minimum value of the primal LP equals the maximum value of the dual LP, which we denoted by LOPT. Once we have a reduced dual LP, we take its dual, and get a reduced primal LP. This LP has only m variables - corresponding to only m out of the |C| configurations. The maximum value of the reduced dual LP is at least LOPT − h. It can be shown[clarification needed] that the optimal solution of the reduced primal LP is at most LOPT + h. The solution gives a near-optimal bin packing, using at most m configurations.
The total run-time of the deterministic algorithm, when all items are larger than g·B, is polynomial in m, n and 1/δ; it is dominated by the repeated runs of the ellipsoid method needed for constraint elimination. The expected total run-time of the randomized algorithm is smaller, since fewer re-runs of the ellipsoid method are needed in expectation.
Karmarkar and Karp presented three algorithms that use the above techniques with different parameters. The run-time of all these algorithms depends on a function T(m, n), which is a polynomial function describing the time it takes to solve the fractional LP with tolerance h = 1; for the deterministic version, this is the bound given in the previous section.
Let ε ∈ (0,1) be a constant representing the desired approximation accuracy.
All in all, the number of bins is in (1+ε)·OPT + O(1/ε²) and the run-time is in O(n·log n + T(1/ε², n)). By choosing ε = OPT^(−1/3) we get OPT + O(OPT^(2/3)).
Let g ∈ (0,1) be a real parameter and k ≥ 1 an integer parameter, both to be determined later.
The run-time is in O(n·log n + T(m, n)).
Now, if we choose k = 2 and g = 1/FOPT(I), then each round of grouping adds

O(log FOPT(I))

extra bins, and hence, over the rounds of the algorithm, the additive error accumulates to

O(log² FOPT(I)),

so the total number of bins is in OPT + O(log²(FOPT(I))). The run-time is O(n·log n + T(m, n)).
The same algorithm can be used with different parameters to trade off run-time with accuracy: for a parameter a ∈ (0,1), suitable choices of g and k yield a larger additive error but a smaller run-time.
The third algorithm is useful when the number of sizes m is small (see also high-multiplicity bin packing). Its number of bins exceeds OPT by an additive term depending only on m, and its run-time is polynomial in m and n.
The KK techniques were improved later, to provide even better approximations.
Rothvoss[4] uses the same scheme as Algorithm 2, but with a different rounding procedure in Step 2. He introduced a "gluing" step, in which small items are glued together to yield a single larger item. This gluing can be used to increase the smallest item size, and hence the parameter g. Substituting the larger value of g in the guarantee of Algorithm 2 yields a packing with at most

OPT + O(log(OPT)·log log(OPT))

bins.
Hoberg and Rothvoss[5] use a similar scheme in which the items are first packed into "containers", and then the containers are packed into bins. Their algorithm needs at most OPT + O(log(OPT)) bins.