dc.description.abstract | In many optimization problems, similar linear programming (LP) problems arise at the nodes of the branch-and-bound trees used to solve integer (mixed or pure, deterministic or stochastic) programming problems. Similar LP problems also occur in problem domains where the objective function and constraint coefficients vary due to uncertainties in the operating conditions. In this report, we present a regression technique, termed the Boost-LP algorithm, that modifies boosting trees to learn a set of functions mapping the objective function and constraints of such an LP system to its decision variables. Matrix transformations and geometric properties of boosting trees are used to provide theoretical performance guarantees on the predicted values. The standard form of the loss function is altered to reduce the possibility of generating infeasible LP solutions. Experimental results on three problems, one each in scheduling, routing, and planning, demonstrate the effectiveness of the Boost-LP algorithm in providing significant computational benefits over standard optimization solvers without generating solutions that deviate appreciably from the optimal values. | en_US |
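As a rough illustration of the idea only (not the report's actual Boost-LP method), the sketch below uses off-the-shelf gradient-boosted regression trees from scikit-learn to map the varying objective and right-hand-side coefficients of a family of random LPs to their optimal decision variables. The problem dimensions, the use of MultiOutputRegressor, and the standard squared-error loss are all assumptions standing in for the modified boosting trees and altered loss described in the abstract.

```python
# Illustrative sketch only: approximating LP solutions with off-the-shelf
# boosted regression trees (a stand-in for the report's modified Boost-LP trees).
import numpy as np
from scipy.optimize import linprog
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
n_vars, n_cons, n_samples = 4, 3, 500

# Fixed constraint matrix; the objective c and right-hand side b vary across
# instances, mimicking a family of similar LPs (e.g., branch-and-bound nodes).
A = rng.uniform(0.5, 2.0, size=(n_cons, n_vars))

X, Y = [], []
for _ in range(n_samples):
    c = rng.uniform(1.0, 5.0, size=n_vars)     # maximize c^T x ...
    b = rng.uniform(5.0, 15.0, size=n_cons)    # ... subject to A x <= b, x >= 0
    # linprog minimizes, so pass -c to maximize c^T x.
    res = linprog(c=-c, A_ub=A, b_ub=b, bounds=[(0, None)] * n_vars, method="highs")
    if res.success:
        X.append(np.concatenate([c, b]))       # features: varying LP coefficients
        Y.append(res.x)                        # targets: optimal decision variables
X, Y = np.array(X), np.array(Y)

# One boosted-tree regressor per decision variable (squared-error loss here;
# the report instead alters the loss to discourage infeasible predictions).
model = MultiOutputRegressor(GradientBoostingRegressor(n_estimators=200, max_depth=3))
model.fit(X[:400], Y[:400])

pred = model.predict(X[400:])
print("mean absolute deviation from optimal x:", np.abs(pred - Y[400:]).mean())
```

Once trained, such a regressor returns approximate solutions with a single prediction call per instance, which is the source of the computational savings the abstract refers to; the report's contribution lies in the tree modifications, loss alteration, and performance guarantees, none of which are reproduced in this sketch.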