Stochastic Combinatorial Optimization with Risk
We consider general combinatorial optimization problems that can be formulated as minimizing the weight w^T x of a feasible solution x over an arbitrary feasible set. For these problems we describe a broad class of corresponding stochastic problems in which the weight vector W has independent random components, unknown at the time of solution. A natural and important objective that incorporates risk in this stochastic setting is to look for a feasible solution whose stochastic weight has a small tail or a small linear combination of mean and standard deviation. Our models can be equivalently reformulated as deterministic nonconvex programs for which no efficient algorithms are known. In this paper, we make progress on these hard problems.

Our results are several efficient general-purpose approximation schemes. They use as a black box the (exact or approximate) solution to the underlying deterministic combinatorial problem and thus apply immediately to arbitrary combinatorial problems. For example, from an available δ-approximation algorithm for the deterministic problem, we construct a δ(1 + ε)-approximation algorithm for the stochastic problem that invokes the deterministic algorithm a number of times that is logarithmic in the input size and polynomial in 1/ε, for any desired accuracy level ε > 0. The algorithms are based on a geometric analysis of the curvature and approximability of the nonlinear level sets of the objective functions.
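To make the black-box idea concrete, the following Python sketch is an illustration only, not the paper's construction. It assumes a hypothetical oracle det_oracle(weights) that exactly or approximately minimizes a linear objective over the feasible set, and it sweeps a geometric grid of trade-off multipliers b, solving the linear surrogate mu + b*var at each grid point and keeping the solution with the smallest mean-plus-c-standard-deviations value. The grid bounds b_lo and b_hi are assumptions of the sketch; the number of oracle calls is logarithmic in the multiplier range and polynomial in 1/ε, in the spirit of the guarantee stated above.

import math

def approx_mean_risk(mu, var, c, det_oracle, eps=0.1, b_lo=1e-6, b_hi=1e6):
    # Hedged sketch: approximately minimize  mu.x + c*sqrt(var.x)  over a
    # combinatorial feasible set, given only a black-box linear oracle
    # det_oracle(weights) -> x (a 0/1 vector) that (approximately) minimizes
    # weights.x over the same set.  The multiplier grid below is an
    # assumption of this sketch, not the paper's construction.
    def f(x):
        mean = sum(m * xi for m, xi in zip(mu, x))
        variance = sum(v * xi for v, xi in zip(var, x))
        return mean + c * math.sqrt(variance)

    best_x, best_val = None, float("inf")
    b = b_lo
    while b <= b_hi:
        # Linear surrogate for the current trade-off between mean and variance.
        weights = [m + b * v for m, v in zip(mu, var)]
        x = det_oracle(weights)          # one black-box call per grid point
        val = f(x)
        if val < best_val:
            best_x, best_val = x, val
        b *= 1.0 + eps                   # geometric grid: O(log(b_hi/b_lo)/eps) calls
    return best_x, best_val


# Toy usage: choose exactly k of n items; the deterministic oracle just
# picks the k smallest-weight items (a stand-in for any combinatorial solver).
def pick_k_smallest(weights, k=2):
    order = sorted(range(len(weights)), key=lambda i: weights[i])
    x = [0] * len(weights)
    for i in order[:k]:
        x[i] = 1
    return x

mu = [3.0, 1.0, 2.0, 4.0]
var = [0.1, 9.0, 0.5, 0.2]
solution, value = approx_mean_risk(mu, var, c=1.0, det_oracle=pick_k_smallest)
print(solution, round(value, 3))

Any solver for the deterministic problem could play the role of det_oracle, for instance a shortest-path or minimum spanning tree routine, which is what makes a black-box scheme of this kind general-purpose.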