



Annals of Operations Research manuscript No. (will be inserted by the editor)

Distribution-dependent robust linear optimization with

applications to inventory control

Seong-Cheol Kang · Theodora S. Brisimi ·

Ioannis Ch. Paschalidis

the date of receipt and acceptance should be inserted later

Abstract This paper tackles linear programming problems with data uncertainty and applies the resulting methodology to an important inventory control problem. Each element of the constraint matrix is subject to uncertainty and is modeled as a random variable with a bounded support. The classical robust optimization approach to this problem yields a solution with guaranteed feasibility. As this approach tends to be too conservative when applications can tolerate a small chance of infeasibility, one would be interested in obtaining a less conservative solution with a certain probabilistic guarantee of feasibility. A robust formulation in the literature produces such a solution, but it does not use any distributional information on the uncertain data. In this work, we show that the use of distributional information leads to an equally robust solution (i.e., under the same probabilistic guarantee of feasibility) but with a better objective value. In particular, by exploiting distributional information, we establish stronger upper bounds on the constraint violation probability of a solution. These bounds enable us to "inject" less conservatism into the formulation, which in turn yields a more cost-effective solution (by 50% or more in some numerical instances). To illustrate the effectiveness of our methodology, we consider a discrete-time stochastic inventory control problem with certain quality of service constraints. Numerical tests demonstrate that the use of distributional information in the robust optimization of the inventory control problem results in 36%–54% cost savings, compared to the case where such information is not used.

Research partially supported by the NSF under grants EFRI-0735974, CNS-1239021, IIS-1237022, by the DOE under grant DE-FG52-06NA27490, by the ARO under grants W911NF-11-1-0227 and W911NF-12-1-0390, and by the ONR under grant N00014-10-1-0952.

Seong-Cheol Kang
The Korea Transport Institute, Goyang-si, Korea, e-mail: [email protected]

Theodora S. Brisimi
Department of Electrical & Computer Eng., Boston University, Boston, MA 02215, USA, e-mail: [email protected]

Ioannis Ch. Paschalidis
Corresponding author. Department of Electrical & Computer Eng. and Division of Systems Eng., Boston University, Boston, MA 02215, USA, e-mail: [email protected], url: http://ionia.bu.edu/


Keywords Robust optimization, Linear programming, Data uncertainty, Inventory control, Quality-of-Service.

1 Introduction

Linear programming (LP) is a ubiquitous tool in manufacturing systems and service operations. Theory and methods are very mature, there are many excellent solvers to choose from, and LPs can be solved in polynomial time with interior-point methods. Alas, the world is neither (always) linear nor certain. In this paper we focus on the latter shortcoming of LP-based modeling, that is, the presence of uncertainty in the problem data.

Assuming certainty equivalence offers a way to deal with uncertainty: for every uncertain data element use a nominal value (usually its mean) and form a nominal formulation which remains an LP formulation. A solution obtained in this manner, however, is non-robust in the sense that even small changes in the problem data can easily render the solution infeasible. For some applications, such a solution could be useless.

For this reason, the classical robust linear optimization approach focused on a formulation, which we will refer to as the fat formulation, whose solution is immune to data uncertainty (i.e., protected against infeasibility). In the early 1970's, Soyster [Soy73] considered a convex optimization problem with a linear objective function and a set-inclusive constraint that requires nonnegative combinations of convex sets to be contained in another convex set. As a special case, Soyster noted that the problem can be viewed as an "inexact LP problem," where each column vector of the constraint matrix A is only known to belong to a convex set. In our terminology, it is the fat formulation of an LP problem where the constraint matrix has column-wise uncertainty. Soyster showed that the fat formulation can be recast as an LP formulation.

Ben-Tal and Nemirovski [BTN99] pointed out that the case of column-wise uncertainty used in [Soy73] is extremely conservative. They instead considered row-wise uncertainty, where the rows of the constraint matrix A are known to belong to given convex sets. In this case, they argued that the fat formulation does not necessarily lead to an LP formulation; for example, when the uncertainty sets for the rows of A are ellipsoids, the fat formulation turns out to be a conic quadratic problem.

Clearly, the guaranteed feasibility makes the fat formulations (i.e., the worst-case approach) of [Soy73] and [BTN99] suitable for applications in which infeasibility cannot be accepted at all (e.g., the design of engineering structures like the bridges considered in Ben-Tal and Nemirovski [BTN99]). When applications can tolerate a small chance of infeasibility, however, solutions from those formulations tend to be too conservative. This is especially the case if the worst case occurs very rarely. Hence for the latter class of applications, methods that produce less conservative solutions with certain probabilistic guarantees of feasibility could prove to be useful.

To that end, Bertsimas and Sim [BS04] considered a "relaxed" robust linear optimization. In contrast to the column-wise and row-wise uncertainty used in [Soy73] and in [BTN99], respectively, they considered element-wise uncertainty


where each uncertain element of the constraint matrix A is modeled as an independent bounded random variable. They assumed that the probability distributions of the uncertain elements are unknown except that they are symmetric. To construct a less conservative formulation than the fat formulation, which we will refer to as the robust formulation, they used a set of parameters that "restrict" the variability of the uncertain elements. They showed that their robust formulation can be recast as an LP formulation, which is favorable from a computational point of view. The difference between the objective values of the nominal formulation and their robust formulation was termed the price of robustness. For a probabilistic guarantee of feasibility of an optimal solution of their robust formulation, they established upper bounds on the constraint violation probability of the solution.

As in [BS04], the robust model used in this paper is a special case of the robust counterpart of a linear optimization model when uncertain coefficients are assumed to belong to a D-norm induced uncertainty set (see [BPS04]). Unlike the earlier work, however, we exploit distributional information on the uncertain elements in robust linear optimization. We will show that by using full or limited distributional information, one can obtain a better solution than earlier approaches, but with the same probabilistic guarantee of feasibility. The crux of the matter is that by exploiting distributional information, we can establish stronger bounds on the constraint violation probability. This enables us to "inject" less conservatism into the formulation, which in turn yields a more cost-effective solution. In particular, we establish three types of bounds on the constraint violation probability: (i) bounds that use the full distribution of the uncertain problem data, (ii) bounds that use the full distribution of the data and the optimal solution to the robust problem, and (iii) bounds that use up to the first two moments of the problem data.

Our motivation comes from the emerging abundance of data in many real-world applications. By mining these data suitably, one can obtain highly reliable distributional information and, as we will show, put it to good use. When data are not available and distributional information cannot be obtained, then the approach of [BS04] is appropriate. However, our work can help quantify the benefits that can result from data collection and implementation of estimation techniques for obtaining distributional information. We expect that in many settings these benefits can exceed the associated costs. In this spirit, one can think of the gain in objective value that stems from our robust optimization approach as the estimation discount on the price of robustness.

The work in this paper has parallels with ideas developed in the context of robust control. Using the terminology in [DDB95], we are interested in parametric uncertainty and consider a stochastic model of uncertainty rather than a deterministic model, which would necessitate a worst-case analysis. To illustrate the effectiveness of our robust optimization approach, we apply it to a stochastic inventory control problem.¹ Inventory control has, of course, a long history going back to the seminal results of Clark and Scarf [CS60]. We are interested in inventory control problems that enforce explicit quality of service (QoS) or service level constraints. Under stochastic demand and production models, such problems have been analyzed in Paschalidis and Liu [PL03], Bertsimas and Paschalidis [BP01], Paschalidis et al. [PLCP04], and Del Vecchio and Paschalidis [DVP06] using large deviations techniques. Yet, the analysis is complex and uses the special structure of the models to solve the corresponding large deviations problem. Formulating the inventory control problems as static optimization problems ignores the inherent uncertainty but has the advantage that many modeling complexities (e.g., lead times, ordering costs, etc.) can be more easily incorporated.

¹ For other applications of robust optimization, refer to Bertsimas et al. [BBC11] and Ben-Tal et al. [BTGN09].

Bertsimas and Thiele [BT06] addressed inventory control problems to minimize total ordering, holding, and shortage costs from the robust optimization perspective of [BS04]. A different robust model for inventory management was considered by See and Sim [SS10]. Bienstock and Ozbay [BO08] propose an algorithm to iteratively solve a robust min-max problem for setting the base-stock level in a single buffer under uncertain demand. Here, we consider an inventory control problem similar to one in [BT06], but our approach differs from theirs in two aspects. First, we incorporate certain QoS constraints into the problem instead of using shortage costs. QoS constraints are often used in managing supply chains, partly because shortage costs are hard to quantify. Our second, and more subtle, difference with [BT06] is the way we construct the robust formulation. This will be elaborated on in Section 3.2.

The paper is organized as follows. In Section 2, and in the form of background, we deal with robust optimization for an LP problem with element-wise uncertainty. We follow [BS04] to construct the robust formulation for the LP problem and derive its equivalent LP formulation. Using distributional information on the uncertain elements, we develop new stronger bounds on the constraint violation probability. We explain that stronger bounds lead to an equally robust solution with a better objective value (i.e., a more cost-effective solution under the same probabilistic guarantee of feasibility). The case of using limited distributional information in the form of the first and second moments, as well as full distributional information, is investigated. In Section 3, we consider a discrete-time stochastic inventory control problem with QoS constraints. We form the robust formulation of the problem and show that the optimal ordering quantities of this formulation correspond to a base-stock policy. Using the results of Section 2, bounds on the probability that the optimal ordering quantities violate the QoS constraints are developed. Through numerical tests, we demonstrate the cost-effectiveness of our approach. Finally, concluding remarks are given in Section 4.

Notational Conventions: Throughout the paper, we use boldface lowercase letters to denote vectors and boldface uppercase letters to represent matrices. Occasionally we use boldface uppercase Greek letters to denote vectors. All vectors are assumed to be column vectors. When we specify the components of a vector, however, we write x = (x1, . . . , xn) for the column vector x to save space. x′ represents the transpose of x; however, sometimes it also denotes a vector different from x. It will be clear from the context which is the case. The vector of all zeros is denoted by 0. A = (aij) denotes a matrix whose (i, j)th element is aij. We will use ai to denote the ith row of A which, using our convention, we will assume to be a column vector. P[A] means the probability of the event A. For a random variable X, we write E[X] and Var(X) for its mean and variance, respectively. The floor function of a real number x, denoted by ⌊x⌋, returns the largest integer less than or equal to x. The ceiling function, denoted by ⌈x⌉, gives the smallest integer not less than x.


2 Robust Linear Optimization

2.1 A Linear Programming Problem with Data Uncertainty

Let us consider the LP problem²

maximize   c′x
subject to Ax ≤ b,
           l ≤ x ≤ u,          (1)

where c, l, u ∈ Rn, b ∈ Rm, A = (aij) is an m × n matrix, and x ∈ Rn is the vector of decision variables. We assume, without loss of generality, that only the elements of the matrix A are subject to uncertainty. Indeed, if c and b are also uncertain, the problem can be reformulated so that uncertainty exists only in the entries of A ([BT06], [BTGN09]).

Each uncertain element of A is modeled as a random variable whose range is symmetrically bounded around its mean (or nominal value). In particular, we assume that aij ∈ [āij − âij, āij + âij], where āij = E[aij] and âij ≥ 0. Note that if âij = 0, the corresponding aij is deterministic and takes the value āij. For each row i of A, we define Ji ≜ {j | âij > 0}, i.e., Ji ≜ {j | aij is uncertain}. We assume that the aij, for all i and j ∈ Ji, are independent of each other. It will be generally assumed that the probability distribution of each uncertain aij is known. However, we will also consider the case where only limited distributional information on aij is available, such as the first moment, or the first and second moments, of aij. The following symmetry assumption on the distribution will be in effect for some of the results we will present.

Assumption A

For all i and j ∈ Ji, the probability distribution of aij is symmetric over [āij − âij, āij + âij], that is,

Faij(āij − a) = 1 − Faij(āij + a),   0 ≤ a ≤ âij,

where Faij is the cumulative distribution function of aij.

Given the data uncertainty structure for A, one may elect to solve the nominal formulation, where each uncertain aij is replaced by its mean value:

zN = maximize   c′x
     subject to Σj āij xj ≤ bi,  i = 1, . . . , m,          (2)
                l ≤ x ≤ u.

One disadvantage of the nominal formulation is that its optimal solution is likely to violate the constraints Ax ≤ b of (1). This leads one to consider a solution that is guaranteed to satisfy Ax ≤ b for all realizations of the uncertain aij's, while maximizing the objective value. To obtain such a solution, one needs to solve the fat formulation

zF = maximize   c′x
     subject to max_{ai∈Ui} {a′i x} ≤ bi,  i = 1, . . . , m,          (3)
                l ≤ x ≤ u,

² The main role of the constraints l ≤ x ≤ u is to make the feasible region bounded, so that the case of unbounded objective value can be avoided. Other than that, they do not play any other role in the analysis that follows.


where the uncertainty set for the ith row, Ui, is given by

Ui ≜ {ai | aij ∈ [āij − âij, āij + âij], ∀ j}.

It is not difficult to see that the formulation (3) can be written as the LP formulation

zF = maximize   c′x          (4)
     subject to Σj āij xj + Σ_{j∈Ji} âij yj ≤ bi,  i = 1, . . . , m,
                −yj ≤ xj ≤ yj,  ∀ j ∈ ∪_{i=1}^{m} Ji,          (4a)
                l ≤ x ≤ u.

We note that the constraints (4a) can be replaced with −y ≤ x ≤ y. This possibly adds to the formulation the additional constraints −yj ≤ xj ≤ yj for all j ∉ ∪_{i=1}^{m} Ji. Their presence, however, does not alter the optimal solution of (4). The following lemma is standard in robust optimization, hence we skip the proof.

Lemma 1 Let (x, y) be an optimal solution of (4). Then x is a feasible solution of (1) for every possible realization of A. Moreover, zF ≤ zN.

As Lemma 1 implies, by solving the fat formulation, one may obtain a solution with an inferior objective value in exchange for its full robustness to (i.e., immunity against) data uncertainty.
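As an informal numerical illustration of this trade-off (not part of the formal development), the following sketch compares a nominal solution with a worst-case-scaled one on a hypothetical single-constraint instance; all numbers are illustrative:

```python
import random

# Hypothetical instance of one row of Ax <= b: sum_j a_j x_j <= b with
# a_j uniform on [abar_j - ahat_j, abar_j + ahat_j].
abar = [2.0, 3.0, 1.0]   # mean values abar_j = E[a_j]
ahat = [0.5, 1.0, 0.2]   # deviation bounds ahat_j
b = 13.0

# Nominal solution: feasible for the mean data, with the constraint tight.
x_nom = [2.0, 2.0, 3.0]  # sum_j abar_j * x_j = 4 + 6 + 3 = 13 = b
# "Fat" (worst-case) solution: scaled down so that even the extreme
# realization sum_j (abar_j + ahat_j) x_j stays at b.
worst = sum((m + h) * xj for m, h, xj in zip(abar, ahat, x_nom))
x_fat = [xj * b / worst for xj in x_nom]

random.seed(0)
viol_nom = viol_fat = 0
for _ in range(10_000):
    a = [random.uniform(m - h, m + h) for m, h in zip(abar, ahat)]
    viol_nom += sum(aj * xj for aj, xj in zip(a, x_nom)) > b
    viol_fat += sum(aj * xj for aj, xj in zip(a, x_fat)) > b

print(viol_nom / 10_000, viol_fat / 10_000)
```

The nominal solution violates the constraint in roughly half of the draws (it sits exactly on the mean constraint), while the fat solution never does, at the cost of a smaller objective value.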

2.2 The Robust Formulation

In order to construct a formulation that is less conservative than the fat formulation (3), we follow Bertsimas and Sim [BS04] and introduce the uncertainty budget Γi ∈ [0, |Ji|] for each row i = 1, . . . , m of the matrix A. The role of the uncertainty budget Γi is to impose the uncertainty budget constraint

Σ_{j∈Ji} |aij − āij| / âij ≤ Γi.

That is, Γi limits the deviations of the aij, ∀ j ∈ Ji, from their respective mean values by imposing an ℓ1-norm constraint on the vector of (aij − āij)/âij for j ∈ Ji. The motivation for this constraint comes from the expectation that not all uncertain elements of the ith row will take their extreme values at the same time. We define the restricted uncertainty set Ri(Γi) as

Ri(Γi) ≜ { ai | aij ∈ [āij − âij, āij + âij], ∀ j;  Σ_{j∈Ji} |aij − āij| / âij ≤ Γi }.

In other words, Ri(Γi) is the set of all realizations of ai that satisfy the uncertainty budget constraint.
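As a quick illustrative sketch (names and numbers are hypothetical), membership in Ri(Γi) amounts to two checks, the box bounds and the budget:

```python
def in_restricted_set(a, abar, ahat, gamma):
    """Check whether a realization a of row a_i lies in R_i(Gamma_i):
    every a_j within [abar_j - ahat_j, abar_j + ahat_j], and the scaled
    deviations of the uncertain entries summing to at most gamma."""
    total = 0.0
    for aj, m, h in zip(a, abar, ahat):
        if abs(aj - m) > h + 1e-12:
            return False             # outside the box U_i
        if h > 0:                    # j in J_i (uncertain entry)
            total += abs(aj - m) / h
    return total <= gamma + 1e-12    # uncertainty budget constraint

abar, ahat = [2.0, 3.0, 1.0], [0.5, 1.0, 0.0]
# Full deviation on both uncertain entries consumes a budget of 2:
print(in_restricted_set([2.5, 2.0, 1.0], abar, ahat, 1.0))  # False
print(in_restricted_set([2.5, 2.0, 1.0], abar, ahat, 2.0))  # True
```

With Γi = |Ji| the budget is never binding and Ri(Γi) coincides with Ui; with Γi = 0 every uncertain entry is pinned to its mean.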

We define the robust formulation as

zR(Γ) = maximize   c′x
        subject to max_{ai∈Ri(Γi)} {a′i x} ≤ bi,  i = 1, . . . , m,          (5)
                   l ≤ x ≤ u,


where zR(Γ) denotes the optimal objective value for a given Γ = (Γ1, . . . , Γm). The following lemma establishes a useful monotonicity property; we include a proof in Appendix A for completeness.

Lemma 2 zR(Γ) is nonincreasing as Γ increases componentwise, and zF ≤ zR(Γ) ≤ zN.

Based on Lemma 2, one is able to strike a balance between the robustness of a solution and its objective value by varying Γi between 0 and |Ji| for all i. If Γi = |Ji|, the ith constraint is fully protected against violation. If Γi = 0, on the other hand, the ith constraint is not protected. In this respect, Γi can also be viewed as the level of protection for the ith constraint. The difference zN − zR(Γ) is the degradation of the objective value that results from improving the level of protection of the constraints by selecting Γ.

We now show that the robust formulation (5) can be recast as an LP formulation. The formulation we obtain is equivalent to the one in Bertsimas and Sim [BS04]; we provide a more concise proof for completeness and because the derivation is useful in deriving our bounds and in generalizing to asymmetrically bounded random variables as well as to different types of the uncertainty budget constraint.

For any x, the maximization problem in the ith constraint of (5) can be written as

maximize   a′i x          (6)
subject to aij ≤ āij + âij,  ∀ j,          (6a)
           aij ≥ āij − âij,  ∀ j,          (6b)
           Σ_{j∈Ji} wij ≤ Γi,          (6c)
           wij ≥ (aij − āij)/âij,  ∀ j ∈ Ji,          (6d)
           wij ≥ −(aij − āij)/âij,  ∀ j ∈ Ji,          (6e)
           wij ≥ 0,  ∀ j ∈ Ji,

where aij, ∀ j, and wij, ∀ j ∈ Ji, are the decision variables. Let λij, µij, zi, νij, τij be the dual variables for the constraints (6a)–(6e), respectively. Then the dual of (6) is given by (after some simplifications)

minimize   Σj āij xj + Σ_{j∈Ji} âij (λij + µij) + Γi zi          (7)
subject to λij − µij + νij − τij = xj,  ∀ j ∈ Ji,
           zi − âij (νij + τij) ≥ 0,  ∀ j ∈ Ji,
           λij, µij, νij, τij ≥ 0,  ∀ j ∈ Ji,
           zi ≥ 0.

Theorem 1 The robust formulation (5) is equivalent to the LP formulation

zR(Γ) = maximize   c′x          (8)
        subject to Σj āij xj + Σ_{j∈Ji} âij (λij + µij) + Γi zi ≤ bi,  i = 1, . . . , m,
                   λij − µij + νij − τij = xj,  i = 1, . . . , m, ∀ j ∈ Ji,          (8a)
                   zi − âij (νij + τij) ≥ 0,  i = 1, . . . , m, ∀ j ∈ Ji,          (8b)
                   λij, µij, νij, τij ≥ 0,  i = 1, . . . , m, ∀ j ∈ Ji,
                   zi ≥ 0,  i = 1, . . . , m,
                   l ≤ x ≤ u.

Proof: Let (x∗, λ∗ij, µ∗ij, z∗i, ν∗ij, τ∗ij) be an optimal solution of (8), and let x̃ be an optimal solution of (5). We will establish the equivalence by showing that x∗ is a feasible solution of (5) with c′x∗ = c′x̃.

Fix x = x∗ in (6) and (7), and let (a∗i, w∗ij) be an optimal solution of (6). Since (λ∗ij, µ∗ij, z∗i, ν∗ij, τ∗ij) is a feasible solution of (7), the weak duality between (6) and (7) yields

(a∗i)′x∗ ≤ Σj āij x∗j + Σ_{j∈Ji} âij (λ∗ij + µ∗ij) + Γi z∗i.

From (a∗i)′x∗ = max_{ai∈Ri(Γi)} {a′i x∗} and the feasibility of (x∗, λ∗ij, µ∗ij, z∗i, ν∗ij, τ∗ij) to (8), we have for all i

max_{ai∈Ri(Γi)} {a′i x∗} ≤ Σj āij x∗j + Σ_{j∈Ji} âij (λ∗ij + µ∗ij) + Γi z∗i ≤ bi.

This shows that x∗ is feasible to (5), implying that c′x∗ ≤ c′x̃.

Next, set x = x̃ in (6) and (7), and let (ãi, w̃ij) be an optimal solution of (6). By strong duality, there exists a feasible (λ̃ij, µ̃ij, z̃i, ν̃ij, τ̃ij) to (7) such that

max_{ai∈Ri(Γi)} {a′i x̃} = ã′i x̃ = Σj āij x̃j + Σ_{j∈Ji} âij (λ̃ij + µ̃ij) + Γi z̃i.

Since x̃ is feasible to (5), we have for all i

bi ≥ max_{ai∈Ri(Γi)} {a′i x̃} = Σj āij x̃j + Σ_{j∈Ji} âij (λ̃ij + µ̃ij) + Γi z̃i.

This shows that (x̃, λ̃ij, µ̃ij, z̃i, ν̃ij, τ̃ij) satisfies the first set of constraints of (8). Since the other constraints of (8) are also satisfied by (x̃, λ̃ij, µ̃ij, z̃i, ν̃ij, τ̃ij), it is a feasible solution of (8), from which we have c′x̃ ≤ c′x∗. ⊓⊔

It can be shown that the LP formulation (8) can be reshaped into the following LP formulation derived in Bertsimas and Sim [BS04]; see Appendix B for an explanation.

zR(Γ) = maximize   c′x          (9)
        subject to Σj āij xj + Γi zi + Σ_{j∈Ji} pij ≤ bi,  i = 1, . . . , m,
                   zi + pij ≥ âij yj,  i = 1, . . . , m, ∀ j ∈ Ji,
                   pij ≥ 0,  i = 1, . . . , m, ∀ j ∈ Ji,
                   zi ≥ 0,  i = 1, . . . , m,
                   −yj ≤ xj ≤ yj,  ∀ j ∈ ∪_{i=1}^{m} Ji,
                   yj ≥ 0,  ∀ j ∈ ∪_{i=1}^{m} Ji,
                   l ≤ x ≤ u.


2.3 Bounds on the Constraint Violation Probability

For a given Γ > 0, let x∗ be an optimal solution of the robust formulation (5), which is obtained by solving the LP formulation (8) (or (9)). Let us call x∗ the robust solution. To avoid degenerate cases and without loss of generality, we assume |x∗| > 0.³ Unless Γi = |Ji| for all i, x∗ may well violate the constraints Ax ≤ b of (1) due to the uncertainty in A.

Computing the constraint violation probability P[Σj aij x∗j > bi], i = 1, . . . , m, exactly is often a challenging task. Therefore one would be interested in upper bounds on this probability. Bertsimas and Sim [BS04] assumed that the probability distributions of the random aij's are unknown except that they are symmetric. Under this assumption, they derived the following bound on the ith constraint violation probability:

P[Σj aij x∗j > bi] ≤ exp(−Γi² / (2|Ji|)).          (10)

Henceforth, we will use the terminology "Bound (10)" to refer to the right hand side of (10), and similar terminology for such bounds. Bound (10) is an a priori bound in the sense that it is not a function of the robust solution x∗. Moreover, it does not use any distributional information on aij, ∀ j ∈ Ji, other than the symmetry of the probability distributions. For these reasons, Bound (10) can be weaker than other bounds that exploit the distributional information and/or the robust solution x∗.
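As an illustrative sketch (not part of the formal development), one can compare Bound (10) against a Monte Carlo estimate of the tail probability P[Σj ηj ≥ Γ] that the underlying argument controls, here in the extreme case γj = 1 for all j and ηj uniform on [−1, 1]; the parameter values are hypothetical:

```python
import math
import random

J, Gamma = 10, 5.0
bound10 = math.exp(-Gamma**2 / (2 * J))   # right-hand side of (10)

# Monte Carlo estimate of P[sum_j eta_j >= Gamma] with eta_j uniform
# on [-1, 1], one symmetric distribution covered by the bound.
random.seed(1)
n = 100_000
hits = sum(
    sum(random.uniform(-1.0, 1.0) for _ in range(J)) >= Gamma
    for _ in range(n)
)
empirical = hits / n
print(empirical, bound10)
```

For these values the empirical tail is far below exp(−25/20) ≈ 0.287, illustrating how loose a distribution-free bound can be and motivating the distribution-dependent bounds developed next.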

In this section, we develop new bounds on the constraint violation probability, which are stronger than Bound (10). This will be accomplished by making use of full or limited distributional information on the aij's. Obtaining stronger bounds is important because, as we shall show later, they lead to a more cost-effective solution under the same probabilistic guarantee of feasibility. The following lemma will be used in the results that we will present.

Lemma 3 Let Si be a subset of Ji such that |Si| = ⌊Γi⌋, and let ti ∈ Ji \ Si. Then

max_{ai∈Ri(Γi)} {a′i x∗} = Σj āij x∗j + max_{Si∪{ti}} { Σ_{j∈Si} âij |x∗j| + (Γi − ⌊Γi⌋) âiti |x∗ti| },          (11)

where it is understood that {ti} = ∅ if Γi is an integer.

Proof: For all j ∉ Ji, the corresponding terms of the left hand side of (11) take the constant value Σ_{j∉Ji} āij x∗j. Sort the x∗j, ∀ j ∈ Ji, in nonincreasing order of âij |x∗j|. Let x∗s1, . . . , x∗s⌊Γi⌋ be the first ⌊Γi⌋ elements in that order. To maximize the left hand side of (11), for k = 1, . . . , ⌊Γi⌋, we choose aisk = āisk + âisk if x∗sk > 0 and aisk = āisk − âisk otherwise. When Γi is not an integer, we can consider one more element, x∗s⌊Γi⌋+1. If it is positive, we choose āis⌊Γi⌋+1 + (Γi − ⌊Γi⌋) âis⌊Γi⌋+1; otherwise, we choose āis⌊Γi⌋+1 − (Γi − ⌊Γi⌋) âis⌊Γi⌋+1. Note the scaling factor (Γi − ⌊Γi⌋) in the last step, which ensures ai ∈ Ri(Γi). The right hand side of (11) achieves the same value. ⊓⊔
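The greedy construction in the proof of Lemma 3 translates directly into code. The following sketch evaluates the worst-case row value on a toy instance with hypothetical numbers (here with āij = 0 so that only the deviation terms matter):

```python
def worst_case_row_value(x, abar, ahat, gamma):
    """Evaluate max over a_i in R_i(Gamma_i) of a_i' x as in (11): full
    deviations on the floor(gamma) entries with the largest ahat_j * |x_j|,
    plus a fractional deviation on the next-largest one."""
    base = sum(m * xj for m, xj in zip(abar, x))
    devs = sorted((h * abs(xj) for h, xj in zip(ahat, x)), reverse=True)
    k = int(gamma)            # floor(gamma), since gamma >= 0
    extra = sum(devs[:k])
    if k < len(devs):
        extra += (gamma - k) * devs[k]
    return base + extra

# Toy instance: x = (1, -2), abar = (0, 0), ahat = (1, 3), Gamma = 1.5:
# full deviation on the entry with ahat|x| = 6, half a unit on the other.
print(worst_case_row_value([1.0, -2.0], [0.0, 0.0], [1.0, 3.0], 1.5))  # 6.5
```

Note that the sign of each x∗j only determines the direction of the deviation; the attained increase over the nominal value depends on âij |x∗j| alone, which is why sorting by that product suffices.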

³ If not, we can always fix to zero all zero components of x∗ and recast the LP problem in a lower-dimensional space.


2.3.1 A Distribution-Dependent Bound

We first derive an a priori bound that utilizes the probability distributions of the random aij's. Let ηij = (aij − āij)/âij for all i and j ∈ Ji. Define the logarithmic moment generating function of ηij as Ληij(θ) ≜ log E[e^{θηij}].

Theorem 2 Let Assumption A be in effect.

(a) For the ith constraint,

P[Σj aij x∗j > bi] ≤ exp(− sup_{θ≥0} { θΓi − Σ_{j∈Ji} Ληij(θ) }).          (12)

(b) Bound (12) < Bound (10).
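Before the proof, a small numerical sketch may help fix ideas. For ηij taking the values ±1 with probability 1/2 each (one admissible symmetric distribution), Ληij(θ) = log cosh(θ), and the supremum in (12) can be approximated by a grid search; the instance sizes below are hypothetical:

```python
import math

J, Gamma = 10, 5.0  # |J_i| = 10 uncertain entries, budget Gamma_i = 5

def exponent(theta):
    # theta * Gamma_i - sum_j Lambda_eta(theta), with Lambda = log cosh
    return theta * Gamma - J * math.log(math.cosh(theta))

# Crude grid search for the sup over theta >= 0 (a 1-D solver would also do).
sup = max(exponent(k * 1e-3) for k in range(5001))
bound12 = math.exp(-sup)
bound10 = math.exp(-Gamma**2 / (2 * J))
print(bound12, bound10)  # bound12 is the tighter of the two
```

Even for this two-point distribution, which is the extreme case in part (b), Bound (12) (about 0.27 here) is strictly below Bound (10) (about 0.287); for smoother distributions the gap is typically much larger.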

Proof: (a) The proof of this part proceeds similarly to the proofs of Proposition 2 and Theorem 2 of Bertsimas and Sim [BS04]. Under Assumption A, the random variables ηij are symmetrically distributed over [−1, 1]. Let S∗i ∪ {t∗i} = arg max_{Si∪{ti}} { Σ_{j∈Si} âij |x∗j| + (Γi − ⌊Γi⌋) âiti |x∗ti| } in (11), where again it is understood that {t∗i} = ∅ if Γi is an integer. Let r = arg min_{j∈S∗i∪{t∗i}} âij |x∗j|. (Notice that if Γi is an integer, r = arg min_{j∈S∗i} âij |x∗j|; otherwise, r = t∗i.) We have

P[Σj aij x∗j > bi]
  = P[Σj āij x∗j + Σ_{j∈Ji} âij x∗j ηij > bi]
  = P[Σ_{j∈Ji} âij |x∗j| ηij > bi − Σj āij x∗j]
  ≤ P[Σ_{j∈Ji} âij |x∗j| ηij > Σ_{j∈S∗i} âij |x∗j| + (Γi − ⌊Γi⌋) âit∗i |x∗t∗i|]
  = P[Σ_{j∈Ji\S∗i} âij |x∗j| ηij > Σ_{j∈S∗i} âij |x∗j| (1 − ηij) + (Γi − ⌊Γi⌋) âit∗i |x∗t∗i|]
  ≤ P[Σ_{j∈Ji\S∗i} âij |x∗j| ηij > âir |x∗r| { Σ_{j∈S∗i} (1 − ηij) + Γi − ⌊Γi⌋ }]
  = P[Σ_{j∈Ji\S∗i} (âij |x∗j| / (âir |x∗r|)) ηij + Σ_{j∈S∗i} ηij > Γi]
  = P[Σ_{j∈Ji} γij ηij > Γi]
  ≤ P[Σ_{j∈Ji} γij ηij ≥ Γi],

where

γij = âij |x∗j| / (âir |x∗r|)  if j ∈ Ji \ S∗i,   and   γij = 1  if j ∈ S∗i.

The first equality follows from the definitions of the ηij's, and the second equality from the fact that âij x∗j ηij is equal to âij |x∗j| ηij in distribution. The first inequality is due to the feasibility of x∗ in (5), namely (cf. Lemma 3),

max_{ai∈Ri(Γi)} {a′i x∗} = Σj āij x∗j + Σ_{j∈S∗i} âij |x∗j| + (Γi − ⌊Γi⌋) âit∗i |x∗t∗i| ≤ bi.

The second inequality follows from âir |x∗r| = min_{j∈S∗i∪{t∗i}} âij |x∗j|. The fourth equality holds because Σ_{j∈S∗i} 1 = ⌊Γi⌋. Note that 0 ≤ γij ≤ 1, ∀ j ∈ Ji, because âir |x∗r| ≥ âij |x∗j|, ∀ j ∈ Ji \ S∗i. (If âir |x∗r| < âij |x∗j| for some j ∈ Ji \ S∗i, then r could not belong to S∗i ∪ {t∗i}.)

Using Markov's inequality, for all $\theta \ge 0$ we obtain

\begin{align*}
\mathbb{P}\Big[\sum_{j \in J_i} \gamma_{ij}\eta_{ij} \ge \Gamma_i\Big]
&\le e^{-\theta\Gamma_i}\, \mathbb{E}\Big[e^{\theta \sum_{j \in J_i} \gamma_{ij}\eta_{ij}}\Big] \\
&= e^{-\theta\Gamma_i} \prod_{j \in J_i} \mathbb{E}\big[e^{\theta\gamma_{ij}\eta_{ij}}\big] \\
&= e^{-\theta\Gamma_i} \prod_{j \in J_i} \int_{-1}^{1} \sum_{k=0}^{\infty} \frac{(\theta\gamma_{ij}\eta)^k}{k!}\, dF_{\eta_{ij}}(\eta) \\
&\le e^{-\theta\Gamma_i} \prod_{j \in J_i} \int_{-1}^{1} \sum_{k=0}^{\infty} \frac{(\theta\eta)^k}{k!}\, dF_{\eta_{ij}}(\eta) \tag{13} \\
&= e^{-\theta\Gamma_i} \prod_{j \in J_i} \mathbb{E}\big[e^{\theta\eta_{ij}}\big]
= \exp\Big(-\theta\Gamma_i + \sum_{j \in J_i} \Lambda_{\eta_{ij}}(\theta)\Big).
\end{align*}

The first equality follows from the independence of the random variables $\eta_{ij}$, and the second equality from the Maclaurin series for $e^{\theta\gamma_{ij}\eta_{ij}}$. The second inequality is due to the following two properties, which result from the symmetry of the probability distribution of $\eta_{ij}$ and $0 \le \gamma_{ij} \le 1$: for all $k = 0, 1, \ldots$,
\[
\int_{-1}^{1} \frac{(\theta\gamma_{ij}\eta)^{2k+1}}{(2k+1)!}\, dF_{\eta_{ij}}(\eta) = \int_{-1}^{1} \frac{(\theta\eta)^{2k+1}}{(2k+1)!}\, dF_{\eta_{ij}}(\eta) = 0,
\qquad
\int_{-1}^{1} \frac{(\theta\gamma_{ij}\eta)^{2k}}{(2k)!}\, dF_{\eta_{ij}}(\eta) \le \int_{-1}^{1} \frac{(\theta\eta)^{2k}}{(2k)!}\, dF_{\eta_{ij}}(\eta).
\]

Optimizing over $\theta$, we obtain
\[
\mathbb{P}\Big[\sum_j \tilde a_{ij} x^*_j > b_i\Big] \le \exp\Big(-\sup_{\theta \ge 0}\Big\{\theta\Gamma_i - \sum_{j \in J_i} \Lambda_{\eta_{ij}}(\theta)\Big\}\Big).
\]

(b) For any $\theta \ge 0$ and symmetric probability distributions for the $\tilde a_{ij}$, $j \in J_i$,

\begin{align*}
\theta\Gamma_i - \sum_{j \in J_i} \Lambda_{\eta_{ij}}(\theta)
&= \theta\Gamma_i - \sum_{j \in J_i} \log \int_{-1}^{1} \sum_{k=0}^{\infty} \frac{(\theta\eta)^k}{k!}\, dF_{\eta_{ij}}(\eta) \\
&= \theta\Gamma_i - \sum_{j \in J_i} \log \int_{-1}^{1} \sum_{k=0}^{\infty} \frac{(\theta\eta)^{2k}}{(2k)!}\, dF_{\eta_{ij}}(\eta) \\
&\ge \theta\Gamma_i - \sum_{j \in J_i} \log \sum_{k=0}^{\infty} \frac{\theta^{2k}}{(2k)!} \int_{-1}^{1} dF_{\eta_{ij}}(\eta) \\
&= \theta\Gamma_i - \sum_{j \in J_i} \log \sum_{k=0}^{\infty} \frac{\theta^{2k}}{(2k)!} \\
&\ge \theta\Gamma_i - \sum_{j \in J_i} \log \sum_{k=0}^{\infty} \frac{\theta^{2k}}{2^k k!}
= \theta\Gamma_i - \sum_{j \in J_i} \log e^{\theta^2/2},
\end{align*}

where the second equality follows from the symmetry of the probability distributions, and the first inequality from $\eta_{ij} \in [-1, 1]$. Note that equality holds throughout only when $\mathbb{P}[\eta_{ij} = -1] = \mathbb{P}[\eta_{ij} = 1] = 1/2$ and $\theta = 0$. However, since
\[
\frac{\partial}{\partial\theta}\Big\{\theta\Gamma_i - \sum_{j \in J_i}\Lambda_{\eta_{ij}}(\theta)\Big\}\Big|_{\theta=0} = \Gamma_i - \sum_{j \in J_i}\mathbb{E}[\eta_{ij}] = \Gamma_i > 0,
\]
$\theta = 0$ cannot maximize $\theta\Gamma_i - \sum_{j \in J_i}\Lambda_{\eta_{ij}}(\theta)$. Therefore
\[
\exp\Big(-\sup_{\theta\ge 0}\Big\{\theta\Gamma_i - \sum_{j\in J_i}\Lambda_{\eta_{ij}}(\theta)\Big\}\Big)
< \exp\Big(-\sup_{\theta\ge 0}\Big\{\theta\Gamma_i - \sum_{j\in J_i}\log e^{\theta^2/2}\Big\}\Big)
= \exp\Big(-\sup_{\theta\ge 0}\Big\{\theta\Gamma_i - |J_i|\frac{\theta^2}{2}\Big\}\Big)
= \exp\Big(-\frac{\Gamma_i^2}{2|J_i|}\Big). \qquad\square
\]

Since $\Lambda_{\eta_{ij}}(\theta)$ is a convex function of $\theta$, $\sup_{\theta\ge 0}\{\cdot\}$ in Bound (12) is a convex optimization problem, which can be solved efficiently. Like Bound (10), Bound (12) is nonincreasing in $\Gamma_i$. The monotonicity of Bounds (10) and (12) and Theorem 2(b) lead to the following important implication: to ensure the same constraint violation probability, Bound (12) requires a smaller $\Gamma_i$ than Bound (10) does. Since $z_R(\Gamma)$ is a nonincreasing function of $\Gamma$, one can achieve a higher $z_R(\Gamma)$ by using the $\Gamma_i$ required by Bound (12), while maintaining the same constraint violation probability. The following corollary formalizes this deduction.
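To make this one-dimensional convex problem concrete, the sketch below (our own illustration, not code from the paper; the function names are ours) evaluates Bound (12) when every $\eta_{ij}$ is uniform on $[-1, 1]$, in which case $\Lambda_{\eta_{ij}}(\theta) = \log(\sinh\theta/\theta)$. Since the objective $\theta\Gamma_i - |J_i|\,\Lambda(\theta)$ is concave in $\theta$, a simple ternary search suffices.

```python
import math

def log_mgf_uniform(theta):
    # Λ(θ) = log E[exp(θη)] for η uniform on [-1, 1], i.e. log(sinh(θ)/θ)
    if abs(theta) < 1e-8:
        return theta * theta / 6.0  # series expansion near θ = 0
    return math.log(math.sinh(theta) / theta)

def bound12_uniform(gamma, n, theta_hi=60.0, iters=100):
    """Bound (12) when all n = |J_i| uncertain coefficients in row i
    have uniformly distributed deviations η_ij."""
    g = lambda t: gamma * t - n * log_mgf_uniform(t)
    lo, hi = 0.0, theta_hi
    for _ in range(iters):          # ternary search: g is concave in θ
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if g(m1) < g(m2):
            lo = m1
        else:
            hi = m2
    return math.exp(-g(0.5 * (lo + hi)))
```

For $\Gamma_i = 5$ and $|J_i| = 10$ this evaluates to roughly 0.017, in line with the Bound (12)-U column of Table 1 and far below the distribution-independent value 0.29 of Bound (10).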

Corollary 1 Let $\epsilon_i$ be the maximum allowable violation probability for the $i$th constraint. Let $\Gamma^B = (\Gamma^B_1, \ldots, \Gamma^B_m)$ (respectively, $\Gamma^D$) be such that $\Gamma^B_i$ (respectively, $\Gamma^D_i$) is the smallest $\Gamma_i$ satisfying Bound (10) $\le \epsilon_i$ (respectively, Bound (12) $\le \epsilon_i$) for $i = 1, \ldots, m$. Then $z_R(\Gamma^B) \le z_R(\Gamma^D)$.

In other words, Bound (12) enables one to obtain an equally robust solution with a better objective value. The difference $z_R(\Gamma^D) - z_R(\Gamma^B)$ is the gain in the objective value that results from exploiting distributional information on the uncertain problem data, which we refer to as the estimation discount on the price of robustness. We remark that $\Gamma^B_i$ and $\Gamma^D_i$ in Corollary 1 can be determined by binary search on $[0, |J_i|]$, since Bounds (10) and (12) are monotone in $\Gamma_i$.
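The binary search just mentioned can be sketched as follows; as an illustration (our own, with hypothetical names) we apply it to the closed-form relaxation $\exp(-\Gamma_i^2/(2|J_i|))$ of Bound (10) that appears in the proof of Theorem 2(b). Any nonincreasing bound can be plugged in for `bound`.

```python
import math

def smallest_gamma(bound, n, eps, iters=60):
    """Binary search on [0, n] for the smallest Γ_i with bound(Γ_i) ≤ eps;
    valid whenever bound is nonincreasing in Γ_i."""
    lo, hi = 0.0, float(n)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if bound(mid) <= eps:
            hi = mid        # feasible: try a smaller budget
        else:
            lo = mid        # infeasible: the budget must grow
    return hi

# Example: the relaxation exp(-Γ²/(2n)) of Bound (10), with n = |J_i| = 10
n, eps = 10, 0.05
gamma = smallest_gamma(lambda g: math.exp(-g * g / (2 * n)), n, eps)
```

The answer here is $\sqrt{2n\ln(1/\epsilon)} \approx 7.74$, close to the value 7.76 that Table 2 reports for the exact Bound (10).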

2.3.2 Solution-Dependent Bounds

In the derivation of Bound (12), there are a few steps that can potentially cause the bound to be loose. For instance, the inequality (13), which was used to make Bound (12) independent of the robust solution $x^*$, is such a step. Tightening the bound leads us to the bound in Theorem 3, which relies on the probability distributions of the random $\tilde a_{ij}$'s as well as the robust solution $x^*$. (This bound can be called an a posteriori bound, in the sense that it is a function of $x^*$.) Its derivation is actually simpler than that of Bound (12), and the resulting bound is stronger. A host of numerical examples we present later demonstrate that the difference can be dramatic.

Theorem 3 Let Assumption A be in effect.

(a) Let $C_i(x^*) = b_i - \sum_j \bar a_{ij} x^*_j$ for all $i$ and $\beta_{ij} = \hat a_{ij}|x^*_j|$ for all $i$ and $j \in J_i$. Then for the $i$th constraint,
\[
\mathbb{P}\Big[\sum_j \tilde a_{ij} x^*_j > b_i\Big] \le \exp\Big(-\sup_{\theta \ge 0}\Big\{\theta C_i(x^*) - \sum_{j \in J_i} \Lambda_{\eta_{ij}}(\theta\beta_{ij})\Big\}\Big). \tag{14}
\]

(b) Bound (14) ≤ Bound (12).

Proof: (a) Using the random variables $\eta_{ij}$ introduced earlier and following the steps of the proof of Theorem 2(a), we have
\[
\mathbb{P}\Big[\sum_j \tilde a_{ij} x^*_j > b_i\Big] \le \mathbb{P}\Big[\sum_{j \in J_i} \beta_{ij}\eta_{ij} \ge C_i(x^*)\Big]. \tag{15}
\]
From Markov's inequality, for all $\theta \ge 0$,
\[
\mathbb{P}\Big[\sum_{j \in J_i} \beta_{ij}\eta_{ij} \ge C_i(x^*)\Big]
\le e^{-\theta C_i(x^*)}\, \mathbb{E}\Big[e^{\theta\sum_{j \in J_i}\beta_{ij}\eta_{ij}}\Big]
= e^{-\theta C_i(x^*)} \prod_{j \in J_i} \mathbb{E}\big[e^{\theta\beta_{ij}\eta_{ij}}\big]
= \exp\Big(-\theta C_i(x^*) + \sum_{j \in J_i} \Lambda_{\eta_{ij}}(\theta\beta_{ij})\Big).
\]
Optimizing over $\theta$, we obtain
\[
\mathbb{P}\Big[\sum_j \tilde a_{ij} x^*_j > b_i\Big] \le \exp\Big(-\sup_{\theta \ge 0}\Big\{\theta C_i(x^*) - \sum_{j \in J_i} \Lambda_{\eta_{ij}}(\theta\beta_{ij})\Big\}\Big).
\]

(b) As in the proof of Theorem 2(a), let $S^*_i \cup \{t^*_i\} = \arg\max_{S_i\cup\{t_i\}}\big\{\sum_{j\in S_i}\hat a_{ij}|x^*_j| + (\Gamma_i-\lfloor\Gamma_i\rfloor)\hat a_{it_i}|x^*_{t_i}|\big\}$ and $r = \arg\min_{j\in S^*_i\cup\{t^*_i\}}\hat a_{ij}|x^*_j|$. Consider the probability in (15) and scale both sides of $\sum_{j\in J_i}\beta_{ij}\eta_{ij} \ge C_i(x^*)$ by $1/\beta_{ir}$. Then, following the remaining steps in (a), we obtain
\[
\mathbb{P}\Big[\sum_j \tilde a_{ij} x^*_j > b_i\Big] \le \exp\Big(-\sup_{\theta\ge 0}\Big\{\theta\frac{C_i(x^*)}{\beta_{ir}} - \sum_{j\in J_i}\Lambda_{\eta_{ij}}\Big(\theta\frac{\beta_{ij}}{\beta_{ir}}\Big)\Big\}\Big). \tag{16}
\]
This bound is equivalent to Bound (14). We will show that Bound (16) ≤ Bound (12). Let $\gamma_{ij} \triangleq \beta_{ij}/\beta_{ir}$ for $j \in J_i \setminus S^*_i$ and $\delta_{ij} \triangleq \beta_{ij}/\beta_{ir}$ for $j \in S^*_i$. Note that $0 \le \gamma_{ij} \le 1$ and $\delta_{ij} \ge 1$.

Since $x^*$ is a feasible solution of (5) (cf. Lemma 3),
\[
b_i - \sum_j \bar a_{ij} x^*_j \ge \sum_{j \in S^*_i} \hat a_{ij}|x^*_j| + (\Gamma_i - \lfloor\Gamma_i\rfloor)\hat a_{it^*_i}|x^*_{t^*_i}|. \tag{17}
\]
Multiplying both sides of (17) by $1/\beta_{ir}$, we obtain
\[
\frac{C_i(x^*)}{\beta_{ir}} \ge \sum_{j \in S^*_i} \delta_{ij} + (\Gamma_i - \lfloor\Gamma_i\rfloor)\gamma_{it^*_i} = \sum_{j \in S^*_i} \delta_{ij} + (\Gamma_i - \lfloor\Gamma_i\rfloor),
\]
where the equality follows from the fact that if $\Gamma_i$ is not an integer, then $r = t^*_i$. Therefore
\begin{align*}
&\exp\Big(-\sup_{\theta\ge 0}\Big\{\theta\frac{C_i(x^*)}{\beta_{ir}} - \sum_{j\in J_i}\Lambda_{\eta_{ij}}\Big(\theta\frac{\beta_{ij}}{\beta_{ir}}\Big)\Big\}\Big) \\
&\quad= \exp\Big(-\sup_{\theta\ge 0}\Big\{\theta\frac{C_i(x^*)}{\beta_{ir}} - \sum_{j\in J_i\setminus S^*_i}\Lambda_{\eta_{ij}}(\theta\gamma_{ij}) - \sum_{j\in S^*_i}\Lambda_{\eta_{ij}}(\theta\delta_{ij})\Big\}\Big) \\
&\quad\le \exp\Big(-\sup_{\theta\ge 0}\Big\{\sum_{j\in S^*_i}\theta\delta_{ij} + \theta(\Gamma_i-\lfloor\Gamma_i\rfloor) - \sum_{j\in J_i\setminus S^*_i}\Lambda_{\eta_{ij}}(\theta\gamma_{ij}) - \sum_{j\in S^*_i}\Lambda_{\eta_{ij}}(\theta\delta_{ij})\Big\}\Big). \tag{18}
\end{align*}

Because $0 \le \gamma_{ij} \le 1$, we also have
\begin{align*}
&\exp\Big(-\sup_{\theta\ge 0}\Big\{\theta\Gamma_i - \sum_{j\in J_i}\Lambda_{\eta_{ij}}(\theta)\Big\}\Big) \\
&\quad= \exp\Big(-\sup_{\theta\ge 0}\Big\{\theta\lfloor\Gamma_i\rfloor + \theta(\Gamma_i-\lfloor\Gamma_i\rfloor) - \sum_{j\in J_i\setminus S^*_i}\Lambda_{\eta_{ij}}(\theta) - \sum_{j\in S^*_i}\Lambda_{\eta_{ij}}(\theta)\Big\}\Big) \\
&\quad\ge \exp\Big(-\sup_{\theta\ge 0}\Big\{\theta\lfloor\Gamma_i\rfloor + \theta(\Gamma_i-\lfloor\Gamma_i\rfloor) - \sum_{j\in J_i\setminus S^*_i}\Lambda_{\eta_{ij}}(\theta\gamma_{ij}) - \sum_{j\in S^*_i}\Lambda_{\eta_{ij}}(\theta)\Big\}\Big), \tag{19}
\end{align*}
where the inequality follows from $\Lambda_{\eta_{ij}}(\theta) \ge \Lambda_{\eta_{ij}}(\theta\gamma_{ij})$, due to the symmetry of the probability distribution of $\eta_{ij}$ over $[-1,1]$ (cf. the proof of Theorem 2(a)).

Now our goal is to show that the $\exp(\cdot)$ in (18) is no greater than the $\exp(\cdot)$ in (19). This is true if for all $\theta \ge 0$
\[
\sum_{j\in S^*_i}\theta\delta_{ij} - \sum_{j\in S^*_i}\Lambda_{\eta_{ij}}(\theta\delta_{ij}) \ge \theta\lfloor\Gamma_i\rfloor - \sum_{j\in S^*_i}\Lambda_{\eta_{ij}}(\theta). \tag{20}
\]
Since $|S^*_i| = \lfloor\Gamma_i\rfloor$, the inequality (20) holds if for each $j \in S^*_i$
\[
\theta\delta_{ij} - \Lambda_{\eta_{ij}}(\theta\delta_{ij}) \ge \theta - \Lambda_{\eta_{ij}}(\theta). \tag{21}
\]
Taking the exponential of both sides of (21) and multiplying both sides by $\mathbb{E}[e^{\theta\delta_{ij}\eta_{ij}}]\,\mathbb{E}[e^{\theta\eta_{ij}}]$, we obtain
\[
\mathbb{E}\big[e^{\theta(\delta_{ij}+\eta_{ij})}\big] \ge \mathbb{E}\big[e^{\theta(1+\delta_{ij}\eta_{ij})}\big]. \tag{22}
\]
The inequality (22) holds if for all realizations of $\eta_{ij}$
\[
\delta_{ij} + \eta_{ij} \ge 1 + \delta_{ij}\eta_{ij}. \tag{23}
\]
Since $\delta_{ij} \ge 1$ and $-1 \le \eta_{ij} \le 1$, the inequality (23), which can be rewritten as $(\delta_{ij}-1)(1-\eta_{ij}) \ge 0$, holds. $\square$

Recall that Bounds (10) and (12) require only $\Gamma_i$ (and not the $\Gamma_j$'s, $j \ne i$) for the $i$th constraint violation probability. In contrast, Bound (14) depends on all the $\Gamma_i$'s, because $x^*$ is a function of $\Gamma$.

If Assumption A is not in effect, a slightly different bound is obtained, as shown in Corollary 2. This bound can be useful because in many real-world applications the symmetry assumption could be restrictive.

Corollary 2 Let $C_i(x^*) = b_i - \sum_j \bar a_{ij} x^*_j$ for all $i$ and $\kappa_{ij} = \hat a_{ij} x^*_j$ for all $i$ and $j \in J_i$. Then for the $i$th constraint,
\[
\mathbb{P}\Big[\sum_j \tilde a_{ij} x^*_j > b_i\Big] \le \exp\Big(-\sup_{\theta \ge 0}\Big\{\theta C_i(x^*) - \sum_{j \in J_i} \Lambda_{\eta_{ij}}(\theta\kappa_{ij})\Big\}\Big). \tag{24}
\]

Proof:
\[
\mathbb{P}\Big[\sum_j \tilde a_{ij} x^*_j > b_i\Big] = \mathbb{P}\Big[\sum_j \bar a_{ij} x^*_j + \sum_{j\in J_i}\hat a_{ij} x^*_j \eta_{ij} > b_i\Big] \le \mathbb{P}\Big[\sum_{j\in J_i}\kappa_{ij}\eta_{ij} \ge C_i(x^*)\Big],
\]
and the remaining steps are identical to those of Theorem 3(a). $\square$

2.3.3 Moment-Dependent Bounds

Bounds (12), (14), and (24) on the constraint violation probability made full use of the probability distributions of the random $\tilde a_{ij}$'s to compute the moment generating functions of the $\eta_{ij}$'s. (Recall that $\eta_{ij} = (\tilde a_{ij} - \bar a_{ij})/\hat a_{ij}$.) Therefore, if the probability distributions are not known, but some limited distributional information is available instead, we need a different strategy to establish bounds on the constraint violation probability. We employ the idea of upper bounding the moment generating functions of the $\eta_{ij}$'s (in lieu of computing them exactly) using the first and second moments of the $\tilde a_{ij}$'s and their range information. To that end, we use the following result due to Bennett [Ben62] (also see Dembo and Zeitouni [DZ98]): let $X \le b$ be a random variable with $\bar x = \mathbb{E}[X]$ and $\mathbb{E}[(X-\bar x)^2] \le c^2$ for some $c > 0$. Then for any $\theta \ge 0$,
\[
\mathbb{E}\big[e^{\theta X}\big] \le e^{\theta \bar x}\Big\{\frac{(b-\bar x)^2}{(b-\bar x)^2 + c^2}\, e^{-\theta \frac{c^2}{b-\bar x}} + \frac{c^2}{(b-\bar x)^2 + c^2}\, e^{\theta(b-\bar x)}\Big\}. \tag{25}
\]
Theorem 4 below features two bounds. The first bound requires the robust solution $x^*$ and the first and second moments (or, equivalently, the mean and variance) of $\tilde a_{ij}$ for all $i$ and $j \in J_i$. The second bound, on the other hand, does not need the second moments. As shown in the theorem, both bounds are stronger than Bound (10). Another interesting observation is that neither bound requires the symmetry assumption, Assumption A.
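The Bennett inequality (25) can be checked numerically. The sketch below (our own, with hypothetical function names) compares the right-hand side of (25) with the exact moment generating function $\sinh\theta/\theta$ of a uniform random variable on $[-1, 1]$, for which $b = 1$, $\bar x = 0$, and $c^2 = 1/3$.

```python
import math

def bennett_mgf_bound(theta, b, mean, c2):
    """Right-hand side of (25): upper bound on E[exp(θX)] for X ≤ b
    with E[X] = mean and E[(X - mean)²] ≤ c²."""
    d = b - mean
    w = d * d + c2
    return math.exp(theta * mean) * (
        (d * d / w) * math.exp(-theta * c2 / d)
        + (c2 / w) * math.exp(theta * d))

# Exact MGF of the uniform distribution on [-1, 1] is sinh(θ)/θ
for theta in (0.25, 1.0, 3.0):
    exact = math.sinh(theta) / theta
    assert exact <= bennett_mgf_bound(theta, 1.0, 0.0, 1.0 / 3.0)
```

At $\theta = 0$ both sides equal 1, and the bound stays above the exact MGF for all $\theta \ge 0$, as (25) guarantees.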

Theorem 4 Let $C_i(x^*) = b_i - \sum_j \bar a_{ij} x^*_j$ for all $i$ and $\kappa_{ij} = \hat a_{ij} x^*_j$ for all $i$ and $j \in J_i$.

(a) Let $\sigma^2_{ij} = \mathrm{Var}(\eta_{ij})$ for all $i$ and $j \in J_i$. Then for the $i$th constraint,
\[
\mathbb{P}\Big[\sum_j \tilde a_{ij} x^*_j > b_i\Big] \le \exp\Big(-\sup_{\theta\ge 0}\Big\{\theta C_i(x^*) - \sum_{j\in J_i}\log\Big(\frac{1}{1+\sigma^2_{ij}}\, e^{-\theta|\kappa_{ij}|\sigma^2_{ij}} + \frac{\sigma^2_{ij}}{1+\sigma^2_{ij}}\, e^{\theta|\kappa_{ij}|}\Big)\Big\}\Big). \tag{26}
\]

(b) For the $i$th constraint,
\[
\mathbb{P}\Big[\sum_j \tilde a_{ij} x^*_j > b_i\Big] \le \exp\Big(-\sup_{\theta\ge 0}\Big\{\theta C_i(x^*) - \sum_{j\in J_i}\log\Big(\frac{1}{2}\, e^{-\theta\kappa_{ij}} + \frac{1}{2}\, e^{\theta\kappa_{ij}}\Big)\Big\}\Big). \tag{27}
\]

(c) Bound (26) ≤ Bound (27) < Bound (10).

Proof: (a) Using $\eta_{ij} = (\tilde a_{ij} - \bar a_{ij})/\hat a_{ij}$, we have
\[
\mathbb{P}\Big[\sum_j \tilde a_{ij} x^*_j > b_i\Big] = \mathbb{P}\Big[\sum_j \bar a_{ij} x^*_j + \sum_{j\in J_i}\hat a_{ij} x^*_j \eta_{ij} > b_i\Big] \le \mathbb{P}\Big[\sum_{j\in J_i}\kappa_{ij}\eta_{ij} \ge C_i(x^*)\Big].
\]
From Markov's inequality, for all $\theta \ge 0$ we obtain
\[
\mathbb{P}\Big[\sum_{j\in J_i}\kappa_{ij}\eta_{ij} \ge C_i(x^*)\Big] \le e^{-\theta C_i(x^*)}\,\mathbb{E}\Big[e^{\theta\sum_{j\in J_i}\kappa_{ij}\eta_{ij}}\Big] = e^{-\theta C_i(x^*)}\prod_{j\in J_i}\mathbb{E}\big[e^{\theta\kappa_{ij}\eta_{ij}}\big].
\]
Let $X = \kappa_{ij}\eta_{ij}$. Since $\eta_{ij}\in[-1,1]$, $X \le |\kappa_{ij}|$. Moreover, $\mathbb{E}[X] = \kappa_{ij}\mathbb{E}[\eta_{ij}] = 0$ and $\mathrm{Var}(X) = \kappa^2_{ij}\sigma^2_{ij}$. Then, using the inequality (25), we have
\[
\mathbb{E}\big[e^{\theta\kappa_{ij}\eta_{ij}}\big] \le \frac{\kappa^2_{ij}}{\kappa^2_{ij}+\kappa^2_{ij}\sigma^2_{ij}}\, e^{-\theta\kappa^2_{ij}\sigma^2_{ij}/|\kappa_{ij}|} + \frac{\kappa^2_{ij}\sigma^2_{ij}}{\kappa^2_{ij}+\kappa^2_{ij}\sigma^2_{ij}}\, e^{\theta|\kappa_{ij}|}.
\]
Combining the results,
\begin{align*}
\mathbb{P}\Big[\sum_j \tilde a_{ij} x^*_j > b_i\Big]
&\le e^{-\theta C_i(x^*)}\prod_{j\in J_i}\Big\{\frac{\kappa^2_{ij}}{\kappa^2_{ij}+\kappa^2_{ij}\sigma^2_{ij}}\, e^{-\theta\kappa^2_{ij}\sigma^2_{ij}/|\kappa_{ij}|} + \frac{\kappa^2_{ij}\sigma^2_{ij}}{\kappa^2_{ij}+\kappa^2_{ij}\sigma^2_{ij}}\, e^{\theta|\kappa_{ij}|}\Big\} \\
&= \exp\Big(-\theta C_i(x^*) + \sum_{j\in J_i}\log\Big(\frac{1}{1+\sigma^2_{ij}}\, e^{-\theta|\kappa_{ij}|\sigma^2_{ij}} + \frac{\sigma^2_{ij}}{1+\sigma^2_{ij}}\, e^{\theta|\kappa_{ij}|}\Big)\Big).
\end{align*}

The bound is obtained by optimizing over $\theta$.

(b) Since $\eta_{ij}\in[-1,1]$, $\mathrm{Var}(\kappa_{ij}\eta_{ij}) \le \kappa^2_{ij}$. Following the same steps as in part (a), we have for all $\theta \ge 0$
\begin{align*}
\mathbb{P}\Big[\sum_j \tilde a_{ij} x^*_j > b_i\Big]
&\le e^{-\theta C_i(x^*)}\prod_{j\in J_i}\mathbb{E}\big[e^{\theta\kappa_{ij}\eta_{ij}}\big] \\
&\le e^{-\theta C_i(x^*)}\prod_{j\in J_i}\Big\{\frac{\kappa^2_{ij}}{2\kappa^2_{ij}}\, e^{-\theta\kappa^2_{ij}/|\kappa_{ij}|} + \frac{\kappa^2_{ij}}{2\kappa^2_{ij}}\, e^{\theta|\kappa_{ij}|}\Big\} \\
&= \exp\Big(-\theta C_i(x^*) + \sum_{j\in J_i}\log\Big(\frac{1}{2}\, e^{-\theta\kappa_{ij}} + \frac{1}{2}\, e^{\theta\kappa_{ij}}\Big)\Big),
\end{align*}
where the second inequality follows from the inequality (25) with $X = \kappa_{ij}\eta_{ij}$. Optimizing over $\theta$, we obtain the bound.

(c) To prove Bound (26) ≤ Bound (27), we will show that for any fixed $\theta$ and $\kappa_{ij}$,
\[
\frac{1}{1+\sigma^2_{ij}}\, e^{-\theta|\kappa_{ij}|\sigma^2_{ij}} + \frac{\sigma^2_{ij}}{1+\sigma^2_{ij}}\, e^{\theta|\kappa_{ij}|} \le \frac{1}{2}\, e^{-\theta\kappa_{ij}} + \frac{1}{2}\, e^{\theta\kappa_{ij}}. \tag{28}
\]
To simplify notation, let $x \triangleq \sigma^2_{ij}$ and denote the left-hand side of (28) by $f(x)$. Note that $0 \le x \le 1$. We will first show that $f(x)$ is a nondecreasing function. Since
\[
\frac{df(x)}{dx} = -\frac{1+\theta|\kappa_{ij}|+\theta|\kappa_{ij}|x}{(1+x)^2}\, e^{-\theta|\kappa_{ij}|x} + \frac{1}{(1+x)^2}\, e^{\theta|\kappa_{ij}|},
\]
showing $df(x)/dx \ge 0$ is equivalent to showing
\[
e^{\theta|\kappa_{ij}|+\theta|\kappa_{ij}|x} \ge 1 + \theta|\kappa_{ij}| + \theta|\kappa_{ij}|x.
\]
This inequality holds due to the fact that $e^y \ge 1 + y$ for any $y$. Therefore the maximum of $f(x)$ is attained at $x = 1$, from which the inequality (28) follows.

To show Bound (27) < Bound (10), notice that Bound (27) can be viewed as a special case of Bound (14), where the probability distribution of $\eta_{ij}$ is defined as
\[
f_{\eta_{ij}}(\eta) =
\begin{cases}
\frac{1}{2} & \text{if } \eta = -1, 1, \\
0 & \text{otherwise}.
\end{cases}
\]
Then, by invoking Theorem 3(b) and Theorem 2(b), it follows that Bound (27) < Bound (10). $\square$

The function $\log\big(\frac{1}{1+\sigma^2_{ij}}\, e^{-\theta|\kappa_{ij}|\sigma^2_{ij}} + \frac{\sigma^2_{ij}}{1+\sigma^2_{ij}}\, e^{\theta|\kappa_{ij}|}\big)$ in Bound (26) can be viewed as the logarithmic moment generating function of the random variable $Y_{ij}$ whose probability distribution is
\[
f_{Y_{ij}}(y) =
\begin{cases}
\frac{1}{1+\sigma^2_{ij}} & \text{if } y = -|\kappa_{ij}|\sigma^2_{ij}, \\
\frac{\sigma^2_{ij}}{1+\sigma^2_{ij}} & \text{if } y = |\kappa_{ij}|.
\end{cases}
\]
This observation implies that $\sup_{\theta\ge 0}\{\cdot\}$ in the bound is a convex optimization problem. Consequently, Bound (26) can be computed efficiently. A similar argument can be made for Bound (27).
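As an illustration of how cheaply these suprema can be computed, the sketch below (our own; the function names and the toy data in the usage note are hypothetical) evaluates Bounds (26) and (27) by ternary search, which is valid because both objectives are concave in $\theta$.

```python
import math

def _sup_exp(phi, theta_hi=50.0, iters=100):
    # Returns exp(-sup_{0 ≤ θ ≤ theta_hi} φ(θ)) for concave φ
    lo, hi = 0.0, theta_hi
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if phi(m1) < phi(m2):
            lo = m1
        else:
            hi = m2
    return math.exp(-phi(0.5 * (lo + hi)))

def bound26(C, kappa, sigma2):
    """Moment-dependent Bound (26); needs Var(η_ij) = σ²_ij."""
    def phi(t):
        s = 0.0
        for k, v in zip(kappa, sigma2):
            s += math.log(math.exp(-t * abs(k) * v) / (1 + v)
                          + v * math.exp(t * abs(k)) / (1 + v))
        return t * C - s
    return _sup_exp(phi)

def bound27(C, kappa):
    """Bound (27): second moments not needed (worst case σ²_ij = 1)."""
    def phi(t):
        return t * C - sum(math.log(math.cosh(t * k)) for k in kappa)
    return _sup_exp(phi)
```

On toy data with $C_i(x^*) = 1$, $\kappa = (0.5, 0.4, 0.3)$, and $\sigma^2 = (0.2, 0.5, 1.0)$, one observes Bound (26) ≤ Bound (27), as Theorem 4(c) predicts.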


2.4 The Inverse Problem

We have discussed so far how to obtain “good” bounds on the constraint violation probability for a given $\Gamma$. We now change the perspective and consider the inverse problem: how to find a $\Gamma$ that maximizes $z_R(\Gamma)$ when the maximum allowable violation probability for the $i$th constraint, $\epsilon_i$, is given for all $i$. Using Bound (12), the inverse problem is posed as follows: find a $\Gamma$ such that
\begin{align}
\text{maximize} \quad & z_R(\Gamma) \tag{29} \\
\text{subject to} \quad & \text{Bound (12)} \le \epsilon_i, \quad i = 1, \ldots, m. \notag
\end{align}
Since $z_R(\Gamma)$ is a nonincreasing function of $\Gamma$ (cf. Lemma 2), one needs to seek as small a $\Gamma$ as possible that keeps the constraints of (29) satisfied. Such a $\Gamma$ can be found through binary search, because Bound (12) is monotone in $\Gamma_i$ (cf. Corollary 1).

As Bound (14) is stronger than Bound (12), using the former in place of the latter in (29) would, in principle, result in a better objective value. However, finding an optimal $\Gamma$ in this case appears to be hard, unless the LP problem (1) has some special structure (see, for instance, Section 3.3, where this is the case). A suboptimal approach based on binary search is as follows. First determine the smallest $\Gamma$, denoted by $\Gamma^D$, for which Bound (12) $\le \epsilon_i$ for all $i$ (cf. Corollary 1). Set $\Gamma = \Gamma^D/2$, solve (8) with this $\Gamma$, and compute Bound (14) for all $i$. If Bound (14) yields a value smaller than $\epsilon_i$ for all $i$, set $\Gamma = \Gamma/2$; otherwise, set $\Gamma = (\Gamma + \Gamma^D)/2$. This process is repeated until the interval for $\Gamma$ becomes reasonably small. The suboptimality of this approach stems from simultaneously increasing or decreasing all the components of $\Gamma$ in the same proportion. Thus, one might be tempted to amend the approach with a “fine-tuning” step that performs binary search only on some $\Gamma_i$'s. However, this does not guarantee that an optimal $\Gamma$ is found, for two reasons: first, Bound (14) for the $i$th constraint is a function of $\Gamma$, not of $\Gamma_i$ alone; second, in general, Bound (14) is not monotone in $\Gamma_i$.

We note a close relation between the inverse problem (29) and the chance constrained problem$^4$
\begin{align}
\text{maximize} \quad & c'x \tag{30} \\
\text{subject to} \quad & \mathbb{P}\Big[\sum_j \tilde a_{ij} x_j > b_i\Big] \le \epsilon_i, \quad i = 1, \ldots, m, \notag \\
& l \le x \le u. \notag
\end{align}
Under the given constraint violation probabilities, the chance constrained problem (30) produces a better optimal objective value than the inverse problem (29). However, the chance constrained problem is hard to solve, because its feasible set is nonconvex and/or because a deterministic analytical expression for $\mathbb{P}\big[\sum_j \tilde a_{ij} x_j > b_i\big] \le \epsilon_i$ is difficult to obtain. This intractability has been observed in several works; see Ben-Tal et al. [BTGN09]. As a result, one then resorts to either convex relaxations, as in Nemirovski and Shapiro [NS06], Ben-Tal et al. [BTGN09, Chap. 2], and Chen et al. [CSS07], or Monte-Carlo sampling, as in Calafiore and El Ghaoui [CG06]. More recent work on chance constraints has also looked at “joint” chance constraints, where one seeks to bound the probability of violating a set of constraints rather than an individual one; see Chen et al. [CSST10] and Zymler et al. [ZKR13].

$^4$ For more on chance constrained problems, see Kall and Wallace [KW94], Birge and Louveaux [BL97], and Nemirovski and Shapiro [NS06].

The inverse problem we presented can serve as an approximation to the chance constrained problem. There are advantages in using the inverse problem. Once $\Gamma$ is determined, the optimization problem one needs to solve remains an LP, whereas other convex relaxations of (30) often “lift” the class of the problem from an LP to a more complex optimization problem (e.g., an SOCP). Another benefit of the inverse problem is that, once $\Gamma$ is determined, multiple problem instances arising from different $c$, $b$, $l$, and $u$ can be solved using the same $\Gamma$, because changes in those data do not affect Bound (12).

2.5 Numerical Results

We first compare the two a priori bounds, Bound (10) and Bound (12), for $\Gamma_i = 5$ and varying $|J_i|$. We consider three symmetric probability distributions for a random $\tilde a_{ij}$: triangle, uniform, and reverse-triangle. (The density functions of the triangle and reverse-triangle distributions are depicted in Fig. 1.)

Fig. 1 The triangle (left) and reverse-triangle (right) distributions.

In Table 1, Bound (12)-T (Bound (12)-U, Bound (12)-R, respectively) denotes the value of Bound (12) when all the random $\tilde a_{ij}$'s have the triangle (uniform, reverse-triangle, respectively) distribution. The results show that Bound (12) is tighter than Bound (10); in particular, when all the random $\tilde a_{ij}$'s have the triangle distribution, the differences between the two bounds are most significant. We also observe that as the variance of the probability distribution increases (the triangle distribution having the lowest variance and the reverse-triangle the highest), Bound (12) increases as well.

We next compare the minimum $\Gamma_i$'s for which Bound (10) and Bound (12), respectively, are at most $\epsilon_i = 0.05$. Recall that such $\Gamma_i$'s can be determined by binary search, because both bounds are monotone in $\Gamma_i$. Table 2 shows these $\Gamma_i$'s for various $|J_i|$ and the three symmetric probability distributions. The $\Gamma_i$ from Bound (12) is always smaller than that from Bound (10), regardless of the probability distribution. It is also seen that the $\Gamma_i$ from Bound (12) becomes larger as the variance of the probability distribution increases.

We now assess the estimation discount on the price of robustness for randomly generated LP problems; i.e., how much the objective value of the robust solution


Table 1 A priori bounds on the constraint violation probability when $\Gamma_i = 5$.

|J_i|   Bound (10)   Bound (12)-T   Bound (12)-U   Bound (12)-R
  7       0.17        0.000002       0.0013         0.013
 10       0.29        0.00028        0.017          0.067
 20       0.54        0.022          0.15           0.28
 30       0.66        0.08           0.28           0.43
 40       0.73        0.15           0.39           0.53
 50       0.78        0.22           0.47           0.61

Table 2 Minimum $\Gamma_i$'s for which $\mathbb{P}\big[\sum_j \tilde a_{ij} x^*_j > b_i\big] \le 0.05$.

|J_i|   From Bound (10)   From Bound (12)-T   From Bound (12)-U   From Bound (12)-R
  10         7.76               3.12                4.34                5.25
  50        17.34               7.06                9.97               12.16
 100        24.52              10.01               14.12               17.29
 200        34.67              14.17               20.02               24.47
 500        54.81              22.34               31.62               38.70
1000        77.64              31.62               44.68               54.81

improves when Bound (12) is used instead of Bound (10) (cf. Corollary 1). Consider a $10 \times 10$ matrix $A$, where all elements are assumed to be random, i.e., $|J_i| = 10$ for all $i$. We further assume that all $\tilde a_{ij}$ are uniform random variables and that $\mathbb{P}\big[\sum_j \tilde a_{ij} x^*_j > b_i\big] \le 0.05$ is required for all $i$.

A data set $(c, b, l, u, \bar a_{ij}, \hat a_{ij})$, which constitutes a problem instance, is specified as follows: $c_j$ is randomly selected from $[-50, 50]$ for all $j$; $b_i$ is randomly selected from $[0, 100]$ for all $i$; $l_j$ is randomly chosen from $[-20, 0]$ for all $j$; $u_j$ is randomly chosen from $[0, 20]$ for all $j$; $\bar a_{ij}$ is randomly drawn from $[-100, 100]$ for all $i$ and $j$; and $\hat a_{ij}$ is randomly drawn from $[1, 50]$ for all $i$ and $j$. In this way, we generate 20 data sets (i.e., 20 problem instances).

In order to guarantee at most a 0.05 violation probability for each constraint, Bound (10) and Bound (12) require $\Gamma^B_i = 7.76$ and $\Gamma^D_i = 4.34$, respectively, for each $i$ (see Table 2). Table 3 reports $z_R(\Gamma^B)$ and $z_R(\Gamma^D)$ for each problem instance, as well as the averages over all the problem instances. The results demonstrate that by using the distribution-dependent bound (12), one can improve the objective value by more than 50% in some instances and by 23% on average, compared to using the distribution-independent bound (10), without compromising the level of robustness of the solution.

For each problem instance, we also compute Bound (14) for each $i$ using the robust solution $x^*$ of (8), where $\Gamma$ is set to $\Gamma^D$. Among these bound values, the maximum is reported in Table 3. It is notable that the solution-dependent bound (14) yields values significantly smaller than 0.05.

3 An Inventory Control Problem with Quality of Service Constraints

3.1 Problem Setting

Consider a single-station, single-item stochastic inventory control problem over $N$ discrete time periods. We use the following notation: let $x_k$ denote the inventory


Table 3 Robust objective values and solution-dependent bound.

Instance   z_R(Γ^B)   z_R(Γ^D)   (z_R(Γ^D)−z_R(Γ^B))/z_R(Γ^B) × 100%   Bound (14)
   1         25.71      34.58                 34.5%                     0.001609
   2         31.86      36.69                 15.2%                     0.001087
   3        308.16     353.55                 14.7%                     0.000088
   4         31.36      45.58                 45.4%                     0.000600
   5         22.13      24.96                 12.8%                     0.002661
   6         95.83     107.49                 12.2%                     0.000563
   7         53.58      61.60                 15.0%                     0.002647
   8        182.74     229.15                 25.4%                     0.000730
   9         84.90     122.26                 44.0%                     0.000553
  10        387.36     464.09                 19.8%                     0.000081
  11         35.37      49.86                 41.0%                     0.006664
  12        103.20     113.74                 10.2%                     0.000174
  13         10.00      11.54                 15.4%                     0.003834
  14         66.37      78.61                 18.4%                     0.001264
  15         52.79      61.72                 16.9%                     0.000521
  16         48.71      59.85                 22.9%                     0.001783
  17         18.53      23.77                 28.3%                     0.002384
  18         71.07     105.99                 49.1%                     0.002786
  19         66.32      68.07                  2.6%                     0.000099
  20         96.72     148.01                 53.0%                     0.000223

Average      89.64     110.06                 22.8%                        -

at the beginning of the $k$th period; $w_k$ the demand during the $k$th period; $u_k$ the stock ordered at the beginning of the $k$th period; $c$ the ordering cost per unit stock; and $h$ the holding cost per unit stock per period.

We assume that each demand $w_k$ is an independent random variable taking values in $[\bar w_k - \hat w_k, \bar w_k + \hat w_k]$, where $\bar w_k > \hat w_k > 0$ and $\bar w_k = \mathbb{E}[w_k]$. The following symmetry assumption on the probability distribution of $w_k$ is analogous to Assumption A and will be needed for some of the results later on.

Assumption B For all $k$, the probability distribution of $w_k$ is symmetric over $[\bar w_k - \hat w_k, \bar w_k + \hat w_k]$, that is,
\[
F_{w_k}(\bar w_k - w) = 1 - F_{w_k}(\bar w_k + w), \qquad 0 \le w \le \hat w_k,
\]
where $F_{w_k}$ is the cumulative distribution function of $w_k$.

We also assume that the stock ordered at the beginning of the $k$th period is delivered instantly, i.e., the lead time is zero. If we further assume that excess demand is backlogged and filled as soon as additional stock becomes available, the inventory evolves according to
\[
x_{k+1} = x_k + u_k - w_k, \qquad k = 1, \ldots, N. \tag{31}
\]
The following quality of service (QoS) constraints will be incorporated into the inventory control problem:
\[
x_k + u_k \ge w_k, \qquad k = 1, \ldots, N.
\]

We consider an inventory control problem at the strategic level, where the ordering decisions must be made before any of the demands are observed. A few examples that fit this case are when it takes a considerable amount of time to produce (or ship) items, or when the supplier allows discounts for orders placed in advance. We will mention how to adjust our model to the operational inventory control problem at the end of Section 3.3.

The strategic inventory control problem with the QoS constraints, which aims to minimize total ordering and holding costs, is formulated as the following LP problem with data uncertainty:
\begin{align}
\text{minimize} \quad & \sum_{k=1}^{N} (c u_k + h x_{k+1}) \tag{32} \\
\text{subject to} \quad & x_{k+1} = x_k + u_k - w_k, \quad k = 1, \ldots, N, \notag \\
& x_k + u_k \ge w_k, \quad k = 1, \ldots, N, \notag \\
& u_k \ge 0, \quad k = 1, \ldots, N, \notag
\end{align}
where $x_1$ is given. From the inventory evolution equations (31), we obtain $x_{k+1} = x_1 + \sum_{j=1}^{k}(u_j - w_j)$. Thus we can eliminate $x_k$, $k = 2, \ldots, N$, from (32), yielding
\begin{align}
\text{minimize} \quad & \sum_{k=1}^{N} \big(c + h(N-k+1)\big) u_k + N h x_1 - h \sum_{k=1}^{N} (N-k+1) w_k \tag{33} \\
\text{subject to} \quad & \sum_{j=1}^{k} u_j \ge -x_1 + \sum_{j=1}^{k} w_j, \quad k = 1, \ldots, N, \notag \\
& u_k \ge 0, \quad k = 1, \ldots, N. \notag
\end{align}
By defining $a_k \triangleq N - k + 1$, $k = 1, \ldots, N$, and introducing the auxiliary variable $z$ for the objective function, we can rewrite (33) as
\begin{align}
\text{minimize} \quad & z \tag{34} \\
\text{subject to} \quad & \sum_{j=1}^{k} u_j \ge -x_1 + \sum_{j=1}^{k} w_j, \quad k = 1, \ldots, N, \notag \\
& z - \sum_{k=1}^{N} (c + h a_k) u_k \ge N h x_1 - h \sum_{k=1}^{N} a_k w_k, \notag \\
& u_k \ge 0, \quad k = 1, \ldots, N. \notag
\end{align}
If we view (34) in terms of the generic LP problem (1), the vector $b$ is uncertain, because the demands $w_k$ are random. The coefficients of the decision variables, $u = (u_1, \ldots, u_N)$ and $z$, are not subject to uncertainty, i.e., the matrix $A$ is deterministic. As discussed in Section 2.1, we may transform (34) in such a way that only the elements of the matrix $A$ are random. In this particular problem, however, it will be more convenient not to do so.

3.2 The Robust Inventory Control Problem

When we presented the robust formulation for the LP problem (1) in Section 2.2, the uncertainty budget $\Gamma_i$ was introduced for each row $i$ of the matrix $A$, i.e., for each constraint of the LP problem. This was because each constraint has a distinct set of random elements; put differently, the random elements in row $i$ affect only the $i$th constraint. This is not the case for the inventory control problem (34): each random element $w_k$ influences more than one constraint. For instance, $w_1$ is involved in all of the constraints. Therefore, we believe it is more appropriate to use a single uncertainty budget $\Gamma$ globally for the problem. (We point out that Bertsimas and Thiele [BT06] used a different $\Gamma_k$ for each period $k$.)

The uncertainty budget $\Gamma \in [0, N]$ imposes the uncertainty budget constraint
\[
\sum_{k=1}^{N} \frac{|w_k - \bar w_k|}{\hat w_k} \le \Gamma.
\]
Define the restricted uncertainty set $R(\Gamma)$ as
\[
R(\Gamma) \triangleq \Big\{ w \;\Big|\; w_k \in [\bar w_k - \hat w_k, \bar w_k + \hat w_k],\ \forall k;\ \sum_{k} \frac{|w_k - \bar w_k|}{\hat w_k} \le \Gamma \Big\}.
\]

The robust formulation is then given by
\begin{align}
z_R(\Gamma) = \text{minimize} \quad & z \tag{35} \\
\text{subject to} \quad & \sum_{j=1}^{k} u_j \ge \max_{w \in R(\Gamma)} \Big\{ -x_1 + \sum_{j=1}^{k} w_j \Big\}, \quad k = 1, \ldots, N, \notag \\
& z - \sum_{k=1}^{N} (c + h a_k) u_k \ge \max_{w \in R(\Gamma)} \Big\{ N h x_1 - h \sum_{k=1}^{N} a_k w_k \Big\}, \notag \\
& u_k \ge 0, \quad k = 1, \ldots, N. \notag
\end{align}

For $k = 1, \ldots, \lfloor\Gamma\rfloor$, we have
\[
\max_{w \in R(\Gamma)} \Big\{ -x_1 + \sum_{j=1}^{k} w_j \Big\} = -x_1 + \sum_{j=1}^{k} (\bar w_j + \hat w_j).
\]
For $k = \lfloor\Gamma\rfloor + 1, \ldots, N$, let $S_k$ be a subset of the index set $\{1, \ldots, k\}$ such that $|S_k| = \lfloor\Gamma\rfloor$. In addition, let $t_k$ be an index such that $t_k \in \{1, \ldots, k\} \setminus S_k$. Then we have
\[
\max_{w \in R(\Gamma)} \Big\{ -x_1 + \sum_{j=1}^{k} w_j \Big\} = -x_1 + \sum_{j=1}^{k} \bar w_j + \max_{S_k \cup \{t_k\}} \Big\{ \sum_{j \in S_k} \hat w_j + (\Gamma - \lfloor\Gamma\rfloor)\hat w_{t_k} \Big\}.
\]
We also have
\[
\max_{w \in R(\Gamma)} \Big\{ N h x_1 - h \sum_{k=1}^{N} a_k w_k \Big\} = N h x_1 - h \sum_{k=1}^{N} a_k \bar w_k + h \max_{S_N \cup \{t_N\}} \Big\{ \sum_{j \in S_N} a_j \hat w_j + (\Gamma - \lfloor\Gamma\rfloor) a_{t_N} \hat w_{t_N} \Big\}.
\]

Hence (35) becomes
\begin{align}
z_R(\Gamma) = \text{minimize} \quad & z \tag{36} \\
\text{subject to} \quad & \sum_{j=1}^{k} u_j \ge -x_1 + \sum_{j=1}^{k} (\bar w_j + \hat w_j), \quad k = 1, \ldots, \lfloor\Gamma\rfloor, \notag \\
& \sum_{j=1}^{k} u_j \ge -x_1 + \sum_{j=1}^{k} \bar w_j + \max_{S_k \cup \{t_k\}} \Big\{ \sum_{j \in S_k} \hat w_j + (\Gamma - \lfloor\Gamma\rfloor)\hat w_{t_k} \Big\}, \quad k = \lfloor\Gamma\rfloor + 1, \ldots, N, \notag \\
& z - \sum_{k=1}^{N} (c + h a_k) u_k \ge N h x_1 - h \sum_{k=1}^{N} a_k \bar w_k + h \max_{S_N \cup \{t_N\}} \Big\{ \sum_{j \in S_N} a_j \hat w_j + (\Gamma - \lfloor\Gamma\rfloor) a_{t_N} \hat w_{t_N} \Big\}, \notag \\
& u_k \ge 0, \quad k = 1, \ldots, N. \notag
\end{align}

We refer to (36) as the robust inventory control problem.

Let us characterize the optimal ordering quantities of the robust inventory control problem. To make the exposition simple, define
\begin{align*}
M(k) &\triangleq \max_{S_k \cup \{t_k\}} \Big\{ \sum_{j \in S_k} \hat w_j + (\Gamma - \lfloor\Gamma\rfloor)\hat w_{t_k} \Big\}, \quad k = \lfloor\Gamma\rfloor + 1, \ldots, N, \\
A(N) &\triangleq \max_{S_N \cup \{t_N\}} \Big\{ \sum_{j \in S_N} a_j \hat w_j + (\Gamma - \lfloor\Gamma\rfloor) a_{t_N} \hat w_{t_N} \Big\}.
\end{align*}

Eliminating the auxiliary variable $z$ from (36), we obtain
\begin{align}
\text{minimize} \quad & \sum_{k=1}^{N} (c + h a_k) u_k + N h x_1 - h \sum_{k=1}^{N} a_k \bar w_k + h A(N) \tag{37} \\
\text{subject to} \quad & \sum_{j=1}^{k} u_j \ge -x_1 + \sum_{j=1}^{k} (\bar w_j + \hat w_j), \quad k = 1, \ldots, \lfloor\Gamma\rfloor, \notag \\
& \sum_{j=1}^{k} u_j \ge -x_1 + \sum_{j=1}^{k} \bar w_j + M(k), \quad k = \lfloor\Gamma\rfloor + 1, \ldots, N, \notag \\
& u_k \ge 0, \quad k = 1, \ldots, N. \notag
\end{align}

Define the constant demands $\tilde w_k$ such that
\[
\sum_{j=1}^{k} \tilde w_j =
\begin{cases}
\sum_{j=1}^{k} (\bar w_j + \hat w_j), & k = 1, \ldots, \lfloor\Gamma\rfloor, \\
\sum_{j=1}^{k} \bar w_j + M(k), & k = \lfloor\Gamma\rfloor + 1, \ldots, N,
\end{cases}
\]
from which we obtain
\[
\tilde w_k =
\begin{cases}
\bar w_k + \hat w_k & \text{if } k = 1, \ldots, \lfloor\Gamma\rfloor, \\
\bar w_{\lfloor\Gamma\rfloor+1} + M(\lfloor\Gamma\rfloor + 1) - \sum_{j=1}^{\lfloor\Gamma\rfloor} \hat w_j & \text{if } k = \lfloor\Gamma\rfloor + 1, \\
\bar w_k + M(k) - M(k-1) & \text{if } k = \lfloor\Gamma\rfloor + 2, \ldots, N.
\end{cases} \tag{38}
\]
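Because the maximization defining $M(k)$ is solved by taking the $\lfloor\Gamma\rfloor$ largest deviations among the first $k$ periods plus a $(\Gamma - \lfloor\Gamma\rfloor)$ fraction of the next largest, the constant demands (38) are straightforward to compute. A minimal sketch (our own function names; the data in the usage note are toy values):

```python
import math

def M(w_hat, k, Gamma):
    """M(k): worst-case cumulative deviation over periods 1..k under budget Γ —
    the ⌊Γ⌋ largest deviations plus a (Γ-⌊Γ⌋) fraction of the next largest."""
    fl = math.floor(Gamma)
    devs = sorted(w_hat[:k], reverse=True)
    frac = (Gamma - fl) * devs[fl] if fl < k else 0.0
    return sum(devs[:fl]) + frac

def equivalent_demands(w_bar, w_hat, Gamma):
    """The constant demands of (38), given means w_bar and deviations w_hat."""
    N, fl = len(w_bar), math.floor(Gamma)
    ew = []
    for k in range(1, N + 1):
        if k <= fl:
            ew.append(w_bar[k - 1] + w_hat[k - 1])
        elif k == fl + 1:
            ew.append(w_bar[k - 1] + M(w_hat, k, Gamma) - sum(w_hat[:fl]))
        else:
            ew.append(w_bar[k - 1] + M(w_hat, k, Gamma) - M(w_hat, k - 1, Gamma))
    return ew
```

For example, with means $(10, 12, 8, 11)$, deviations $(2, 3, 1, 4)$, and $\Gamma = 2.5$, the cumulative sums of the returned demands reproduce the defining relations above (38) period by period.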


Rewriting (37) in terms of the demands $\tilde w_k$, we have
\begin{align*}
\text{minimize} \quad & \sum_{k=1}^{N} (c + h a_k) u_k + N h x_1 - h \sum_{k=1}^{N} a_k \tilde w_k + C \\
\text{subject to} \quad & \sum_{j=1}^{k} u_j \ge -x_1 + \sum_{j=1}^{k} \tilde w_j, \quad k = 1, \ldots, N, \\
& u_k \ge 0, \quad k = 1, \ldots, N,
\end{align*}
where
\[
C = h \sum_{k=1}^{\lfloor\Gamma\rfloor} (\lfloor\Gamma\rfloor - k + 1)\hat w_k + h \sum_{k=\lfloor\Gamma\rfloor+1}^{N-1} M(k) + a_N h M(N) + h A(N).
\]

Since $C$ is a constant, it can be ignored in the minimization. Introducing the auxiliary variable $z$ for the objective function minus $C$, we have
\begin{align}
\text{minimize} \quad & z \tag{39} \\
\text{subject to} \quad & \sum_{j=1}^{k} u_j \ge -x_1 + \sum_{j=1}^{k} \tilde w_j, \quad k = 1, \ldots, N, \notag \\
& z - \sum_{k=1}^{N} (c + h a_k) u_k \ge N h x_1 - h \sum_{k=1}^{N} a_k \tilde w_k, \notag \\
& u_k \ge 0, \quad k = 1, \ldots, N. \notag
\end{align}

Theorem 5 Assume that $x_1 < \sum_{k=1}^{N} \tilde w_k$, where the $\tilde w_k$ are given in (38). The optimal ordering quantities $u^*_k$ of the robust inventory control problem (36) correspond to the base-stock policy
\[
u^*_k =
\begin{cases}
\tilde w_k - x_k & \text{if } x_k < \tilde w_k, \\
0 & \text{otherwise}.
\end{cases} \tag{40}
\]
The cost of this optimal policy is $c\big(\sum_{k=1}^{N} \tilde w_k - x_1\big) + h \sum_{k=1}^{L} \big(x_1 - \sum_{j=1}^{k} \tilde w_j\big) + C$, where $L = \max\big\{k \,\big|\, x_1 - \sum_{j=1}^{k} \tilde w_j \ge 0\big\}$ and $C$ is given above.

Proof: From (40), $u^*_k$ are recursively determined as
\[
u^*_k =
\begin{cases}
0 & \text{if } k = 1, \ldots, L,\\
-x_1 + \sum_{j=1}^{L+1} \tilde{w}_j & \text{if } k = L + 1,\\
\tilde{w}_k & \text{if } k = L + 2, \ldots, N.
\end{cases}
\qquad (41)
\]

Let $z^* = \sum_{k=1}^{N} (c + h a_k) u^*_k + N h x_1 - h \sum_{k=1}^{N} a_k \tilde{w}_k$. Then $(u^*, z^*)$ is a feasible solution of (39). The dual of (39) is
\[
\text{maximize}\quad \sum_{k=1}^{N} \lambda_k \Bigl( -x_1 + \sum_{j=1}^{k} \tilde{w}_j \Bigr) + \mu \Bigl( N h x_1 - h \sum_{k=1}^{N} a_k \tilde{w}_k \Bigr)
\]

\[
\begin{aligned}
\text{subject to}\quad & \sum_{j=k}^{N} \lambda_j \le \mu (c + h a_k), \quad k = 1, \ldots, N,\\
& \mu = 1, \quad \lambda_k \ge 0, \quad k = 1, \ldots, N,
\end{aligned}
\]

where $\lambda_k$ are the dual variables for the first N constraints of (39) and $\mu$ for the next constraint. Consider a dual feasible solution $(\lambda^*, \mu^*)$ that satisfies the following set of equations:

\[
\mu^* = 1, \qquad \lambda^*_k = 0, \quad k = 1, \ldots, L,
\]
\[
\sum_{j=k}^{N} \lambda^*_j = \mu^* (c + h a_k), \quad k = L + 1, \ldots, N,
\]

from which we obtain

\[
\lambda^*_k =
\begin{cases}
0 & \text{if } k = 1, \ldots, L,\\
h & \text{if } k = L + 1, \ldots, N - 1,\\
c + h & \text{if } k = N.
\end{cases}
\]

It is easy to verify that the primal-dual feasible solution pair, $(u^*, z^*)$ and $(\lambda^*, \mu^*)$, satisfies the complementary slackness conditions. Therefore $(u^*, z^*)$ is an optimal solution of (39), which establishes the optimality of the policy (40). The cost of the optimal policy is equal to $z^* + C = \sum_{k=1}^{N} (c + h a_k) u^*_k + N h x_1 - h \sum_{k=1}^{N} a_k \tilde{w}_k + C$. Using (41) and after some algebra, we obtain the desired result. ⊓⊔
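The base-stock recursion (41) and the cost expression of Theorem 5 are straightforward to compute once the constant demands $\tilde{w}_k$ of (38) are available. The sketch below uses our own helper names; the constant C is passed in separately rather than recomputed.

```python
def base_stock_orders(x1, w_tilde):
    """Ordering quantities (41): order nothing while the initial stock lasts,
    then order up to the constant demand w~_k in each period."""
    orders, x = [], x1
    for wt in w_tilde:
        u = max(wt - x, 0.0)      # policy (40): order up to w~_k
        orders.append(u)
        x = x + u - wt            # inventory carried into the next period
    return orders

def policy_cost(x1, w_tilde, c, h, C=0.0):
    """Cost of Theorem 5: c(sum w~_k - x1) + h sum_{k<=L}(x1 - sum_{j<=k} w~_j) + C."""
    holding, run = 0.0, 0.0
    for wt in w_tilde:
        run += wt
        if x1 - run >= 0:         # periods k = 1, ..., L
            holding += x1 - run
    return c * (sum(w_tilde) - x1) + h * holding + C
```

For example, with $x_1 = 25$ and $\tilde{w} = (12, 12, 11.5)$ we have $L = 2$, so no orders are placed in the first two periods and the third order is $-x_1 + \sum_{j=1}^{3} \tilde{w}_j = 10.5$, in agreement with (41).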

We now derive an LP formulation equivalent to the robust inventory control problem (36). First, observe that for each $k = \lfloor\Gamma\rfloor + 1, \ldots, N$, $\max_{S_k \cup \{t_k\}} \bigl\{ \sum_{j \in S_k} \hat{w}_j + (\Gamma - \lfloor\Gamma\rfloor) \hat{w}_{t_k} \bigr\}$ equals the optimal objective value of the following LP problem, whose feasible set is non-empty and bounded:

\[
\begin{aligned}
\text{maximize}\quad & \sum_{j=1}^{k} \hat{w}_j y_{kj} \qquad (42)\\
\text{subject to}\quad & \sum_{j=1}^{k} y_{kj} \le \Gamma,\\
& 0 \le y_{kj} \le 1, \quad j = 1, \ldots, k.
\end{aligned}
\]

The dual of (42) is

\[
\begin{aligned}
\text{minimize}\quad & \Gamma p_k + \sum_{j=1}^{k} q_{kj} \qquad (43)\\
\text{subject to}\quad & p_k + q_{kj} \ge \hat{w}_j, \quad j = 1, \ldots, k,\\
& p_k,\, q_{kj} \ge 0, \quad j = 1, \ldots, k,
\end{aligned}
\]

where $p_k$ is the dual variable for the constraint $\sum_{j=1}^{k} y_{kj} \le \Gamma$ and $q_{kj}$ are the dual variables for the constraints $y_{kj} \le 1$, $j = 1, \ldots, k$. It can also be seen that $\max_{S_N \cup \{t_N\}} \bigl\{ \sum_{j \in S_N} a_j \hat{w}_j + (\Gamma - \lfloor\Gamma\rfloor) a_{t_N} \hat{w}_{t_N} \bigr\}$ is equal to the optimal objective value of the following LP problem, whose feasible set is also non-empty and bounded:

\[
\begin{aligned}
\text{maximize}\quad & \sum_{k=1}^{N} a_k \hat{w}_k v_k \qquad (44)\\
\text{subject to}\quad & \sum_{k=1}^{N} v_k \le \Gamma,\\
& 0 \le v_k \le 1, \quad k = 1, \ldots, N.
\end{aligned}
\]

The dual of (44) is

\[
\begin{aligned}
\text{minimize}\quad & \Gamma r + \sum_{k=1}^{N} s_k \qquad (45)\\
\text{subject to}\quad & r + s_k \ge a_k \hat{w}_k, \quad k = 1, \ldots, N,\\
& r,\, s_k \ge 0, \quad k = 1, \ldots, N,
\end{aligned}
\]

where r and $s_k$ are the dual variables. The next proposition provides the LP formulation of the robust inventory control problem (36). The proof is similar to that of Theorem 1 and is given in Appendix C.
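For LP pairs of the form (42)-(43), both optima admit closed forms by the fractional-knapsack argument: the primal fills the $\lfloor\Gamma\rfloor$ largest deviations and a fraction of the next one, and setting the dual variable $p_k$ to the $(\lfloor\Gamma\rfloor + 1)$-th largest deviation with $q_{kj} = (\hat{w}_j - p_k)^+$ is dual optimal. The sketch below (our own helper names, not from the paper) verifies strong duality numerically on small instances.

```python
import math

def primal_opt(w_hat, Gamma):
    """Optimal value of (42): fill the floor(Gamma) largest deviations,
    plus the fractional budget on the next largest."""
    f = math.floor(Gamma)
    s = sorted(w_hat, reverse=True)
    return sum(s[:f]) + (Gamma - f) * (s[f] if f < len(s) else 0.0)

def dual_opt(w_hat, Gamma):
    """Value of (43) at the dual-feasible point p = (floor(Gamma)+1)-th
    largest deviation, q_j = max(w^_j - p, 0)."""
    f = math.floor(Gamma)
    s = sorted(w_hat, reverse=True)
    p = s[f] if f < len(s) else 0.0
    return Gamma * p + sum(max(w - p, 0.0) for w in w_hat)
```

On $\hat{w} = (4, 2, 3)$ with $\Gamma = 1.5$, both values equal $4 + 0.5 \cdot 3 = 5.5$, as strong duality requires.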

Proposition 1 The robust inventory control problem (36) is equivalent to the LP formulation
\[
\begin{aligned}
z_R(\Gamma) = \text{minimize}\quad & z \qquad (46)\\
\text{subject to}\quad & \sum_{j=1}^{k} u_j \ge -x_1 + \sum_{j=1}^{k} (\bar{w}_j + \hat{w}_j), \quad k = 1, \ldots, \lfloor\Gamma\rfloor,\\
& \sum_{j=1}^{k} u_j \ge -x_1 + \sum_{j=1}^{k} \bar{w}_j + \Gamma p_k + \sum_{j=1}^{k} q_{kj}, \quad k = \lfloor\Gamma\rfloor + 1, \ldots, N,\\
& p_k + q_{kj} \ge \hat{w}_j, \quad k = \lfloor\Gamma\rfloor + 1, \ldots, N, \; j = 1, \ldots, k,\\
& z - \sum_{k=1}^{N} (c + h a_k) u_k \ge N h x_1 - h \sum_{k=1}^{N} a_k \bar{w}_k + h \Bigl( \Gamma r + \sum_{k=1}^{N} s_k \Bigr),\\
& r + s_k \ge a_k \hat{w}_k, \quad k = 1, \ldots, N,\\
& p_k,\, q_{kj} \ge 0, \quad k = \lfloor\Gamma\rfloor + 1, \ldots, N, \; j = 1, \ldots, k,\\
& r,\, s_k \ge 0, \quad k = 1, \ldots, N,\\
& u_k \ge 0, \quad k = 1, \ldots, N.
\end{aligned}
\]

3.3 Bounds on the QoS Constraint Violation Probability

Assume Γ > 0. Let $(u^*, z^*)$ be an optimal solution of the robust inventory control problem (36), which is obtained by solving (46). We examine the probability that $u^*$ violates the QoS constraints $\sum_{j=1}^{k} u_j \ge -x_1 + \sum_{j=1}^{k} w_j$, $k = 1, \ldots, N$.

Clearly, none of the constraints for $k = 1, \ldots, \lfloor\Gamma\rfloor$ are violated by $u^*$ because Γ provides full protection for those constraints. For $k = \lfloor\Gamma\rfloor + 1, \ldots, N$, the following proposition provides upper bounds on the violation probability of the kth period QoS constraint. The proofs of the three parts are similar to those of Theorem 2(a), Corollary 2, and Theorem 3(b), respectively; we omit the details in the interest of space. Let $z_k = (w_k - \bar{w}_k)/\hat{w}_k$, $k = 1, \ldots, N$, and let $\Lambda_{z_k}(\theta) \triangleq \log \mathbb{E}[e^{\theta z_k}]$ denote the logarithmic moment generating function of $z_k$.

Proposition 2 (a) Let Assumption B be in effect. Then for $k = \lfloor\Gamma\rfloor + 1, \ldots, N$
\[
\mathbb{P} \Bigl[ \sum_{j=1}^{k} u^*_j < -x_1 + \sum_{j=1}^{k} w_j \Bigr] \le \exp \Bigl( - \sup_{\theta \ge 0} \Bigl\{ \theta \Gamma - \sum_{j=1}^{k} \Lambda_{z_j}(\theta) \Bigr\} \Bigr). \qquad (47)
\]

(b) Let $C_k(u^*) = \sum_{j=1}^{k} u^*_j + x_1 - \sum_{j=1}^{k} \bar{w}_j$. Then for $k = \lfloor\Gamma\rfloor + 1, \ldots, N$
\[
\mathbb{P} \Bigl[ \sum_{j=1}^{k} u^*_j < -x_1 + \sum_{j=1}^{k} w_j \Bigr] \le \exp \Bigl( - \sup_{\theta \ge 0} \Bigl\{ \theta C_k(u^*) - \sum_{j=1}^{k} \Lambda_{z_j}(\theta \hat{w}_j) \Bigr\} \Bigr). \qquad (48)
\]

(c) Under Assumption B, Bound (48) ≤ Bound (47).
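For a concrete illustration, when each $z_j$ is uniform on $[-1, 1]$ the log-MGF is $\Lambda_z(\theta) = \log(\sinh\theta/\theta)$, and Bound (47) can be evaluated by a one-dimensional search over θ. The sketch below (our own helper names) approximates the supremum on a simple grid; a convex-optimization routine would be more precise but is not needed for illustration.

```python
import math

def mgf_log_uniform(theta):
    """Lambda_z(theta) = log E[e^{theta z}] for z uniform on [-1, 1]:
    log(sinh(theta)/theta), with a series expansion near zero."""
    if abs(theta) < 1e-8:
        return theta * theta / 6.0
    return math.log(math.sinh(theta) / theta)

def bound_47(Gamma, k, theta_max=50.0, steps=20000):
    """exp(-sup_{theta>=0} {theta*Gamma - sum_j Lambda_{z_j}(theta)})
    with identical uniform z_j, the supremum taken over a grid."""
    best = 0.0
    for i in range(steps + 1):
        th = theta_max * i / steps
        best = max(best, th * Gamma - k * mgf_log_uniform(th))
    return math.exp(-best)
```

With Γ = 7.68 and k = N = 30, the bound evaluates to roughly 0.05, consistent with the value of $\Gamma_D$ reported for the uniform distribution in Section 3.4.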

Under Assumption B, Bound (10) developed in [BS04] can be tailored to the inventory control problem as follows: for $k = \lfloor\Gamma\rfloor + 1, \ldots, N$

\[
\mathbb{P} \Bigl[ \sum_{j=1}^{k} u^*_j < -x_1 + \sum_{j=1}^{k} w_j \Bigr] \le \exp \Bigl( - \frac{\Gamma^2}{2k} \Bigr). \qquad (49)
\]
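Bound (49) also yields, in closed form, the smallest budget meeting a target ε, since $\exp(-\Gamma^2/(2k)) \le \epsilon$ exactly when $\Gamma \ge \sqrt{2k \ln(1/\epsilon)}$. The helper names below are ours.

```python
import math

def bound_49(Gamma, k):
    """Distribution-independent bound (49): exp(-Gamma^2 / (2k))."""
    return math.exp(-Gamma * Gamma / (2.0 * k))

def gamma_B(eps, k):
    """Smallest Gamma with bound (49) <= eps."""
    return math.sqrt(2.0 * k * math.log(1.0 / eps))
```

For N = 30 and ε = 0.05 this gives Γ ≈ 13.4, in line with the value of $\Gamma_B$ used in Section 3.4.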

By applying Theorem 2(b), one can show that Bound (47) < Bound (49). Bound (47) depends on the probability distributions of the $w_k$'s, but not on $u^*$. Bound (48), on the other hand, relies on both the probability distributions and $u^*$. Moreover, Bound (48) does not require the symmetry assumption, Assumption B. The following two lemmas establish some properties of these bounds.

Lemma 4 Given Γ , Bound (47) is a nondecreasing function of k.

Proof: Since $\mathbb{E}[z_j] = 0$, Jensen's inequality yields $\mathbb{E}[e^{\theta z_j}] \ge \exp(\mathbb{E}[\theta z_j]) = 1$, from which we have $\Lambda_{z_j}(\theta) \ge 0$. Let $k' > k$. Then
\[
\theta \Gamma - \sum_{j=1}^{k'} \Lambda_{z_j}(\theta) = \theta \Gamma - \sum_{j=1}^{k} \Lambda_{z_j}(\theta) - \sum_{j=k+1}^{k'} \Lambda_{z_j}(\theta) \le \theta \Gamma - \sum_{j=1}^{k} \Lambda_{z_j}(\theta).
\]

Hence
\[
\sup_{\theta \ge 0} \Bigl\{ \theta \Gamma - \sum_{j=1}^{k'} \Lambda_{z_j}(\theta) \Bigr\} \le \sup_{\theta \ge 0} \Bigl\{ \theta \Gamma - \sum_{j=1}^{k} \Lambda_{z_j}(\theta) \Bigr\}
\]
and the lemma follows. ⊓⊔

Lemma 5 Given k, Bounds (47) and (48) are nonincreasing functions of Γ .


Proof: It is immediate that Bound (47) is a nonincreasing function of Γ. To show the monotonicity of Bound (48), consider Γ and Γ′, where $\Gamma < \Gamma' < k$. Let $(u^*, z^*)$ and $(u', z')$ be optimal solutions of the robust inventory control problem (36) associated with Γ and Γ′, respectively. Let $S^*_k \cup \{t^*_k\} = \arg\max_{S_k \cup \{t_k\}} \bigl\{ \sum_{j \in S_k} \hat{w}_j + (\Gamma - \lfloor\Gamma\rfloor) \hat{w}_{t_k} \bigr\}$, and define $S'_k$, $t'_k$ similarly. We need to consider three cases.

First, suppose $x_1 \le \sum_{j=1}^{k} \bar{w}_j + \sum_{j \in S^*_k} \hat{w}_j + (\Gamma - \lfloor\Gamma\rfloor) \hat{w}_{t^*_k}$. Then at optimality the kth period QoS constraint of (36) will be binding for both Γ and Γ′, i.e.,
\[
\sum_{j=1}^{k} u^*_j = -x_1 + \sum_{j=1}^{k} \bar{w}_j + \sum_{j \in S^*_k} \hat{w}_j + (\Gamma - \lfloor\Gamma\rfloor) \hat{w}_{t^*_k},
\]
\[
\sum_{j=1}^{k} u'_j = -x_1 + \sum_{j=1}^{k} \bar{w}_j + \sum_{j \in S'_k} \hat{w}_j + (\Gamma' - \lfloor\Gamma'\rfloor) \hat{w}_{t'_k}.
\]
Since $\sum_{j \in S^*_k} \hat{w}_j + (\Gamma - \lfloor\Gamma\rfloor) \hat{w}_{t^*_k} < \sum_{j \in S'_k} \hat{w}_j + (\Gamma' - \lfloor\Gamma'\rfloor) \hat{w}_{t'_k}$, we have $C_k(u^*) < C_k(u')$.

Second, suppose $\sum_{j=1}^{k} \bar{w}_j + \sum_{j \in S^*_k} \hat{w}_j + (\Gamma - \lfloor\Gamma\rfloor) \hat{w}_{t^*_k} < x_1 \le \sum_{j=1}^{k} \bar{w}_j + \sum_{j \in S'_k} \hat{w}_j + (\Gamma' - \lfloor\Gamma'\rfloor) \hat{w}_{t'_k}$. Then the kth period QoS constraint will be binding at optimality for Γ′. However, the constraint will not be binding at optimality for Γ, resulting in $\sum_{j=1}^{k} u^*_j = 0$. This implies $C_k(u^*) \le C_k(u')$.

Third, suppose $x_1 > \sum_{j=1}^{k} \bar{w}_j + \sum_{j \in S'_k} \hat{w}_j + (\Gamma' - \lfloor\Gamma'\rfloor) \hat{w}_{t'_k}$. In this case, the kth period QoS constraint will not be binding at optimality for Γ or for Γ′. Hence $\sum_{j=1}^{k} u^*_j = \sum_{j=1}^{k} u'_j = 0$, and we have $C_k(u^*) = C_k(u')$.

Therefore $C_k(u^*)$ is not greater than $C_k(u')$, from which the lemma follows. ⊓⊔

We next consider the inverse problem (cf. Section 2.4) for the inventory control problem: find a Γ such that
\[
\begin{aligned}
\text{maximize}\quad & z_R(\Gamma) \qquad (50)\\
\text{subject to}\quad & \text{Bound (48)} \le \epsilon_k, \quad k = 1, \ldots, N,
\end{aligned}
\]
where $\epsilon_k$ is the maximum allowable violation probability for the kth period QoS constraint. (Note that we are using the solution-dependent bound (48) in the inverse problem. Refer to the related discussion in Section 2.4.) An optimal (i.e., smallest possible) Γ, denoted by $\Gamma_S$, can be found as follows. Assume that the probability distributions of the $w_k$'s are symmetric. (We need this assumption in order to use Bound (47) during the process.) First, for each $k = 1, \ldots, N$, we compute the minimum Γ, denoted by $\Gamma_k$, for which Bound (47) $\le \epsilon_k$. Note that $\Gamma_k$ can be obtained through binary search over the interval $[0, k]$, because Bound (47) is monotone in Γ. Let $\Gamma_D = \max_k \Gamma_k$. Clearly, $\Gamma_S$ lies in the interval $[0, \Gamma_D]$ due to Proposition 2(c). Suppose we choose some $\Gamma' \in [0, \Gamma_D]$. We solve (46) with $\Gamma'$ and then compute Bound (48) for each k, denoted by $p_k$. If $p_k \le \epsilon_k$ for all k, $\Gamma_S$ lies in $[0, \Gamma']$ because Bound (48) is also monotone in Γ. Otherwise, $\Gamma_S$ belongs to $[\Gamma', \Gamma_D]$. This implies that $\Gamma_S$ can be found by binary search.
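The binary search just described can be sketched as follows. Here `bounds_at` (our name, not from the paper) stands in for the expensive step of solving (46) at the trial budget and evaluating Bound (48) for every period; the monotonicity of the bounds in Γ (Lemma 5) is what makes the bisection valid.

```python
def find_gamma_s(gamma_d, bounds_at, eps, tol=1e-4):
    """Smallest Gamma in [0, gamma_d] whose bounds (48) all meet the targets eps_k.

    bounds_at(g) must return the list of bounds (48) for budget g, each
    nonincreasing in g, which makes binary search valid."""
    lo, hi = 0.0, gamma_d
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if all(p <= e for p, e in zip(bounds_at(mid), eps)):
            hi = mid          # feasible: Gamma_S lies in [lo, mid]
        else:
            lo = mid          # infeasible: Gamma_S lies in [mid, hi]
    return hi
```

Any nonincreasing surrogate for `bounds_at` exercises the same logic, which is convenient for testing the search in isolation.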

The inverse problem is formulated at the strategic level in the sense that once the optimal $\Gamma_S$ is found, the robust ordering quantities $u^*_k$, $k = 1, \ldots, N$, are determined by (46) before any of the demands are observed. The framework, however, can be easily extended to handle operational-level inventory control decisions; namely, the ordering decision $u^*_k$ can be made after the previous demands $w_1, \ldots, w_{k-1}$ have been observed. To that end, one can use the inverse problem in a receding horizon manner: first solve the inverse problem and order $u^*_1$ only. Observe $w_1$ and compute $x_2$. With $x_2$ as the initial inventory, one now has an inventory control problem of $N - 1$ periods. Solve the inverse problem for this $N - 1$ period inventory control problem and order $u^*_2$ only. Observe $w_2$, compute $x_3$, and repeat the process. The ordering decisions of the operational inventory control problem are certainly less conservative than those of the strategic counterpart, since the former problem allows the decision maker to use more information. Other than the receding horizon approach, an alternative is to consider adaptive policies with a simple linear structure; see Ben-Tal et al. [BTGGN04], See and Sim [SS10], Bertsimas et al. [BIP11], and Bertsimas and Goyal [BG12].
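The receding-horizon loop described above can be sketched generically. In this sketch, `horizon_solver` is a stand-in (our name) for re-solving the inverse problem on the remaining horizon and returning its ordering plan, of which only the first order is committed.

```python
def receding_horizon(x1, horizon_solver, demands):
    """Operational-level control: re-solve on the remaining horizon each
    period, commit the first order, then observe demand and update stock."""
    x = x1
    orders = []
    for k, w in enumerate(demands):
        u = horizon_solver(x, k)[0]   # first order of the remaining-horizon plan
        orders.append(u)
        x = x + u - w                 # observe demand, update inventory
    return orders, x
```

Plugging in a toy order-up-to rule for `horizon_solver` (e.g., order up to a constant level) shows the loop's mechanics without solving any LPs.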

3.4 Numerical Results

We set N = 30, c = 2, h = 1, and $x_1 = 0$. Let $\bar{w} = (\bar{w}_1, \ldots, \bar{w}_N)$ and $\hat{w} = (\hat{w}_1, \ldots, \hat{w}_N)$. For each k, $\bar{w}_k$ is randomly drawn from the range [30, 300]; for each k, $\hat{w}_k$ is randomly chosen from the range $[1, \bar{w}_k - 1]$. In this manner, we generate 20 instances of $(\bar{w}, \hat{w})$. We consider three symmetric probability distributions for $w_k$: triangle, uniform, and reverse-triangle. (For the triangle and reverse-triangle distributions, see Section 2.5.) For simplicity, we assume that all $w_k$ have the same distribution. We further assume that the kth period QoS constraint violation probability is required to be no more than 0.05 for all k.

Let $\Gamma_D$ and $\Gamma_B$ denote the minimum values of Γ for which Bound (47) and Bound (49), respectively, are less than or equal to 0.05 for all k. Since both bounds are nondecreasing functions of k for a given Γ, it suffices to consider the last period to determine $\Gamma_D$ and $\Gamma_B$. $\Gamma_D$ depends on the probability distributions of the $w_k$'s; computations (binary search) yield that $\Gamma_D$ is equal to 5.45, 7.68, and 9.38 for the triangle, uniform, and reverse-triangle distributions, respectively. $\Gamma_B$ equals 13.42 regardless of the probability distribution. Let $\Gamma_S$ be the optimal Γ of the inverse problem (50).

For each instance of $(\bar{w}, \hat{w})$ and probability distribution, we obtain the optimal costs of the robust inventory control problem (36), $z_R(\Gamma_B)$ and $z_R(\Gamma_D)$. We also determine $\Gamma_S$ and compute $z_R(\Gamma_S)$. In Table 4, we list the average $z_R(\Gamma_B)$, $z_R(\Gamma_D)$, $\Gamma_S$, and $z_R(\Gamma_S)$ over the 20 instances of $(\bar{w}, \hat{w})$. It can be seen that the stronger bounds we have derived using distributional information lead to significant cost savings (up to 54% in these examples) over the distribution-independent robust approach of [BS04]. For comparison purposes, we also computed the optimal cost and QoS constraint violation probability if one solves the nominal (rather than the robust) problem. It turns out that the corresponding violation probability is quite high, namely, about 0.5 with any of the distributions. Of course, the optimal cost in that case is substantially lower than the robust cost.

To assess numerically the tightness of our best bound, we computed the value of Γ, say $\Gamma^*$, that is required for the actual QoS constraint violation probability to remain below 0.05. To that end, we used simulation to estimate the violation probability. We found that the cost $z_R(\Gamma^*)$ corresponding to $\Gamma^*$ is no more than 16% lower than $z_R(\Gamma_S)$, suggesting that our bound is reasonably tight. We also note that we performed computations for other values of the problem parameters as well (e.g., for a holding cost of h = 0.5) and the qualitative conclusions remain the same.

Table 4 Robust inventory control cost $z_R(\Gamma)$ and $\Gamma_S$.

Distribution        $z_R(\Gamma_B)$   $z_R(\Gamma_D)$   $\Gamma_S$   $z_R(\Gamma_S)$   $\Delta_D$   $\Delta_S$
Triangle            75847.83          51867.60          2.55         35192.38          -31.6%       -53.6%
Uniform             75847.83          60975.82          3.81         43321.65          -19.6%       -42.9%
Reverse-triangle    75847.83          66466.91          4.85         49024.66          -12.4%       -35.4%

Here $\Delta_D = [z_R(\Gamma_D) - z_R(\Gamma_B)]/z_R(\Gamma_B) \times 100\%$ and $\Delta_S = [z_R(\Gamma_S) - z_R(\Gamma_B)]/z_R(\Gamma_B) \times 100\%$.

Let $u^*_B$, $u^*_D$, and $u^*_S$ be optimal ordering solutions of the robust inventory control problem associated with $\Gamma_B$, $\Gamma_D$, and $\Gamma_S$, respectively. For each instance of $(\bar{w}, \hat{w})$ and probability distribution, we simulate the inventory system with $u^*_B$, $u^*_D$, and $u^*_S$. Each simulation consists of 100,000 runs, and the cost is averaged over the 100,000 runs. In Table 5, we report the costs further averaged over the 20 instances of $(\bar{w}, \hat{w})$.

Table 5 Simulated inventory control costs.

Distribution        $c(u^*_B)$   $c(u^*_D)$   $c(u^*_S)$   $\Delta_D$   $\Delta_S$
Triangle            45638.19     33058.74     23736.86     -27.6%       -48.0%
Uniform             45871.10     38228.70     28570.15     -16.7%       -37.7%
Reverse-triangle    46106.14     41360.53     31971.06     -10.3%       -30.7%

Here $\Delta_D = [c(u^*_D) - c(u^*_B)]/c(u^*_B) \times 100\%$ and $\Delta_S = [c(u^*_S) - c(u^*_B)]/c(u^*_B) \times 100\%$.

During the 100,000 simulation runs, we also count, for each period, the number of times that a shortage occurs. Dividing the shortage frequency of period k by 100,000, we obtain the empirical stockout probability for period k, i.e., the empirical kth period QoS constraint violation probability. Let $p_k(u^*)$ denote the empirical QoS constraint violation probability for period k. In the interest of space, we report in Table 6 only $p_N(u^*)$. We note that we computationally confirmed that the QoS constraint violation probabilities are smaller for smaller values of k. This is due to the fact that the effects of the uncertain demand are compounded as k increases. In the table, "avg" denotes the average value of $p_N(u^*)$ over the 20 instances of $(\bar{w}, \hat{w})$; "min" and "max" are interpreted accordingly. The insight from Table 6 is that even $u^*_S$ is an overly conservative ordering solution, let alone $u^*_B$ and $u^*_D$; the empirical QoS constraint violation probabilities are well below 0.05. This raises the question of how to further improve Bound (48), but in the meantime it demonstrates the necessity of using Bound (48) rather than Bound (47) or Bound (49) whenever possible.
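An empirical violation probability of this kind can be estimated by straightforward Monte Carlo. The sketch below (our own helper name) checks only the period-N constraint and assumes, for simplicity, uniformly distributed demands on $[\bar{w}_k - \hat{w}_k, \bar{w}_k + \hat{w}_k]$.

```python
import random

def empirical_violation_N(u, w_bar, w_hat, x1, runs=100_000, seed=0):
    """Fraction of runs in which the period-N QoS constraint
    sum_j u_j >= -x1 + sum_j w_j is violated, with each demand w_k
    drawn uniformly from [w_bar_k - w_hat_k, w_bar_k + w_hat_k]."""
    rng = random.Random(seed)
    total_u = sum(u)
    hits = 0
    for _ in range(runs):
        total_w = sum(rng.uniform(wb - wh, wb + wh)
                      for wb, wh in zip(w_bar, w_hat))
        if total_u < -x1 + total_w:
            hits += 1
    return hits / runs
```

Ordering exactly the nominal demand each period leaves a violation probability of roughly 0.5 by symmetry, matching the observation made above for the nominal solution.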

4 Conclusion

In this paper we considered an LP problem in which each element of the constraint matrix is subject to uncertainty. To obtain a less conservative solution with a certain probabilistic guarantee of feasibility, we constructed the robust formulation,


Table 6 Empirical QoS constraint violation probabilities.

Distribution        $p_N(u^*_B)$: avg / min / max   $p_N(u^*_D)$: avg / min / max   $p_N(u^*_S)$: avg / min / max
Triangle            0 / 0 / 0                       .0000015 / 0 / .00002           .00712 / .00454 / .00787
Uniform             0 / 0 / 0                       .000007 / 0 / .00004            .00658 / .00456 / .00730
Reverse-triangle    0 / 0 / 0                       .0000125 / 0 / .00004           .00633 / .00536 / .00723

which can be recast as an LP formulation. Using the probability distributions of the uncertain elements, we showed that we can improve the quality of a solution without compromising its robustness (quantified as the constraint violation probability of the solution). To that end, we derived a new bound on the constraint violation probability. This bound is distribution-dependent, but is independent of the solution. We showed that the bound is stronger than a distribution-independent bound given in [BS04]. We also derived another bound that utilizes the solution as well as the probability distributions. We proved that this solution-dependent bound is stronger (and in our numerical tests significantly so) than all other bounds we presented. The case where only limited distributional information on the uncertain elements is available was also considered. We derived two bounds that exploit the first moments, and the first and second moments, of the uncertain elements, respectively, together with the solution. We showed that these bounds are also stronger than the bound given in [BS04].

As an application, we considered a discrete-time stochastic inventory control problem with QoS constraints. We constructed a robust formulation and showed that its optimal ordering quantities form a base-stock policy. We derived two bounds on the probability that the QoS constraints are violated. We explained how these two bounds can be used together to obtain a better solution of the inventory control problem via the so-called inverse problem. In some of the examples we provided, cost savings amount to 36%-54%.

In closing, distributional information can be very valuable in a robust optimization context. Thus, when it is not readily available, one should consider estimating it, because the benefits can significantly outweigh the estimation costs.

Appendix

A Proof of Lemma 2

Proof: Consider $\Gamma_1$ and $\Gamma_2$, where $\Gamma_1 \le \Gamma_2$. Since for any x
\[
\max_{a_i \in R_i(\Gamma_{1i})} \bigl\{ a_i' x \bigr\} \le \max_{a_i \in R_i(\Gamma_{2i})} \bigl\{ a_i' x \bigr\}, \quad i = 1, \ldots, m,
\]
the feasible region of (5) with $\Gamma_1$ is no smaller than that of (5) with $\Gamma_2$. Therefore, $z_R(\Gamma)$ is nonincreasing as Γ increases componentwise. If $\Gamma_i = 0$ for all i, $R_i(\Gamma_i) = \{\bar{a}_i\}$ and consequently (5) reduces to (2). On the other hand, if $\Gamma_i = |J_i|$ for all i, the uncertainty budget constraint becomes redundant, making $R_i(\Gamma_i) = U_i$. Hence, it follows that $z_F \le z_R(\Gamma) \le z_N$. ⊓⊔


B Reshaping the LP formulation (8) into another LP formulation (9)

Notice that for any i and $j \in J_i$, at most one of the constraints (6a) and (6b) is binding at optimality. Therefore, the corresponding dual variables satisfy $\lambda_{ij} \mu_{ij} = 0$. Since $\lambda_{ij}, \mu_{ij} \ge 0$, this implies that $\lambda_{ij} + \mu_{ij} = |\lambda_{ij} - \mu_{ij}|$. A similar observation regarding the constraints (6d) and (6e) leads to $\nu_{ij} \tau_{ij} = 0$ and $\nu_{ij} + \tau_{ij} = |\nu_{ij} - \tau_{ij}|$. It can also be noted that $\lambda_{ij} - \mu_{ij}$ and $\nu_{ij} - \tau_{ij}$ have the same sign: if $\lambda_{ij} > 0$, then $\nu_{ij} > 0$ (and if $\mu_{ij} > 0$, then $\tau_{ij} > 0$). Using these observations, we obtain from the constraints (8a)
\[
|\lambda_{ij} - \mu_{ij} + \nu_{ij} - \tau_{ij}| = |x_j| \;\Leftrightarrow\; |\lambda_{ij} - \mu_{ij}| + |\nu_{ij} - \tau_{ij}| = |x_j| \qquad (51)
\]
\[
\Leftrightarrow\; \lambda_{ij} + \mu_{ij} + \nu_{ij} + \tau_{ij} = |x_j|, \qquad (52)
\]
where the equivalence in (51) follows from the fact that $\lambda_{ij} - \mu_{ij}$ and $\nu_{ij} - \tau_{ij}$ have the same sign. Using (52), the constraints (8b) can be rewritten as
\[
z_i - \hat{a}_{ij}(\nu_{ij} + \tau_{ij}) \ge 0 \;\Leftrightarrow\; z_i + \hat{a}_{ij}(\lambda_{ij} + \mu_{ij}) \ge \hat{a}_{ij} |x_j|.
\]
By defining $p_{ij} \triangleq \hat{a}_{ij}(\lambda_{ij} + \mu_{ij})$ and substituting $y_j$ for $|x_j|$, with the addition of the constraints $-y_j \le x_j \le y_j$ and $y_j \ge 0$, the LP formulation (8) becomes the LP formulation (9).

C Proof of Proposition 1

Proof: Let $(u, z, p_k, q_{kj}, r, s_k)$ be an optimal solution of (46). Let $(u^*, z^*)$ be an optimal solution of (36). We will show that $(u, z)$ is feasible to (36) with $z = z^*$. For $k = \lfloor\Gamma\rfloor + 1, \ldots, N$, we have
\[
\begin{aligned}
\sum_{j=1}^{k} u_j &\ge -x_1 + \sum_{j=1}^{k} \bar{w}_j + \Gamma p_k + \sum_{j=1}^{k} q_{kj}\\
&\ge -x_1 + \sum_{j=1}^{k} \bar{w}_j + \max_{S_k \cup \{t_k\}} \Bigl\{ \sum_{j \in S_k} \hat{w}_j + (\Gamma - \lfloor\Gamma\rfloor) \hat{w}_{t_k} \Bigr\},
\end{aligned}
\]
where the second inequality follows from $(p_k, q_{kj})$ being feasible to (43) and the weak duality between (43) and (42). We also have
\[
\begin{aligned}
z - \sum_{k=1}^{N} (c + h a_k) u_k &\ge N h x_1 - h \sum_{k=1}^{N} a_k \bar{w}_k + h \Bigl( \Gamma r + \sum_{k=1}^{N} s_k \Bigr)\\
&\ge N h x_1 - h \sum_{k=1}^{N} a_k \bar{w}_k + h \max_{S_N \cup \{t_N\}} \Bigl\{ \sum_{j \in S_N} a_j \hat{w}_j + (\Gamma - \lfloor\Gamma\rfloor) a_{t_N} \hat{w}_{t_N} \Bigr\},
\end{aligned}
\]
where the second inequality follows from $(r, s_k)$ being feasible to (45) and the weak duality between (45) and (44). These inequalities show that $(u, z)$ is a feasible solution of (36). Hence $z \ge z^*$.

For $k = \lfloor\Gamma\rfloor + 1, \ldots, N$, let $(p^*_k, q^*_{kj})$ be an optimal solution of (43). Then by the strong duality between (43) and (42), we have $\Gamma p^*_k + \sum_{j=1}^{k} q^*_{kj} = \max_{S_k \cup \{t_k\}} \bigl\{ \sum_{j \in S_k} \hat{w}_j + (\Gamma - \lfloor\Gamma\rfloor) \hat{w}_{t_k} \bigr\}$. Let $(r^*, s^*_k)$ be an optimal solution of (45). Again, the strong duality between (45) and (44) implies $\Gamma r^* + \sum_{k=1}^{N} s^*_k = \max_{S_N \cup \{t_N\}} \bigl\{ \sum_{j \in S_N} a_j \hat{w}_j + (\Gamma - \lfloor\Gamma\rfloor) a_{t_N} \hat{w}_{t_N} \bigr\}$. From these equalities, it can be seen that $(u^*, z^*, p^*_k, q^*_{kj}, r^*, s^*_k)$ is a feasible solution of (46), from which we have $z^* \ge z$. ⊓⊔


References

[BBC11] D. Bertsimas, D. Brown, and C. Caramanis, Theory and applications of robust optimization, SIAM Review 53 (2011), no. 3, 464-501.
[Ben62] G. Bennett, Probability inequalities for the sum of independent random variables, Journal of the American Statistical Association 57 (1962), 33-45.
[BG12] D. Bertsimas and V. Goyal, On the power and limitations of affine policies in two-stage adaptive optimization, Mathematical Programming 134 (2012), no. 2, 491-531.
[BIP11] D. Bertsimas, D. A. Iancu, and P. A. Parrilo, A hierarchy of near-optimal policies for multistage adaptive optimization, IEEE Trans. Automat. Contr. 56 (2011), no. 12, 2809-2824.
[BL97] J. R. Birge and F. Louveaux, Introduction to stochastic programming, Springer, New York, 1997.
[BO08] D. Bienstock and N. Ozbay, Computing robust basestock levels, Discrete Optimization 5 (2008), no. 2, 389-414.
[BP01] D. Bertsimas and I. Ch. Paschalidis, Probabilistic service level guarantees in make-to-stock manufacturing systems, Operations Research 49 (2001), no. 1, 119-133.
[BPS04] D. Bertsimas, D. Pachamanova, and M. Sim, Robust linear optimization under general norms, Operations Research Letters 32 (2004), no. 6, 510-516.
[BS04] D. Bertsimas and M. Sim, The price of robustness, Operations Research 52 (2004), no. 1, 35-53.
[BT06] D. Bertsimas and A. Thiele, A robust optimization approach to inventory theory, Operations Research 54 (2006), no. 1, 150-168.
[BTGGN04] A. Ben-Tal, A. Goryashko, E. Guslitzer, and A. Nemirovski, Adjustable robust solutions of uncertain linear programs, Mathematical Programming 99 (2004), no. 2, 351-376.
[BTGN09] A. Ben-Tal, L. El Ghaoui, and A. Nemirovski, Robust optimization, Princeton University Press, 2009.
[BTN99] A. Ben-Tal and A. Nemirovski, Robust solutions of uncertain linear programs, Operations Research Letters 25 (1999), no. 1, 1-13.
[CG06] G. Calafiore and L. El Ghaoui, Distributionally robust chance-constrained linear programs with applications, Journal of Optimization Theory and Applications (2006).
[CS60] A. J. Clark and H. E. Scarf, Optimal policies for a multi-echelon inventory problem, Management Science 6 (1960), 475-490.
[CSS07] X. Chen, M. Sim, and P. Sun, A robust optimization perspective on stochastic programming, Operations Research 55 (2007), no. 6, 1058-1071.
[CSST10] W. Chen, M. Sim, J. Sun, and C.-P. Teo, From CVaR to uncertainty set: Implications in joint chance-constrained optimization, Operations Research 58 (2010), no. 2, 470-485.
[DDB95] M. A. Dahleh and I. J. Diaz-Bobillo, Control of uncertain systems: A linear programming approach, Prentice Hall, 1995.
[DVP06] C. Del Vecchio and I. Ch. Paschalidis, Enforcing service-level constraints in supply chains with assembly operations, IEEE Trans. Automat. Contr. 51 (2006), no. 12, 2000-2005.
[DZ98] A. Dembo and O. Zeitouni, Large deviations techniques and applications, 2nd ed., Springer-Verlag, NY, 1998.
[KW94] P. Kall and S. W. Wallace, Stochastic programming, John Wiley & Sons, Chichester, UK, 1994.
[NS06] A. Nemirovski and A. Shapiro, Convex approximations of chance constrained programs, SIAM Journal on Optimization 17 (2006), no. 4, 969-996.
[PL03] I. Ch. Paschalidis and Y. Liu, Large deviations-based asymptotics for inventory control in supply chains, Operations Research 51 (2003), no. 3, 437-460.
[PLCP04] I. Ch. Paschalidis, Y. Liu, C. G. Cassandras, and C. Panayiotou, Inventory control for supply chains with service level constraints: A synergy between large deviations and perturbation analysis, Annals of Operations Research 126 (2004), 231-258 (Special Volume on Stochastic Models of Production-Inventory Systems).
[Soy73] A. L. Soyster, Convex programming with set-inclusive constraints and applications to inexact linear programming, Operations Research 21 (1973), 1154-1157.
[SS10] C.-T. See and M. Sim, Robust approximation to multiperiod inventory management, Operations Research 58 (2010), no. 3, 583-594.
[ZKR13] S. Zymler, D. Kuhn, and B. Rustem, Distributionally robust joint chance constraints with second-order moment information, Mathematical Programming 137 (2013), no. 1-2, 167-198.