The overall theme of this thesis is optimization under data uncertainty with a robust optimization approach. Robust optimization, as introduced in [17] and [65], assumes that the uncertainty is non-stochastic in nature or that the probability distribution of the uncertain data is unknown. It models the uncertain data as belonging to a so-called uncertainty set. In robust optimization, a feasible solution is one that remains feasible for all possible realizations of the data, and the objective is defined by the worst-case performance.
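As a compact sketch of this definition (the notation here is illustrative and may differ from the symbols used later in the thesis), the robust counterpart of an uncertain linear program with row-wise uncertainty sets can be written as:

```latex
\min_{x}\; \max_{c \in \mathcal{U}_0}\; c^{\top}x
\qquad \text{s.t.} \qquad
a_i^{\top}x \le b_i \quad \forall\, a_i \in \mathcal{U}_i,\ i = 1,\dots,m,
```

where each $\mathcal{U}_i$ is the uncertainty set of the corresponding data: feasibility is enforced for every realization, and the objective is evaluated at its worst case.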
Robust optimization has emerged as a successful discipline because of the general tractability of its formulations, especially when compared to stochastic optimization, another approach to data uncertainty that assumes a stochastic nature of uncertainty and known or partially known probability distributions. On the other hand, the pessimistic worst-case approach of robust optimization yields solutions that can be too conservative in practice: they perform best if the worst-case scenario materializes, but perform poorly otherwise. This conservatism has been characterized by the expression “price of robustness” in the seminal work [31], in the sense that there is a trade-off between feasibility and optimality.
Uncertainty sets
The modeling of uncertainty sets as a way to reduce conservatism while maintaining tractability was the object of the research that followed. In this subsection we consider uncertainty restricted to the coefficients of the matrix A. In [18] and [17] the authors introduced row-wise ellipsoidal uncertainty sets that turned linear programming problems into second-order cone problems and reduced the conservatism of the approach in [114]. Besides the tractability of its robust counterpart, they introduced ellipsoidal uncertainty sets for the following reasons.
• Ellipsoidal uncertainty sets form a relatively wide family, including polytopes, which can be modeled as intersections of degenerate ellipsoids.
• An ellipsoid is given parametrically by data of moderate size; hence it is convenient to represent “ellipsoidal uncertainty” as input.
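To illustrate the second-order cone reformulation mentioned above (a standard sketch; the symbols $\bar a$, $P$ and $\Omega$ are illustrative), consider a single linear constraint whose coefficient vector ranges over an ellipsoid:

```latex
a \in \mathcal{E} = \{\, \bar a + P u \;:\; \|u\|_2 \le \Omega \,\}
\quad\Longrightarrow\quad
\big( a^{\top}x \le b \ \ \forall\, a \in \mathcal{E} \big)
\;\Longleftrightarrow\;
\bar a^{\top}x + \Omega\,\|P^{\top}x\|_2 \le b .
```

The right-hand condition is a second-order cone constraint, which is why the robust counterpart of a linear program under ellipsoidal uncertainty becomes a second-order cone problem.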
In fact, since worst-case chance constraints as described in (2.5) can be nonconvex, many authors develop safe convex approximations of these constraints. A safe convex approximation S is a system of convex constraints on x and additional variables v such that the x component of every feasible solution (x, v) of S is feasible for the chance constraint. By replacing the chance constraints of a given chance-constrained optimization problem with their safe convex approximations, we obtain a convex approximation problem in x and the additional variables. The goal is to arrive at safe convex approximations that are computationally tractable.
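Since (2.5) is not reproduced here, the following only restates the defining property of a safe convex approximation in generic, illustrative notation. For a chance constraint requiring that a random linear inequality in $\xi$ hold with probability at least $1-\varepsilon$, a system $S$ of convex constraints in $(x,v)$ is safe when

```latex
(x, v) \in S
\;\Longrightarrow\;
\mathbb{P}_{\xi}\big\{\, a(\xi)^{\top}x \le b \,\big\} \ge 1 - \varepsilon ,
```

i.e., the feasible region of $S$, projected onto $x$, is an inner (conservative) approximation of the feasible region of the chance constraint.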
To that end, the authors in [31] investigated the special case where the uncertainty set is a polyhedron. Each uncertain element of the constraint matrix A is modeled as an independent bounded random variable. They assumed that the probability distributions of the uncertain elements are unknown except for being symmetric. They used a set of parameters that “restrict” the variability of the uncertain elements. This approach was a breakthrough because it preserves the complexity class of the problem: the robust counterpart of a linear problem is linear. The difference between the objective values of the nominal formulation and their robust formulation was termed the price of robustness. To provide a probabilistic guarantee of feasibility for an optimal solution of their robust formulation, they established upper bounds on the constraint violation probability of the solution.
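As an illustration of this budgeted approach (standard notation assumed here: nominal values $\bar a_{ij}$, maximum deviations $\hat a_{ij}$, index set $J_i$ of uncertain coefficients in row $i$, and an integer budget $\Gamma_i$), row $i$ is protected against at most $\Gamma_i$ coefficients simultaneously reaching their worst-case values:

```latex
\sum_{j} \bar a_{ij}\, x_j
\;+\;
\max_{\substack{S \subseteq J_i \\ |S| \le \Gamma_i}}
\;\sum_{j \in S} \hat a_{ij}\,|x_j|
\;\le\; b_i .
```

The inner maximization admits a linear-programming dual, which is what allows the robust counterpart to remain a linear program.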
Further studies on polyhedral uncertainty sets have identified other opportunities to reduce conservatism. In [39] the authors observe that the single deviation band for each coefficient proposed in [31] may be too restrictive in practice, so that obtaining a higher resolution by partitioning the band into multiple sub-bands is advisable. They define, for each uncertain coefficient, a partition into sub-bands and, for each sub-band, maximum and minimum values of the parameter under uncertainty.
In [97], the author introduces the concept of decision-dependent uncertainty as an extension of the work presented in [31]. He identifies a practical drawback of the uncertainty set proposed in [31]: it is independent of the values of the decision variables, so that some decision vectors, for instance in combinatorial problems with few nonzero components, are more protected than others. He proposes a new model where the uncertain parameters belong to the image of multifunctions of the problem's decision variables. It can provide the same probabilistic guarantee as the uncertainty set defined in [31] while being less conservative. The feasibility set of the resulting optimization problem is in general nonconvex, so a mixed-integer programming reformulation is proposed for uncertainty sets with affine dependency on the decision variables.
Other authors explored safe convex approximations to develop new methods for constructing less conservative uncertainty sets, leveraging available historical data.
A prominent practical issue of uncertainty-set-based robust optimization is how to determine the set coefficients appropriately; they are in general assumed to be known from domain-specific knowledge. In the absence of first-principles knowledge, leveraging available historical data provides a practical way to characterize the distributional information. In fact, the true information available to the decision maker is historical data, which must be converted into an uncertainty set before the robust optimization approach can be implemented.
Adaptive robust optimization
As already seen, classical robust optimization problems are modeled under the assumption that all variables are “here-and-now” decisions. This means that they are fixed before any realization of the uncertainty, which leads to conservative solutions. In many cases, however, decision variables can be set only after the uncertainty is realized. This is the case, for instance, when decisions are made sequentially. Defining a set of variables as “wait-and-see” variables, fixed only after all or part of the uncertainty is revealed, can lead to less conservative solutions. The reduction in conservatism comes from more flexible decisions that can be adjusted according to the realized portion of the uncertain data.
A decision variable that can incorporate information already revealed by the uncertainty is also called an adaptive or adjustable decision variable, and the function that maps the revealed information to the implemented action is referred to as a decision rule. The resulting problem is called an adaptive or adjustable robust optimization (ARO) problem; the concept was first introduced to robust optimization in [19]. Adaptive robust optimization has many real-life applications. For surveys of the theme, see [126] and [52].
In [26] it is shown that, for continuous linear optimization problems, the gap between the adjustable and classical robust solutions is due to the fact that the classical robust formulation is inherently unable to model non-constraint-wise uncertainty and replaces any given uncertainty set U with a potentially much larger one. The adjustable robust optimization solution is, in fact, at least as good as the corresponding classical robust optimization solution. Different works have shown that, for some classes of problems, the optimal solution of classical robust optimization equals that of the adaptive variant.
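A generic two-stage adjustable robust problem can be sketched as follows (illustrative notation: here-and-now decisions $x$, wait-and-see decisions $y$ chosen after observing the realization $u$):

```latex
\min_{x \in X}\;
\max_{u \in \mathcal{U}}\;
\min_{y \in Y(x,u)}\;
c^{\top}x + d^{\top}y ,
```

whereas the classical robust formulation forces a single $y$ to be feasible for every $u \in \mathcal{U}$ simultaneously, which is the source of the gap between the two formulations.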

Table of Contents
1 Introduction
2 Theory review
2.1 Robust optimization
2.2 Uncertainty sets
2.3 Adaptive robust optimization
2.3.1 Exact solutions
2.3.2 Affine decision rules
2.3.3 Finite adaptability
2.3.4 Nonlinear decision rules
2.4 Objective function
3 Solving the bifurcated and non-bifurcated robust network loading problem with k-adaptive routing
3.1 Introduction
3.2 Literature review
3.2.1 Polyhedral uncertainty set
3.2.2 Routing
3.3 Problem definition
3.4 Iterative nested partitioning
3.5 Algorithm improvements
3.6 Implementation and results
3.6.1 Summary of results
3.6.2 Instances and implementation
3.6.3 Results
4 Exact Solution Algorithms for Minimizing the Total Tardiness with Processing Time Uncertainty
4.1 Introduction
4.2 MILP formulations
4.2.1 Disjunctive constraints formulation
4.2.2 Sequence-position formulation
4.2.3 Linear ordering formulation
4.2.4 Row-and-column generation algorithm
4.3 Branch-and-bound
4.3.1 Branching
4.3.2 Node selection
4.3.3 Bound and pruning
4.4 Worstcase evaluation
4.4.1 Dynamic programming
4.4.2 Heuristic
4.5 Dominance rules
4.6 Implementation and results
4.6.1 Implementation details
4.6.2 Comparative performance of the algorithms
4.6.3 Assessing the robustness
5 Min-max-min robustness for combinatorial problems with polyhedral uncertainty
5.1 Introduction
5.2 Formulations
5.2.1 Compact formulation
5.2.2 Extended formulation
5.3 Row-and-column generation algorithm
5.3.1 Compact Formulation
5.3.2 Extended Formulation
5.4 Local search heuristic
5.5 Algorithmic implementations
5.5.1 Compact formulation
5.5.2 Extended formulation
5.6 Computational experiments
5.6.1 Results
6 Distributionally robust fleet assignment problem
6.1 Introduction
6.2 Fleet assignment formulations
6.3 Two-stage distributional reformulation
6.3.1 Dimensionality reduction
6.3.2 Ambiguity set and first-order deviation moment functions
6.3.3 Affine decision rules
6.4 Data-driven ambiguity set
6.4.1 Support Set
6.4.2 Moment-based functions
6.5 Implementation and Results
6.5.1 Implementation details
6.5.2 Comparative performance of the formulations
7 Conclusions
7.1 Robust network loading problem
7.2 Robust scheduling problem
7.3 Min-max-min robustness
7.4 Distributionally robust fleet assignment problem
Bibliography