Abstract
This paper describes the challenge developed for the computational competition held in 2023 for the \(20^{\text {th}}\) anniversary of the Mixed Integer Programming Workshop. The topic of this competition was reoptimization, also known as warm starting, of mixed integer linear optimization problems after slight changes to the input data for a common formulation. The challenge was to accelerate the proof of optimality of the modified instances by leveraging information from the solving processes of previously solved instances, all while producing high-quality primal solutions. Specifically, we discuss the competition’s format, the creation of public and hidden datasets, and the evaluation criteria. Our goal is to establish a methodology for the generation of benchmark instances and an evaluation framework, along with benchmark datasets, to foster future research on the reoptimization of mixed integer linear optimization problems.
1 Introduction
The Mixed Integer Programming Workshop [3] is an annual single-track workshop highlighting the latest trends in mixed integer optimization and discrete optimization. Since the development of computational optimization tools is a crucial component of the mixed integer linear optimization community, the workshop established a computational competition in 2022 to encourage and recognize the development of novel practical techniques in software for solving mixed integer linear optimization problems (MILPs). The first edition of the competition focused on primal heuristics, i.e., finding good-quality primal solutions of general MILPs selected mainly from the MIPLIB 2017 benchmark library [19]. This paper discusses the second edition of the competition, held in 2023 [1]. From an organizational point of view, this edition involved not only the development of the competition topic, its structure, and an evaluation framework but also the creation of new benchmarking instances.
Traditional benchmarks for MILPs often focus on the performance of optimizing a given instance from scratch. In many practically relevant settings, however, MILP solvers are used to repeatedly solve a series of similar instances of the same type that differ only in slight modifications of the input data. This motivates the 2023 competition topic: the development of effective techniques for reoptimization, also known as warm starting, of MILPs in this setting.
In addition to the use case where practitioners solve the same MILP model with slightly perturbed input data, this setting also appears algorithmically:
- Row generation algorithms, for instance based on generalized Benders’ decompositions for single- and multilevel MILPs (e.g., bilevel optimization problems), solve MILP subproblems that vary only in the right-hand side vector across iterations, see, e.g., [12].
- Column generation algorithms, for instance based on a Dantzig-Wolfe decomposition for MILPs, solve subproblems (pricing problems) that vary only in the objective vector across iterations, see, e.g., [30].
- Scenario decomposition algorithms for stochastic MILPs solve subproblems that vary only in the scenario-dependent components both within and across iterations, see, e.g., [20].
- Primal heuristics, such as diving and neighborhood heuristics, may solve similar MILPs with varying input data both within and across calls to the heuristics, see, e.g., [18].
Despite the broad applicability of reoptimization of MILPs, the research in this area is limited. For example, [14] discusses a reoptimization algorithm for solving the shortest path problem with time windows in a dynamic programming setting, [7, 27] develop frameworks for reoptimization of combinatorial optimization problems and derive specific algorithms for certain classes of such problems, and [25, 30] discuss reoptimization techniques for general MILPs but only when specific components of the MILP vary from one instance to another. Most of the existing literature in this area either deals with specific applications or lacks the scalability required to be applied in practice in the reoptimization settings mentioned above.
This further motivated the competition topic. To evaluate reoptimization for the different settings mentioned above, the competition provided a set of MILP instance series, each of which comprised 50 related instances of the same size. For each instance series, the type of change that may occur and the names of varying columns and/or rows as applicable for this type are known.
The competition participants were asked to provide a general solver to optimize a series of related MILPs in sequential order, thereby reusing information from the previous runs in the series. The participants were free to build on any software that is available in source code and can be used freely for the evaluation of the competition. The intention of this competition was not to perform offline training on different types of applications but to reuse information from current and previous solving processes to accelerate the solution of future instances.
The remainder of the paper has the following structure. Section 2 introduces the competition dataset, mentions the two GitHub repositories in which this dataset and its generation scripts are available, discusses the dataset creation process and the two metrics used for this purpose, and provides detailed explanations of all series of instances. Section 3 discusses the competition’s evaluation criteria and presents a novel scoring function developed to measure the computational performance of the submissions. Finally, Sect. 4 presents the competition results, concluding remarks, and future outlook.
2 The dataset
The competition dataset consists of a set of MILP instance series of 50 related instances each. Each instance series is based on an MILP taken from a specific application or benchmark library in the literature. There are seven public and five hidden instance series with constant constraint matrices and three additional instance series with varying constraint matrices that were not part of the competition evaluation.
The instances in each series comply with the following specifications:
- the number of constraints, and the number, order, and meaning of variables remain the same across the instances in a series, and
- some or all of the following input can vary: objective function coefficients, variable bounds, constraint sides, and coefficients of the constraint matrix.
Due to the more challenging nature of constraint coefficient changes, the corresponding three series were not part of the official computational evaluation. They were nonetheless included to see whether some proposed approaches were applicable and efficient for these series. Table 1 summarizes all the instance series, where LO, UP, OBJ, LHS, RHS, and MAT denote the lower bound vector, upper bound vector, objective function vector, left-hand side vector, right-hand side vector, and constraint matrix, respectively, and the last column states the time limit imposed to solve one instance in the series.
To the best of our knowledge, there is currently no dedicated reference benchmark for the reoptimization of MILPs. We therefore created these series of instances based on various algorithms, applications, and existing MILP instances from the literature and made them available at the GitHub repository [1]. Furthermore, the scripts used to generate these series are available at another GitHub repository [2], offering a template for generating more and longer series for future research beyond the context of the competition.
To build a single series, we generated numerous instances satisfying the series requirements, solved them to optimality from scratch, and selected 50 suitable instances for the series by applying two metrics: first, the time taken to solve an instance to optimality, and second, the similarity between the varying components of the series, whenever applicable. The similarity between two vectors c and \(\bar{c}\) is defined as the cosine of the angle between the vectors, i.e., \(\text {sim}(c, \bar{c}) = \frac{c^\top \bar{c}}{\Vert c\Vert _2 \, \Vert \bar{c}\Vert _2},\)
see also [30]. For example, let c and \(\bar{c}\) be the objective function vectors of two instances. If the similarity between these vectors is very high, then the effort required to solve these instances to optimality without reoptimization is likely to be similar. Furthermore, most of the primal information generated in the solving process of the instance with the objective vector c will be valid for the instance with the objective vector \(\bar{c}\) (and vice versa).
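This cosine similarity can be computed in a few lines of Python; the function name is ours and not part of the competition tooling:

```python
import math

def similarity(c, c_bar):
    """Cosine of the angle between two vectors c and c_bar."""
    dot = sum(a * b for a, b in zip(c, c_bar))
    norms = math.sqrt(sum(a * a for a in c)) * math.sqrt(sum(b * b for b in c_bar))
    return dot / norms

# Two slightly perturbed objective vectors are nearly parallel:
similarity([1.0, 2.0, 3.0], [1.1, 2.0, 2.9])  # close to 1
```

A similarity close to 1 indicates near-parallel vectors; identical directions give exactly 1, and orthogonal vectors give 0.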
We chose the time limit for an instance in a given series based on the individual solving times of the 50 instances in this series when solving from scratch. Then, the scoring function discussed in Sect. 3 penalized any techniques that could not solve an instance to optimality within this time limit. We now discuss the details of each instance series.
bnd_series_1 This series is based on the instance rococoC10-001000 from the MIPLIB 2017 benchmark library [22]. The instances were generated by perturbing the upper bounds of general integer variables selected via a discrete uniform distribution up to \(\pm 100\%\) of the bound value.
bnd_series_2 This series is based on the instance csched007 from the MIPLIB 2017 benchmark library [22]. The instances were generated via random fixings of 15% to 25% of the binary variables selected via a discrete uniform distribution w.r.t. the original instance.
bnd_series_3 This series is also based on the instance csched007 from the MIPLIB 2017 benchmark library [22]. The instances were generated via random fixings of 5% to 20% of the binary variables selected via a discrete uniform distribution w.r.t. the original instance. Accordingly, these instances are harder to solve than those in the series bnd_series_2, as indicated by the time limits in Table 1.
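The fixing-based generation behind bnd_series_2 and bnd_series_3 can be sketched as follows; the function and its interface are hypothetical, intended only to illustrate the sampling scheme (the actual scripts are available in [2]):

```python
import random

def fix_random_binaries(binary_vars, low=0.15, high=0.25, seed=0):
    """Return a dict of variable fixings: a random fraction (drawn uniformly
    between `low` and `high`) of the binary variables is fixed to a random
    0/1 value, mimicking the generation of bnd_series_2."""
    rng = random.Random(seed)
    frac = rng.uniform(low, high)           # fraction of variables to fix
    k = round(frac * len(binary_vars))      # number of variables to fix
    chosen = rng.sample(binary_vars, k)     # distinct variables, uniform
    return {v: rng.randint(0, 1) for v in chosen}
```

For bnd_series_3, the bounds `low=0.05, high=0.20` would be used instead; fewer fixings leave a larger search space, which is consistent with those instances being harder.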
obj_series_1 This series is based on the stochastic multiple binary knapsack problem and the associated instance set introduced in [6]. The problem is modeled as a two-stage stochastic MILP and has the formulation
where \(Q(x) = \sum \limits _{\omega \in \Omega } p_\omega Q_{\omega }(x)\), with
where \(\omega \in \Omega \) denotes a scenario, \(p_\omega \) denotes the probability of scenario \(\omega \), and the second-stage objective vector \(q_{\omega }\) is random, following a discrete distribution with finitely many scenarios. We adapted the given dataset and generated instances by considering one scenario at a time. This resulted in a series of instances with one-third of the objective vector (corresponding to the y variables) varying across instances. Note that changes in the objective vector may also arise when dual decomposition [13] or linearization-based progressive-hedging-like methods [11] are applied to such two-stage stochastic MILPs.
obj_series_2 This series is based on the instance ci-s4 from the MIPLIB 2017 benchmark library [19]. The instances were generated via random perturbations and random rotations of the objective vector.
obj_series_3 This series is based on the UCI Machine Learning repository dataset magic [10]. The instances are subproblems of a column generation algorithm for improving decision trees [17]. The final set of instances was generated based on a submission that was received in response to a public call for additional datasets.
rhs_series_1 This series is based on the stochastic server location problem and the associated dataset proposed in [23]. The problem has the formulation
where \(Q(x) = \sum \limits _{\omega \in \Omega } p_\omega Q_{\omega }(x)\), with
where \(\omega \in \Omega \) denotes a scenario and \(p_\omega \) denotes the probability of scenario \(\omega \). We adapted the given dataset and generated instances by considering 25 scenarios at a time. This resulted in a series of instances with only the right-hand side vector of equality constraints varying across instances.
rhs_series_2 This series is based on a synthetic MILP and the associated dataset proposed in [21]. The problem has the formulation
We adapted the given dataset and generated instances by taking a convex combination of two different RHS vectors.
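The convex-combination scheme used for rhs_series_2 (and rhs_series_4) can be sketched as follows; the function and the choice of 50 evenly spaced weights are illustrative, the actual generation scripts are available in [2]:

```python
def convex_combination(b1, b2, lam):
    """RHS vector of a new instance as a convex combination of two base
    RHS vectors b1 and b2, with weight lam in [0, 1]."""
    assert 0.0 <= lam <= 1.0
    return [lam * u + (1.0 - lam) * v for u, v in zip(b1, b2)]

# One instance per weight, e.g. lam = i / 49 for i = 0, ..., 49,
# sweeping from b2 (lam = 0) to b1 (lam = 1):
series = [convex_combination([10.0, 0.0], [0.0, 10.0], i / 49) for i in range(50)]
```

By construction, consecutive instances in such a series have very similar right-hand sides, which matches the similarity criterion used for series selection.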
rhs_series_3 This series is based on the instance glass4 from the MIPLIB 2017 benchmark library [4]. The instances were generated by perturbing the non-negative LHS and RHS vector components selected via a discrete uniform distribution up to \(\pm 70\%\) of their values.
rhs_series_4 This series is also based on the synthetic MILP and the associated dataset proposed in [21]. We adapted the given dataset and generated instances by taking a convex combination of two different RHS vectors (different than the ones used for generating rhs_series_2).
mat_series_1 This series is based on the optimal vaccine allocation problem and the associated dataset proposed in [28]. The problem formulation is
where \(\omega \in \Omega \) denotes a scenario, \(p_\omega \) denotes the probability of the scenario \(\omega \), and M denotes a big-M parameter. We adapted the given dataset and generated instances by considering 500 scenarios at a time, giving a series of instances with varying constraint coefficients in the inequality constraints.
rhs_obj_series_1 This series is based on the hydro unit commitment (HUC) problem modeled as an MILP. Considering a fixed hydro valley, the input data that can change is restricted to the electricity prices, the inflows, and the initial and target water volumes in the reservoirs. These parameters appear only in the objective function or constraint sides. Thus, reoptimizing this problem is practically interesting because the great majority of the input data remains unchanged. Moreover, utility companies often solve the HUC problem as a subproblem of a decomposition method. Consequently, to converge to the optimal solution of the whole unit commitment problem, a HUC instance has to be solved at each iteration. A detailed mathematical formulation is available in [29].
rhs_obj_series_2 This series is based on a hydroelectric valley (Ain River) industrial use case in France. Six dams and their different turbines are modeled for the next four days with an hourly time step. The differences across instances are the electricity prices (in the objective function) and the varying flows of the different affluents (in the RHS vector) of the river Ain, which were collected from [5, 15]. The base instances were received in response to a public call for additional datasets. The final instances were generated by perturbing the LHS, RHS, and objective function vector components, selected via a discrete uniform distribution up to \(\pm 20\%\) of their values.
mat_rhs_bnd_series_1 This series is based on the MILP formulation of the multilevel supply chain of a fictitious cell phone company. A detailed description is available at [26].
mat_rhs_bnd_obj_series_1 This series of instances is also based on the HUC problem similar to the series rhs_obj_series_1, but every data component of the given MILP can vary here.
3 Evaluation criteria
This section discusses the criteria used to evaluate the proposed approaches for the competition. Participants were asked to provide a written report and code base corresponding to their submission. Then, the following two criteria were used for the final evaluation.
1. Novelty and scope: innovativeness of the approach and its general applicability w.r.t. the varying components and magnitude of their variation.
2. Computational excellence: ranking of the approach in terms of the performance score defined later in this section.
The written report had to include the following:
- description of the methods developed and implemented, including any citations to the literature and software used,
- computational results on the public instance series with constant constraint matrices,
- analysis with at least \(\texttt {reltime}\), \(\texttt {gap}\), and \(\texttt {nofeas}\) scores (defined later in the section) averaged over all 50 instances and additionally over the five batches of instances 1 to 10, 11 to 20, 21 to 30, 31 to 40, and 41 to 50, and
- any further analysis, including the applicability of the approach to the instance series with varying constraint matrices.
Participants were allowed to use any existing software available in source code and freely usable for the evaluation. A submission was required to solve the instances of one series sequentially in the order specified by the input files. It was not permitted to parse and analyze instances in one series ahead of time, i.e., while solving the \(i^\text {th}\) instance in the series, a submission could use only information from the first \(i-1\) instances. A submission was not allowed to modify the solution for an instance after moving on to solve the next instance. A submission had to run sequentially (1 CPU thread), use no more than 16 GB of RAM, and respect the total time limit for each instance series. Violations of the time limit for a single instance were penalized in the performance score.
Evaluating and comparing the performance of submissions necessitated the development of an appropriate scoring metric. Our goal was to create a comprehensive scoring system that takes into account various potential situations, ranked from the most favorable to the least favorable:
- the instance is solved to optimality within a certain runtime (shorter runtimes are preferred);
- the instance times out but provides an incumbent solution (smaller gaps are preferred);
- the instance times out without yielding any feasible solution, which is undesirable.
While some existing measures, such as the confined primal integral of [8], work across these different situations, we also sought to reflect the aforementioned hierarchy of favorability in our scoring. It was crucial to strike a balance; we wanted to penalize, to a certain extent, failure to prove optimality or to find a solution, but also aimed to avoid unduly restricting approaches that engage in an early exploration phase, which might intentionally deteriorate performance for the first few instances of a series. To achieve this balance, we established a scoring range:
- the instances solved to optimality within the time limit were assigned scores between 0 and 1,
- the instances that timed out with an incumbent solution received scores between 1 and 2, and
- the instances that timed out without any feasible solution were given a score of 3.
Specifically, let \(s = 1, 2, \dots , S\) denote the index of the instance series, each consisting of \(i = 1, 2, \dots , 50\) MILP instances. Then, the performance on a single instance (s, i) is measured via the scoring function
\[ f(s, i) = \texttt {reltime}(s, i) + \texttt {gap}(s, i) + \texttt {nofeas}(s, i), \]
where
\[ \texttt {reltime}(s, i) = \frac{t(s, i)}{T(s)} \]
is the solving time \(t(s, i)\) relative to the time limit \(T(s)\) of series s, and
\[ \texttt {gap}(s, i) = {\left\{ \begin{array}{ll} 0 &{} \text {if the instance is solved to optimality,}\\ \frac{|pb - db|}{\max \{|pb|, |db|\}} &{} \text {if an incumbent with primal bound } pb \text { exists,}\\ 1 &{} \text {otherwise,} \end{array}\right. } \qquad \texttt {nofeas}(s, i) = {\left\{ \begin{array}{ll} 1 &{} \text {if no feasible solution was found,}\\ 0 &{} \text {otherwise,} \end{array}\right. } \]
with \(pb\) and \(db\) denoting the primal and dual bounds at termination.
Smaller scores f(s, i) are better. In our view, this scoring framework allows for a nuanced evaluation, rewarding efficiency while maintaining fairness across different methodologies. Note that the transition between solving an instance and timing out with a small gap is “smooth” in the sense that the score approaches 1 from below as the solve time gets close to the time limit and 1 from above as the final gap approaches 0 for unsolved instances. In contrast, the \(\texttt {nofeas}\) term adds a fixed penalty for not producing a primal solution.
For technical reasons, we also had to address the situations where submissions exceed the time limit or stop a solution process prematurely before reaching the time limit. If the time limit is exceeded, reltime will be larger than 1. Submissions that stop before the time limit without reaching zero gap will receive reltime = 1. These considerations were mainly taken to close loopholes for the competition, such as intentionally ignoring a time limit to gather more information or aborting unpromising runs to keep the score low.
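The score hierarchy and the edge cases just described can be sketched in Python. The exact formula is our reconstruction from the ranges stated above (solved: scores in (0, 1]; timeout with incumbent: (1, 2]; no feasible solution: 3), not the official evaluation code:

```python
def score(solve_time, time_limit, primal_bound=None, dual_bound=None, solved=False):
    """Per-instance score f(s, i): in (0, 1] if solved, in (1, 2] on a
    timeout with an incumbent, and 3 on a timeout without a feasible solution."""
    reltime = solve_time / time_limit
    if solved:
        return reltime  # approaches 1 from below near the time limit
    # Stopping early without proving optimality still counts as reltime = 1;
    # exceeding the time limit yields reltime > 1 (closing both loopholes).
    reltime = max(reltime, 1.0)
    if primal_bound is None:
        return reltime + 1.0 + 1.0  # gap = 1 plus a fixed nofeas penalty of 1
    gap = abs(primal_bound - dual_bound) / max(abs(primal_bound), abs(dual_bound))
    return reltime + gap  # approaches 1 from above as the gap closes
```

For example, solving at half the time limit scores 0.5; timing out with a 10% gap scores 1.1; timing out without any solution scores 3.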
Then, for each instance (s, i), all participants were ranked according to their score f(s, i), each team receiving a rank r(s, i), where smaller is better. Teams with the same score received the same rank. For instances for which the primal solution was not feasible or the dual bound was not valid, a team received two times the worst possible rank, independent of its f(s, i) value.
Following the motivation of the challenge to reward methods that use information from previous solving processes in order to gain performance, we assigned a gradually increasing weight to later instances in each series, i.e., we computed the final score as \(C = \sum _{s = 1}^{S} \sum _{i = 1}^{50} (1 + 0.1 i)\,r(s, i)\). The lower this final score, the better.
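The weighted aggregation can be written compactly; `ranks[s][i-1]` below is an assumed data layout holding a team's rank r(s, i) on instance i of series s:

```python
def final_score(ranks):
    """Final score C = sum over series s and instances i of (1 + 0.1*i) * r(s, i).
    ranks is a list of per-series rank lists; lower C is better."""
    return sum(
        (1 + 0.1 * i) * r
        for series in ranks
        for i, r in enumerate(series, start=1)
    )

# A worse rank on a later instance costs more than on an earlier one:
final_score([[1, 2]])  # 1.1*1 + 1.2*2 = 3.5
final_score([[2, 1]])  # 1.1*2 + 1.2*1 = 3.4
```

The increasing weights 1.1, 1.2, ..., 6.0 make the later instances of each 50-instance series count up to roughly five times more than the first, rewarding methods that accumulate and exploit information over the series.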
4 Results and outlook
The jury selected one winner and awarded one honorable mention from a total of twelve registrations and six final submissions containing both a written report and implementation code.
The winning submission, entitled Progressively Strengthening and Tuning MIP Solvers for Reoptimization by Krunal Kishor Patel and described in detail in [24], convinced the jury not only through its computational excellence, displayed by the top-ranked performance on almost all public and hidden datasets, but also through its broad applicability and attention to algorithmic detail. Building on the existing open-source LP-based branch-and-bound solver SCIP [9], the approach distinguished itself by targeting multiple aspects of the solving process in combination, reusing primal information and pseudo costs and improving parameter configurations online despite the limited number of observations.
An honorable mention was awarded to the submission Influence branching for learning to solve mixed integer programs online by the team of Paul Strang, Zacharie Ales, Come Bissuel, Olivier Juan, Safia Kedad-Sidhoum, and Emmanuel Rachelson. The jury was impressed by the successful adaptation of influence branching [16] to the reoptimization setting of the competition. The submission employed online hyperparameter tuning of different influence models via multi-armed bandit selection and consistently performed well on both public and hidden datasets.
With the created instance series available on the repository [1] and presented in Sect. 2, and their extensibility based on the generation scripts in [2], we hope to offer the research community a set of benchmarks to foster and evaluate future research efforts towards reoptimization. We aim to continuously develop [2] as a benchmarking library in the long run for the research on the reoptimization of MILPs. Accordingly, we are expanding the current dataset by generating additional series of instances and making them available to the research community via this repository.
References
MIP computational competition 2023. https://github.com/ambros-gleixner/MIPcc23/. Accessed: 1 Jun 2024 (2023)
MIP computational competition 2023 dataset generation scripts. https://github.com/sbolusani/MILP-WS-Lib. Accessed: 1 Jun 2024 (2023)
Mixed integer programming workshop series. https://www.mixedinteger.org/#mipworkshops
Achterberg, T., Koch, T., Martin, A.: MIPLIB 2003. Oper. Res. Lett. 34(4), 361–372 (2006). https://doi.org/10.1016/j.orl.2005.07.009
Andréassian, V., Delaigue, O., Perrin, C., Janet, B., Addor, N.: CAMELS-FR: A large sample, hydroclimatic dataset for France, to support model testing and evaluation. In: EGU General Assembly Conference Abstracts (2021). https://doi.org/10.5194/egusphere-egu21-13349
Angulo, G., Ahmed, S., Dey, S.S.: Improving the integer L-shaped method. INFORMS J. Comput. 28(3), 483–499 (2016). https://doi.org/10.1287/ijoc.2016.0695
Ausiello, G., Bonifaci, V., Escoffier, B.: Complexity and approximation in reoptimization. In: Computability in Context: Computation and Logic in the Real World, pp. 101–129. World Scientific (2011). https://doi.org/10.1142/9781848162778_0004
Berthold, T., Csizmadia, Z.: The confined primal integral: a measure to benchmark heuristic MINLP solvers against global MINLP solvers. Math. Program. 188(2), 523–537 (2021). https://doi.org/10.1007/s10107-020-01547-5
Bestuzheva, K., Besançon, M., Chen, W.K., Chmiela, A., Donkiewicz, T., van Doornmalen, J., Eifler, L., Gaul, O., Gamrath, G., Gleixner, A., Gottwald, L., Graczyk, C., Halbig, K., Hoen, A., Hojny, C., van der Hulst, R., Koch, T., Lübbecke, M., Maher, S.J., Matter, F., Mühmer, E., Müller, B., Pfetsch, M.E., Rehfeldt, D., Schlein, S., Schlösser, F., Serrano, F., Shinano, Y., Sofranac, B., Turner, M., Vigerske, S., Wegscheider, F., Wellner, P., Weninger, D., Witzig, J.: Enabling research through the SCIP Optimization Suite 8.0. ACM Trans. Math. Softw. (2023). https://doi.org/10.1145/3585516
Bock, R.: MAGIC gamma telescope. UCI Mach. Learn. Repos. (2007). https://doi.org/10.24432/C52C8B
Boland, N., Christiansen, J., Dandurand, B., Eberhard, A., Linderoth, J., Luedtke, J., Oliveira, F.: Combining progressive hedging with a Frank-Wolfe method to compute Lagrangian dual bounds in stochastic mixed-integer programming. SIAM J. Optim. 28(2), 1312–1336 (2018). https://doi.org/10.1137/16M1076290
Bolusani, S., Ralphs, T.K.: A framework for generalized Benders’ decomposition and its application to multilevel optimization. Math. Program. 196(1–2), 389–426 (2022). https://doi.org/10.1007/s10107-021-01763-7
Carøe, C.C., Schultz, R.: Dual decomposition in stochastic integer programming. Oper. Res. Lett. 24(1), 37–45 (1999). https://doi.org/10.1016/S0167-6377(98)00050-9
Desrochers, M., Soumis, F.: A reoptimization algorithm for the shortest path problem with time windows. Eur. J. Oper. Res. 35(2), 242–254 (1988). https://doi.org/10.1016/0377-2217(88)90034-3
Gestore dei Mercati Energetici: Historical data day ahead market (2022). https://www.mercatoelettrico.org/En/download/DatiStorici.aspx
Etheve, M., Alès, Z., Bissuel, C., Juan, O., Kedad-Sidhoum, S.: Reinforcement learning for variable selection in a branch and bound algorithm. In: E. Hebrard, N. Musliu (eds.) CPAIOR2020: Integration of Constraint Programming, Artificial Intelligence, and Operations Research. Springer, Cham pp. 176–185 (2020). https://doi.org/10.1007/978-3-030-58942-4_12
Firat, M., Crognier, G., Gabor, A.F., Hurkens, C., Zhang, Y.: Column generation based heuristic for learning classification trees. Comput. Oper. Res. 116, 104866 (2020). https://doi.org/10.1016/j.cor.2019.104866
Gamrath, G., Berthold, T., Heinz, S., Winkler, M.: Structure-driven fix-and-propagate heuristics for mixed integer programming. Math. Program. Comput. 11(4), 675–702 (2019). https://doi.org/10.1007/s12532-019-00159-1
Gleixner, A., Hendel, G., Gamrath, G., Achterberg, T., Bastubbe, M., Berthold, T., Christophel, P.M., Jarck, K., Koch, T., Linderoth, J., Lübbecke, M., Mittelmann, H.D., Ozyurt, D., Ralphs, T.K., Salvagnin, D., Shinano, Y.: MIPLIB 2017: data-driven compilation of the 6th mixed-integer programming library. Math. Program. Comput. 13(3), 443–490 (2021). https://doi.org/10.1007/s12532-020-00194-3
Hassanzadeh, A.: Two-stage stochastic mixed integer optimization. Ph.D. thesis, Lehigh University (2015)
Jiménez-Cordero, A., Morales, J.M., Pineda, S.: Warm-starting constraint generation for mixed-integer optimization: a machine learning approach. Knowl.-Based Syst. 253, 109570 (2022). https://doi.org/10.1016/j.knosys.2022.109570
Koch, T., Achterberg, T., Andersen, E., Bastert, O., Berthold, T., Bixby, R.E., Danna, E., Gamrath, G., Gleixner, A.M., Heinz, S., Lodi, A., Mittelmann, H., Ralphs, T., Salvagnin, D., Steffy, D.E., Wolter, K.: MIPLIB 2010: mixed integer programming library version 5. Math. Program. Comput. 3, 103–163 (2011). https://doi.org/10.1007/s12532-011-0025-9
Ntaimo, L.: Disjunctive decomposition for two-stage stochastic mixed-binary programs with random recourse. Oper. Res. 58(1), 229–243 (2010). https://doi.org/10.1287/opre.1090.0693
Patel, K.K.: Progressively strengthening and tuning MIP solvers for reoptimization. Math. Program. Comput. (2024). https://doi.org/10.1007/s12532-024-00253-z
Ralphs, T., Güzelsoy, M.: Duality and warm starting in integer programming. In: The proceedings of the 2006 NSF design, service, and manufacturing grantees and research conference (2006). http://coral.ie.lehigh.edu/~ted/files/papers/DMII06.pdf
SAP SE or an SAP affiliate company: MILP benchmarks cellphoneco. (2023). https://github.com/SAP-samples/ibp-sop-benchmarks-milp-cellphoneco
Schieber, B., Shachnai, H., Tamir, G., Tamir, T.: A theory and algorithms for combinatorial reoptimization. Algorithmica 80, 576–607 (2018). https://doi.org/10.1007/s00453-017-0274-8
Tanner, M.W., Ntaimo, L.: IIS branch-and-cut for joint chance-constrained stochastic programs and application to optimal vaccine allocation. Eur. J. Oper. Res. 207(1), 290–296 (2010). https://doi.org/10.1016/j.ejor.2010.04.019
Thomopulos, D., d’Ambrosio, C., van Ackooij, W., Stéfanon, M.: Generating hydro unit-commitment instances. TOP Off. J. Span. Soc. Stat. Oper. Res. (2023). https://doi.org/10.1007/s11750-023-00660-w
Witzig, J.: Reoptimization techniques in MIP solvers. Master’s thesis. TU Berlin (2014)
Acknowledgements
We wish to thank all participants in the competition for their submissions, Daniel Bienstock for providing us with computational infrastructure in order to evaluate the submissions, Dimitri Kniasew from SAP for submitting one hidden series of supply chain network planning instances, Felipe Serrano for discussing initial suggestions for the topic of the competition, and Domenico Salvagnin for participating in the final evaluation process.
Funding
Open Access funding enabled and organized by Projekt DEAL. The work for this article has been partly conducted within the Research Campus MODAL funded by the German Federal Ministry of Education and Research (BMBF grant number 05M14ZAM), supporting S. Bolusani and M. Besançon. J. Paat was supported by a Natural Sciences and Engineering Research Council of Canada Discovery Grant [RGPIN-2021-02475]. G. Muñoz received financial support from the Chilean National Agency for Research and Development (ANID) through PIA/PUENTE AFB230002.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Bolusani, S., Besançon, M., Gleixner, A. et al. The MIP Workshop 2023 Computational Competition on reoptimization. Math. Prog. Comp. 16, 255–266 (2024). https://doi.org/10.1007/s12532-024-00256-w