Breakthrough Submission: Vehicle Routing

As some of you may have noticed, our first Breakthrough Rewards submission was made public today for Vehicle Routing.

Please see here: Breakthrough Evidence

There are two code submissions that embody the method described above:

Code 1

Code 2

You are invited to examine the evidence and code and to prepare for the token-weighted vote on the submission's eligibility for breakthrough rewards, which will open at the beginning of the next round (round 50). Voting closes at the end of round 50.


At a high level, this solver essentially stitches together several well-known VRP heuristics and improvement techniques into one pipeline rather than inventing an entirely new paradigm. Here are the relevant parts that show it is a combination of established ideas:

Savings-based construction
The code uses a structure reminiscent of the classic Clarke & Wright savings algorithm. You can see this in the creation of an initial “savings list” (the pairwise combination of nodes and their associated savings value) and then merging routes if it is capacity-feasible and beneficial. This is a direct nod to the standard savings approach to VRP.
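For reference, the savings step this mirrors can be sketched as follows (a hedged Rust reconstruction with illustrative helper names, not the submission's actual code; `dist` is a full symmetric distance matrix with the depot at index 0):

```rust
// Minimal sketch of the classic Clarke & Wright savings construction step.
fn savings_list(dist: &[Vec<f64>]) -> Vec<(f64, usize, usize)> {
    let n = dist.len();
    let mut savings = Vec::new();
    for i in 1..n {
        for j in (i + 1)..n {
            // s(i, j) = d(0, i) + d(0, j) - d(i, j): the distance saved by
            // serving i and j on one route instead of two depot round trips.
            let s = dist[0][i] + dist[0][j] - dist[i][j];
            savings.push((s, i, j));
        }
    }
    // Merges are then attempted in descending order of savings,
    // subject to capacity feasibility.
    savings.sort_by(|a, b| b.0.partial_cmp(&a.0).unwrap());
    savings
}

fn main() {
    let dist = vec![
        vec![0.0, 4.0, 5.0, 3.0],
        vec![4.0, 0.0, 2.0, 6.0],
        vec![5.0, 2.0, 0.0, 7.0],
        vec![3.0, 6.0, 7.0, 0.0],
    ];
    // Customers 1 and 2 are close together and far from the depot,
    // so merging them yields the largest saving.
    println!("{:?}", savings_list(&dist));
}
```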

Parametric reweighting of savings
There is a mechanism for perturbing or tweaking parameters params[i] and params[j] to recalculate savings. This is effectively a small “neighbor generation” step (similar to local search or metaheuristics like Simulated Annealing or Iterated Local Search), where we tweak parameters that influence the construction/merge process.

Acceptance criterion using probability
The portion:

else if rng.gen::<f32>() < (-delta / scaling_factor).exp() {
    current_params = neighbor_params;
}

is reminiscent of the Metropolis criterion in Simulated Annealing. Although not a full-blown SA framework (there’s no explicit temperature schedule here), it’s clearly adopting a “sometimes accept worse moves” approach from known metaheuristics.
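In isolation, that acceptance rule can be sketched like this (a hedged reconstruction; a toy xorshift PRNG stands in for the `rand` crate's `rng.gen::<f32>()` so the example is self-contained):

```rust
// Toy PRNG standing in for rand's rng.gen::<f32>(); seed must be nonzero.
struct XorShift(u64);

impl XorShift {
    fn next_f32(&mut self) -> f32 {
        self.0 ^= self.0 << 13;
        self.0 ^= self.0 >> 7;
        self.0 ^= self.0 << 17;
        // Map the top 24 bits to a uniform value in [0, 1).
        (self.0 >> 40) as f32 / (1u64 << 24) as f32
    }
}

// Metropolis-style acceptance: always accept improving moves (delta <= 0),
// accept worsening moves with probability exp(-delta / scaling_factor).
fn accept(delta: f32, scaling_factor: f32, rng: &mut XorShift) -> bool {
    delta <= 0.0 || rng.next_f32() < (-delta / scaling_factor).exp()
}

fn main() {
    let mut rng = XorShift(42);
    // Improving moves are always accepted; large worsening moves almost never.
    println!("{}", accept(-1.0, 10.0, &mut rng));
    println!("{}", accept(5.0, 10.0, &mut rng));
}
```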

Local Search (2-Opt + Inter-Route Swaps)
After building or perturbing solutions, the code uses classical VRP local-improvement heuristics:

  • 2-Opt (intra-route improvement).
  • Swap moves (inter-route improvement).

Both of these are standard, well-established VRP improvement operators.
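For concreteness, the intra-route operator can be sketched as a standard 2-Opt pass (an illustrative reconstruction, not the submission's code; `route` includes the depot 0 at both ends):

```rust
// Intra-route 2-Opt: reverse the segment route[i+1..=j] whenever replacing
// edges (a,b) and (c,d) with (a,c) and (b,d) shortens the route.
fn two_opt(route: &mut Vec<usize>, dist: &[Vec<f64>]) {
    let mut improved = true;
    while improved {
        improved = false;
        for i in 0..route.len().saturating_sub(3) {
            for j in (i + 2)..route.len() - 1 {
                let (a, b) = (route[i], route[i + 1]);
                let (c, d) = (route[j], route[j + 1]);
                if dist[a][c] + dist[b][d] + 1e-9 < dist[a][b] + dist[c][d] {
                    route[i + 1..=j].reverse();
                    improved = true;
                }
            }
        }
    }
}

fn main() {
    // Unit square: depot 0=(0,0), customers 1=(0,1), 2=(1,0), 3=(1,1).
    let s = 2f64.sqrt();
    let dist = vec![
        vec![0.0, 1.0, 1.0, s],
        vec![1.0, 0.0, s, 1.0],
        vec![1.0, s, 0.0, 1.0],
        vec![s, 1.0, 1.0, 0.0],
    ];
    let mut route = vec![0, 1, 2, 3, 0];
    two_opt(&mut route, &dist);
    println!("{:?}", route); // the crossing edges are uncrossed
}
```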

Iterative multi-start structure
The solver tries multiple initial solutions (different seeds or initial values). Then it picks and refines whichever turned out best. That multi-start pattern is yet another known strategy in heuristic design.

Putting these observations together, it’s fair to say:

  • This algorithm is not “new” in the sense of introducing a novel VRP algorithm.
  • It is a hybrid combining different established techniques (Savings Algorithm + local search + partial acceptance of worse solutions, etc.) in one framework.

Seems like an output of ChatGPT

EDIT: Just to be clear, these points are already considered in the evidence, which explains the difference between previous studies and this new technique.
I will just highlight some points that are misinterpreted in this post.

The post lumps the “parametric reweighting” in with standard local search tweaks, but it misses that the solver uses node-specific parameters, not a single or a handful of global parameters. This difference is non-trivial: each node’s merging preference is adaptively tuned, which shapes the entire route structure in ways not seen in typical “one-parameter” expansions of Clarke–Wright. The post calls it “a small ‘neighbor generation’ step,” but it is actually the core engine for reconstructing and merging routes in each iteration.

By calling it “not a full-blown SA framework,” the post glosses over the synergy between parameter acceptance and immediate local search improvements. Even though there is no explicit “temperature schedule,” the exponential acceptance of new parameter sets is integrated within the solution reconstruction cycles, and this dynamic interplay is key. The post stops at “it’s reminiscent of SA,” ignoring how the method re-invokes savings merges with the updated parameters and then systematically applies multi-route local search. That cyclical process isn’t a standard, off-the-shelf pattern.

The post acknowledges that the local search operators are standard (which is true), but it fails to mention that the solver aggregates all possible swaps across routes, ranks them, and applies them in a “best-improvement-first” pass that prevents conflicting moves within one iteration. This aggregator approach can drastically reduce the number of partial or contradictory local improvements, effectively making the local search global in scope for each iteration. It is not just “standard local search thrown in.”

The post suggests a typical multi-start or multi-iteration pattern (“tries multiple initial solutions and picks the best”). But the real approach does more than just random restarts: it continuously updates the node parameters, re-scores merges, and re-applies local search. Dismissing it as “multi-start structure” overlooks how deeply the constructive and improvement phases are integrated.

In conclusion, almost any new heuristic can be described as a combination of “building blocks.” Novelty in combinatorial optimization often arises from how those building blocks are interconnected (e.g., a cyclical parameter reconstruction approach integrated with multi-route local search). The post does not engage with the possibility that the synergy or architecture itself is distinct.


Thoughts on the Algorithmic Submission for a Capacitated Vehicle Routing Problem (CVRP) Solver

High-Level Algorithm Flow

The solver operates as a multi-run, iterative metaheuristic:

  1. Parametrized Clarke–Wright Construction: A modified Clarke–Wright savings algorithm, where each customer has a unique parameter influencing route formation, constructs an initial solution.
  2. Parameter Perturbation: Each customer’s parameter is slightly adjusted to explore the solution space.
  3. Solution Improvement: Local search, employing 2-Opt and inter-route swaps, optimizes the solution built with the new parameters.
  4. Parameter Acceptance: Uses a Boltzmann-like rule akin to Simulated Annealing to accept or reject the new parameter set, guiding the search in parameter space.

Two distinct runs are performed, starting with node parameters set to 1.0 and 2.0 respectively; the solver retains the best global solution across both runs.


Detailed Logic and Code Steps

Initialization and Feasibility Checks

  • Feasibility Heuristic: A quick estimate checks if the instance is overly constrained. If infeasible, the solver aborts. (This is a TIG-specific optimization.)
  • Parameter Initialization: Runs are executed twice, initializing all node parameters to 1.0 in the first run, and 2.0 in the second.

Constructing a Route via Savings

  1. Create Initial Savings List:
    Following the Clarke–Wright logic, for each pair of distinct customers (i, j) , the code stores a potential “merge” record if the distance \text{dist}(i, j) is below half the maximum pairwise distance (a filtering step).

  2. Recompute and Sort Savings:
    A classical Clarke–Wright-like formula is used, but weighted by per-node parameters. In essence:

    \text{Savings}(i,j) = (\text{params}[i] + \text{params}[j])\bigl(d_{0i} + d_{j0} - d_{ij}\bigr)

    The list is then sorted in descending order of \text{Savings}(i, j) .

  3. Create a Solution:
    Each customer begins in its own route. Merges are attempted if capacity is not violated and the customers belong to different routes. Routes are finally wrapped with depot 0 at both ends.
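Steps 1–2 above can be sketched as follows (illustrative names, and the distance-filtering step is omitted for brevity; this is a reconstruction, not the submission's code):

```rust
// Parametrized savings:
// Savings(i,j) = (params[i] + params[j]) * (d_{0i} + d_{j0} - d_{ij}).
// With all params equal, this reduces to a scaled classic Clarke-Wright savings.
fn weighted_savings(dist: &[Vec<f64>], params: &[f64]) -> Vec<(f64, usize, usize)> {
    let n = dist.len();
    let mut savings = Vec::new();
    for i in 1..n {
        for j in (i + 1)..n {
            let base = dist[0][i] + dist[j][0] - dist[i][j];
            savings.push(((params[i] + params[j]) * base, i, j));
        }
    }
    // Sorted in descending order of savings before merging.
    savings.sort_by(|a, b| b.0.partial_cmp(&a.0).unwrap());
    savings
}

fn main() {
    let dist = vec![
        vec![0.0, 4.0, 5.0, 3.0],
        vec![4.0, 0.0, 2.0, 6.0],
        vec![5.0, 2.0, 0.0, 7.0],
        vec![3.0, 6.0, 7.0, 0.0],
    ];
    // Raising node 3's parameter boosts the rank of merges involving it.
    println!("{:?}", weighted_savings(&dist, &[1.0, 1.0, 1.0, 2.0]));
}
```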

Local Search (Postprocessing)

  • Intra-route 2-Opt: Eliminates crossing edges to shorten routes.
  • Inter-route Swaps: Enumerates beneficial swaps between routes, sorting them by descending improvement. A best-first aggregator applies each feasible swap without conflicts.
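The aggregator idea can be illustrated with a simplified single-customer-swap version (a hedged sketch with names of my own choosing; capacity checks and the exact move set of the submission are omitted):

```rust
// Cost of one route (depot 0 assumed at both ends).
fn route_cost(route: &[usize], dist: &[Vec<f64>]) -> f64 {
    route.windows(2).map(|w| dist[w[0]][w[1]]).sum()
}

// Best-improvement-first aggregator: collect all improving inter-route
// single-customer swaps, rank them by gain, and apply each one only if
// neither route was already modified in this pass.
fn swap_aggregator(routes: &mut [Vec<usize>], dist: &[Vec<f64>]) {
    // (improvement, route_a, pos_a, route_b, pos_b)
    let mut moves: Vec<(f64, usize, usize, usize, usize)> = Vec::new();
    for ra in 0..routes.len() {
        for rb in (ra + 1)..routes.len() {
            for pa in 1..routes[ra].len() - 1 {
                for pb in 1..routes[rb].len() - 1 {
                    // Recomputing full route costs is O(n) per candidate;
                    // fine for a sketch, a real solver would use deltas.
                    let before =
                        route_cost(&routes[ra], dist) + route_cost(&routes[rb], dist);
                    let (mut a, mut b) = (routes[ra].clone(), routes[rb].clone());
                    std::mem::swap(&mut a[pa], &mut b[pb]);
                    let gain = before - route_cost(&a, dist) - route_cost(&b, dist);
                    if gain > 1e-9 {
                        moves.push((gain, ra, pa, rb, pb));
                    }
                }
            }
        }
    }
    // Rank by descending improvement, then apply conflict-free moves.
    moves.sort_by(|x, y| y.0.partial_cmp(&x.0).unwrap());
    let mut touched = vec![false; routes.len()];
    for (_, ra, pa, rb, pb) in moves {
        if !touched[ra] && !touched[rb] {
            let tmp = routes[ra][pa];
            routes[ra][pa] = routes[rb][pb];
            routes[rb][pb] = tmp;
            touched[ra] = true;
            touched[rb] = true;
        }
    }
}

fn main() {
    // Depot 0=(0,0); customers 1=(1,0), 2=(2,0), 3=(0,1), 4=(0,2).
    let (s2, s5) = (2f64.sqrt(), 5f64.sqrt());
    let dist = vec![
        vec![0.0, 1.0, 2.0, 1.0, 2.0],
        vec![1.0, 0.0, 1.0, s2, s5],
        vec![2.0, 1.0, 0.0, s5, 2.0 * s2],
        vec![1.0, s2, s5, 0.0, 1.0],
        vec![2.0, s5, 2.0 * s2, 1.0, 0.0],
    ];
    // Each route mixes the two "arms"; one swap untangles them.
    let mut routes = vec![vec![0, 1, 4, 0], vec![0, 3, 2, 0]];
    swap_aggregator(&mut routes, &dist);
    let total: f64 = routes.iter().map(|r| route_cost(r, &dist)).sum();
    println!("total cost after aggregator pass: {total}");
}
```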

Parameter Tuning and Acceptance

  1. Neighbor Generation: Slightly perturb each node’s parameter within \pm 0.05k , clamping to [1,2] . Increase k if the search stalls.

  2. Rebuild + Postprocess: Re-sort the savings list using updated parameters, rebuild routes, then apply local search.

  3. Acceptance Criteria: If the new solution is cheaper, accept it. Otherwise, accept with probability \exp(-\Delta / \beta) , where \beta \approx 0.005 \times (\text{current cost}) . This step is analogous to simulated annealing.[1]

  4. Updating the Best Solution: If accepted and better than the current best, save it. Each run terminates after 200 iterations or if an early stop is triggered.
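Steps 1 and 3 can be sketched as follows (a hedged reconstruction; the `rng` closure stands in for whatever uniform sampler the solver actually uses, and all names are illustrative):

```rust
// Step 1: jitter every node parameter within ±0.05·k and clamp to [1, 2].
// `rng` should yield uniform samples in [0, 1); `k` grows when the search stalls.
fn perturb(params: &[f64], k: f64, rng: &mut impl FnMut() -> f64) -> Vec<f64> {
    params
        .iter()
        .map(|&p| {
            let step = (rng() * 2.0 - 1.0) * 0.05 * k;
            (p + step).clamp(1.0, 2.0)
        })
        .collect()
}

// Step 3: accept a costlier rebuilt solution with probability exp(-delta/beta),
// where beta scales with the current cost (beta ≈ 0.005 * current_cost).
fn accept(delta: f64, current_cost: f64, uniform: f64) -> bool {
    delta <= 0.0 || uniform < (-delta / (0.005 * current_cost)).exp()
}

fn main() {
    let mut half = || 0.5; // a degenerate "sampler" for demonstration
    println!("{:?}", perturb(&[1.0, 1.5, 2.0], 1.0, &mut half));
    println!("{}", accept(-3.0, 1000.0, 0.9));
}
```

Note how the acceptance scale tracks the current cost, so the rule behaves consistently across instances of different magnitudes, playing the role a temperature schedule would in classic SA.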

Multiple Start Values and Global Best

Once each run (starting at parameter 1.0 and 2.0) is complete, the best solution of the two is returned as the global best.


Relevant Literature and Similar Approaches

Clarke–Wright Savings Heuristic

Clarke and Wright[2] introduced the Savings heuristic, a foundation for numerous VRP approaches. Many variants add a shape parameter (\lambda) or employ multi-parameter expansions to accommodate factors such as demand. However, they typically apply one or two \lambda-like parameters globally[3][4][5][6][7]. In contrast, this solver assigns distinct parameters to individual nodes.

Randomised Savings / GRASP-Style Approaches

Randomised Savings methods[8][9] and GRASP-like (Greedy Randomised Adaptive Search Procedure) heuristics[10][11] typically inject randomness directly into the merge selection process. Here, the code indirectly randomises merges by altering per-node parameters.
GRASP can be thought of as a multi-start local search in which each initial solution is generated by a greedy randomised heuristic and improved by local search, similar to the logic followed in this code.

Simulated Annealing or Metropolis Acceptance

Simulated Annealing[12] has been widely applied to VRPs, often accepting worse solutions probabilistically to escape local minima[13][14]. The current solver follows a similar principle but applies it at the parameter level: the entire solution is reconstructed using new node parameters and optimised before deciding on acceptance. Deterministic annealing variants, as in [15], use a similar acceptance formula based on the cost.

Perturbation to Explore Solution Space

A key strategy in the literature is to perturb problem data so that conventional construction heuristics, like Clarke & Wright, can avoid becoming trapped in suboptimal configurations. For instance, [16] proposed a hybrid GA that lightly randomises customer coordinates, akin to using node-specific multipliers rather than altering real-space coordinates. Both approaches rely on small stochastic shifts to steer solutions away from local optima, aligning with the “noising” concept[17][18] of perturbing instance data to broaden exploration. While [16:1] modifies coordinates and this code adjusts per-node parameters, both exploit the principle that shifting input data can yield significantly different (and often better) routes.

Iterated Local Search / Multi-start

Iterated Local Search for CVRP repeatedly perturbs a locally optimal solution, applies local search to find a new local optimum, and then uses an acceptance criterion to decide whether to move to the new solution or stay with the previous one, ultimately aiming to explore a wider solution space and find better solutions.
This solver’s repeated parameter perturbations, route reconstruction, local search and acceptance reflect the core ideas of Iterated Local Search (ILS)[19] or multi-start heuristics, albeit with a parameter-space focus rather than direct route manipulations.

Aggregator-Based Multi-Route Swaps

The best-first aggregator for inter-route swaps is reminiscent of Large Neighborhood Search (LNS)[20] and Variable Neighborhood Search (VNS)[21], which also seek to find larger-scale improvements. Filtering the savings list by distance (only merges below half the max pairwise distance) parallels large-scale VRP strategies that emphasize “promising” edges[18:1].

Main papers of interest

The paper by Pichpibul & Kawtummachai [22] enhances the classical Clarke–Wright (CW) heuristic by randomly reordering its savings list through a two-phase probabilistic selection. By avoiding the purely greedy ordering, it escapes local minima more effectively. A separate route post-improvement step (consisting of simple move and swap operations, both intra- and inter-route) further refines solutions.

The paper by Morgan and Mumford [16:2] perturbs the traditional CW heuristic by altering customer coordinates before applying conventional heuristics. Conceptually, this mirrors per-node parameter tuning, since both methods inject randomness into the underlying savings calculations. By repeating these slight variations, solutions are diversified and local optima avoided, an idea closely related to “noising” and multi-start frameworks.


Potential Novel and Inventive Features

  1. Per-Node Parameters: A separate parameter for each node offers fine-grained control, surpassing typical global-parameter expansions.
  2. Parameter-Space SA Acceptance: The solver accepts a newly rebuilt solution based on a Boltzmann-like probability, but at the parameter level.
  3. Global Aggregator Local Search: Ranks all feasible inter-route swaps in descending order of improvement, applying multiple high-gain moves in one pass.
  4. Integrated Approach: Combining node-wise parameters, local search, and SA-style acceptance in a unified scheme. Its effectiveness hinges on how well these components synergize to produce competitive CVRP solutions relative to existing methods.

Summary

This research presents a new solver for the Capacitated Vehicle Routing Problem (CVRP) that combines a constructive heuristic (modified Clarke–Wright) with a metaheuristic framework (parameter-space simulated annealing) that incorporates local search. The solver iteratively perturbs parameters, reconstructs the solution using the constructive heuristic, applies local search (2-Opt and inter-route swaps) to improve the reconstructed solution, and then uses a Simulated Annealing-like acceptance criterion on the parameters to guide the search.

The main potentially novel and inventive innovations include:

  1. Node-Specific Parametrization: Leverages classical savings but extends it with per-node parameters instead of a single global parameter. Each node’s parameter is perturbed to diversify route construction.
  2. Acceptance in Parameter Space: A simulated annealing-like acceptance criterion operates on the parameters, not directly on the route. This parameter-space acceptance is not typically found in traditional route-based Simulated Annealing.
  3. Aggregator-Based Local Search: The solver performs both intra-route 2-Opt and an aggregator-based multi-route swap that batches and ranks potential inter-route exchanges in descending order of cost savings, enabling multiple high-gain moves in a single pass.

Whether these features rise to the level of a “breakthrough” depends on how effectively they outperform classical heuristics and whether the synergy can be demonstrated to be a true advance versus a recognizable extension of known methods. However, the code’s node-based parameter tuning, aggregator local search, and cost-scaling acceptance represent an interesting and possibly novel integration within the CVRP heuristic toolbox.



  1. S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, “Optimization by simulated annealing,” Science, 220(4598):671–680, 1983. ↩

  2. G. Clarke and J. W. Wright, “Scheduling of vehicles from a central depot to a number of delivery points,” Operations Research, 12(4):568–581, 1964. ↩

  3. T. J. Gaskell, “Bases for Vehicle Fleet Scheduling,” Journal of the Operational Research Society, 18(3):281–295, 1967. ↩

  4. P. C. Yellow, “A Computational Modification to the Savings Method of Vehicle Scheduling,” Journal of the Operational Research Society, 21(2):281–283, 1970. ↩

  5. H. Paessens, “The savings algorithm for the vehicle routing problem,” European Journal of Operational Research, 34(3):336–344, 1988. ↩

  6. İ. K. Altınel and T. Öncan, “A new enhancement of the Clarke and Wright savings heuristic for the capacitated vehicle routing problem,” Journal of the Operational Research Society, 56(8):954–961, 2005. ↩

  7. T. Doyuran and B. Çatay, “A robust enhancement to the Clarke–Wright savings algorithm,” Journal of the Operational Research Society, 62(1):223–231, 2011. ↩

  8. A. A. Juan, J. Faulin, R. Ruiz, B. Barrios, and S. Caballé, “The SR-GCWS hybrid algorithm for solving the capacitated vehicle routing problem,” Applied Soft Computing, 10(1):215–224, 2010. ↩

  9. T. Pichpibul and R. Kawtummachai, “An improved Clarke and Wright savings algorithm for the capacitated vehicle routing problem,” ScienceAsia, 38(3):307–318, 2012. ↩

  10. P. Festa and M. G. C. Resende, “Hybrid GRASP heuristics,” in Foundations of Computational Intelligence Volume 3: Global Optimization, pp. 75–100. Springer, 2009. ↩

  11. A. Layeb, M. Ammi, and S. Chikhi, “A GRASP Algorithm Based on New Randomized Heuristic for Vehicle Routing Problem,” Journal of Computing and Information Technology, 21:37–48, 2013. ↩

  12. S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, “Optimization by simulated annealing,” Science, 220(4598):671–680, 1983. ↩

  13. I. H. Osman, “Metastrategy simulated annealing and tabu search algorithms for the vehicle routing problem,” Annals of Operations Research, 41(4):421–451, 1993. ↩

  14. M. Gendreau, A. Hertz, and G. Laporte, “A tabu search heuristic for the vehicle routing problem,” Management Science, 40(10):1276–1290, 1994. ↩

  15. M. Baranwal, P. M. Parekh, L. Marla, S. M. Salapaka, and C. L. Beck, “Vehicle routing problem with time windows: A deterministic annealing approach,” in 2016 American Control Conference (ACC), pp. 790–795, IEEE, 2016. ↩

  16. M. Morgan and C. Mumford, “Capacitated vehicle routing: perturbing the landscape to fool an algorithm,” in Proc. IEEE Congress on Evolutionary Computation (CEC), pp. 2271–2277, 2005. ↩ ↩ ↩

  17. R. H. Storer, S. D. Wu, and R. Vaccari, “New search spaces for sequencing problems with application to job shop scheduling,” Management Science, 38(10):1495–1509, 1992. ↩

  18. I. Charon and O. Hudry, “The noising method: a new method for combinatorial optimization,” Operations Research Letters, 14(3):133–137, 1993. ↩ ↩

  19. H. R. Lourenço, O. Martin, and T. Stützle, “Iterated local search,” in Handbook of Metaheuristics, F. W. Glover and G. Kochenberger (eds), pp. 321–353, Springer, 2003. ↩

  20. S. Ropke and D. Pisinger, “An adaptive large neighborhood search heuristic for the pickup and delivery problem with time windows,” Transportation Science, 40(4):455–472, 2006. ↩

  21. N. Mladenović and P. Hansen, “Variable neighborhood search,” Computers & Operations Research, 24(11):1097–1100, 1997. ↩

  22. T. Pichpibul and R. Kawtummachai, “New enhancement for Clarke–Wright savings algorithm to optimize the capacitated vehicle routing problem,” European Journal of Scientific Research, 78(1):119–134, 2012. ↩


Assessment – UAI c002 b001 Adaptive Savings Algorithm for the CVRP: Breakthrough_Discovery_Assessment_CVRP.pdf (151.5 KB)

@syebastian


First of all, I want to thank everyone who shared their research results in this thread. Special thanks to Aoibheann for the very detailed research and for highlighting the factors that indicate the novelty of the method. Indeed, the heart of this approach is assigning a parameter to each node and searching for an effective set of parameters to obtain the optimal value. In this light, while John’s research mentions this, for some reason he overlooks it as a novel aspect of this method compared to other similar ones.

I agree that benchmark A is not the most reliable, and that to get an objective picture we should ideally have results for benchmark X (Uchoa). I want to remind you that a breakthrough primarily describes the novelty of the method, while leaving room for various implementation variations and, consequently, for bringing the result to the desired gap.

At the moment, I can provide results for benchmark X for instances in the range [100, 256], as the work on the algorithm was focused on this range. Please note that 1) the algorithm has not been submitted yet (it will be submitted at the end of the round), and 2) there is still room for improvement, and achieving optimal results is quite possible using the novelty of this method.


Please note that the algorithm finds optimal and near-optimal values within the desired gap of below 0.2%. With enough time and dedication, it can be polished to achieve a much lower average gap, especially considering I only spent a few days on this (enhanced CW). Regardless of what changes might be made in the future, the core algorithm remains the same, which is further proof of why it deserves to be considered a breakthrough.


Dear syebastian,

Thank you for your prompt response and for providing additional benchmarking results. I previously prepared the report included in John’s previous answer, so I am taking the opportunity to reply directly here to ensure a more direct discussion.

To clarify, my evaluation was conducted following peer-review standards, focusing on both novelty and performance relative to the current state-of-the-art (SOTA) published research and open-source algorithms. With many open-source implementations available for the CVRP, a breakthrough should, in my opinion, demonstrate improvements over this SOTA.

On Novelty. I have carefully studied the parameter adaptation mechanism you propose. As highlighted in my earlier report, “ant colony optimization” metaheuristics, popular in Operations Research in the 2000s, inherently include similar mechanisms for search parameter adaptation. For example, in [1], the method maintains a matrix \xi[i,j] of attractiveness values for each possible (i,j) customer pair. The construction of solutions uses a probabilistic variant of the savings algorithm, where merging (i,j) pairs with higher \xi[i,j] values is more likely. The parameters \xi[i,j] evolve through the search: those leading to better solutions are reinforced while others gradually diminish. Your proposed mechanism shares close conceptual similarities with these approaches, though its adaptation is restricted to fewer parameters. Overall, while there are differences in implementation, the core idea of adaptive parameters guiding solution construction in the savings algorithm is similar.

On Performance. Thank you for benchmarking the algorithm on the smaller instances of Set X. The error gaps of 1 to 2% you report (on the smaller instances) are noteworthy. Still, this might fall short of a breakthrough in this domain, in my view. This is in line with the comment of Aoibheann stating:

Whether these features rise to the level of a “breakthrough” depends on how effectively they outperform classical heuristics and whether the synergy can be demonstrated to be a true advance versus a recognizable extension of known methods

As noted in my previous report, current SOTA methods often achieve error gaps below 0.2% on the complete X set (that includes larger and harder instances), and multiple open-source methods achieving 1-2% gap are already available. While the results you presented are promising, especially in the short time frame dedicated to this development, getting closer to SOTA performance would strengthen your claims. Yet, this might require more extensive method redesign rather than simple fine-tuning, as stated.

Overall, I greatly appreciate the time and effort you have dedicated to this benchmarking. It is great to see all the efforts and dedication put into better solving the CVRP in this project!

[1] Reimann, M., Doerner, K. F., & Hartl, R. F. (2004). D-Ants: Savings based ants divide and conquer the vehicle routing problem. Computers & Operations Research, 31(4), 563–591.


Hello vidalt.
First, thank you for taking the time to do this research. Regarding the statement about novelty: similar components may have existed in other works, as many people have worked on solving this problem for years; nevertheless, there are different variations with different results, and the version I presented has not been seen anywhere else. Moreover, it is quite clear why the variants proposed before mine did not achieve success. The version with global parameters does not bring a significant quality improvement, while the version with probabilities for each pair has scalability issues. I presented an idea that can handle both small and large instances.

Second, regarding the results, I probably did not quite understand what you meant, because Enhanced CW consistently shows a gap from 0% to 1% (not from 1% to 2% as in your response), and only for some instances goes beyond this range. But this only indicates that the implementation is incomplete, not that the idea lacks quality. If you look at the implementation, the exploration process is completely randomized: no feedback, no reinforcement of reliable parameters, and no non-linear randomization. The local search itself can also be improved and better adapted to this idea. For example, one of my implementations found optimal values for small sets of nodes; another was more balanced for any set of nodes. The potential behind this is enormous, and everything comes down to having enough time to search for the perfect scheme for exploring the neighborhood parameters. But whatever implementations may exist in the future, everything is built around one concept, and it is this concept that deserves the breakthrough title.
