Upcoming Challenge: Hypergraph Partitioning

Thank you for the response, Aoibheann. I'm glad to hear that you're actively working to address the issue with greedy algorithms. I also read that the reliability factor is going to become part of the project's foundation soon, so hopefully all new challenges will adopt it.

I do understand where you're coming from about the baseline, and it makes sense that it needs to be efficient and consistent, especially given instance verification and Sybil defense. To be honest, I hadn't considered that in my earlier response.

That said, I still feel there's an important balance to strike. Even with the new reward system and fuel limits encouraging better approaches, there's always a risk that "innovators" focus on just barely beating a weak baseline rather than pushing toward genuinely high-quality solutions.

Would it make sense to benchmark the baseline against something stronger (like KaHyPar or another multi-level solver) to make sure the better_than_baseline factor actually challenges people to innovate? If the baseline is too weak, then even with the reward-function changes we might not be making full use of the computational resources now available.
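To make the concern concrete, here's a minimal sketch (in Python, purely illustrative; this is not the challenge's actual baseline) of the kind of naive greedy bipartitioner a weak baseline might resemble, along with the cut metric one could use to compare it against a stronger reference solver like KaHyPar:

```python
def cut_size(part, hyperedges):
    """Count hyperedges whose assigned pins span both parts."""
    cut = 0
    for edge in hyperedges:
        labels = {part[v] for v in edge if part[v] != -1}
        if len(labels) > 1:
            cut += 1
    return cut


def greedy_bipartition(num_nodes, hyperedges):
    """Assign nodes one at a time to whichever side increases the cut
    least, capping each side at ceil(n/2) to keep the partition balanced."""
    part = [-1] * num_nodes
    sizes = [0, 0]
    cap = (num_nodes + 1) // 2
    for v in range(num_nodes):
        costs = []
        for p in (0, 1):
            if sizes[p] >= cap:          # this side is already full
                costs.append((float("inf"), 0))
                continue
            part[v] = p                  # tentatively place v on side p
            costs.append((cut_size(part, hyperedges), sizes[p]))
        best = 0 if costs[0] <= costs[1] else 1
        part[v] = best
        sizes[best] += 1
    return part


# Tiny example: 4 nodes, 3 hyperedges.
edges = [{0, 1, 2}, {2, 3}, {1, 3}]
part = greedy_bipartition(4, edges)
print(part, cut_size(part, edges))
```

A greedy pass like this is fast and deterministic, which suits verification, but on realistic instances it typically leaves a cut that a multi-level solver with coarsening and refinement can beat substantially; that quality gap is exactly what I'd want measured before trusting the baseline to drive innovation.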

I look forward to seeing how this all plays out :slight_smile:
