I want to raise a concern about the current benchmarking dynamics in TIG.
From my perspective, many benchmarkers are naturally pushed toward a simple strategy: increase hardware capacity and keep running algorithm + hyperparameter combinations that are already known to be profitable. This is rational from the benchmarker's point of view, but I am not sure it maximizes the long-term innovation value of TIG.
The core issue is the balance between exploration and exploitation.
Right now, exploitation is much safer:

- use an already-proven algorithm;
- copy or converge toward known hyperparameters;
- scale compute;
- optimize for predictable short-term rewards.
Exploration is much riskier:

- test new algorithms that may fail;
- spend compute on hyperparameter search;
- compare many configurations across tracks;
- discover good settings that others can quickly copy once visible;
- receive no clear extra reward for being the first to do the hard tuning work.
This creates a possible problem: TIG may drift into a hardware race, where the main advantage becomes who can deploy more compute, not who can find better algorithms or better algorithm configurations.
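The imbalance above is the classic multi-armed bandit tradeoff. As a rough analogy only (the names and reward numbers below are hypothetical, not part of TIG), an epsilon-greedy policy makes the point: when the exploration rate is zero, the policy always repeats the best-known configuration and never discovers anything new.

```python
import random

def epsilon_greedy_choice(estimated_rewards, epsilon=0.1, rng=random):
    """With probability epsilon, explore a random configuration;
    otherwise exploit the one with the best estimated reward so far."""
    configs = list(estimated_rewards)
    if rng.random() < epsilon:
        return rng.choice(configs)                  # explore: try anything
    return max(configs, key=estimated_rewards.get)  # exploit: best known

# Hypothetical reward-per-compute estimates for three configurations.
estimates = {"proven_algo": 1.0, "new_algo_a": 0.2, "new_algo_b": 0.0}
picked = epsilon_greedy_choice(estimates, epsilon=0.2, rng=random.Random(0))
```

In TIG terms, today's incentives effectively set epsilon near zero for most benchmarkers: exploiting the proven configuration is always the safe move.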
I do not think hardware is useless. Compute is obviously necessary for solving challenges, securing the network, and producing benchmark data. But compute should serve algorithm discovery. If most compute is spent repeatedly running the same known algorithms with the same known parameters, then the system may not be extracting the maximum innovation value from the available hardware.
In my opinion, the more valuable direction for TIG would be to encourage benchmarkers to allocate more compute toward:
- testing newly submitted algorithms earlier;
- running systematic hyperparameter sweeps;
- comparing algorithms under equal fuel or runtime budgets;
- evaluating performance across different tracks;
- discovering algorithm + parameter combinations that improve quality per unit of compute.
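To make the "equal fuel budget" idea concrete, here is a minimal sketch of such a sweep. Everything here is illustrative: `run_config` stands in for whatever benchmark harness a benchmarker actually uses, and is not a real TIG API.

```python
import itertools

def sweep_equal_budget(run_config, grid, fuel_budget):
    """Run every hyperparameter combination under the same fuel budget
    and rank configurations by solution quality per unit of fuel."""
    results = []
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        quality, fuel_used = run_config(params, fuel_budget)  # one benchmark run
        results.append((params, quality / max(fuel_used, 1)))
    # Best quality-per-fuel first: this is the signal exploration produces.
    return sorted(results, key=lambda r: r[1], reverse=True)
```

The point of the equal budget is fairness of comparison: every configuration burns the same fuel, so the ranking reflects the algorithm + parameters, not how much compute was thrown at each.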
The problem is that the benchmarker who performs this exploration takes the cost and risk, while the result can often be copied by others. This creates a classic incentive mismatch: exploration is expensive and uncertain, while exploitation is safer and more immediately profitable.
Why this matters for TIG
TIG’s main value proposition is algorithmic innovation. Benchmarkers are not only miners; they are also the market mechanism that helps discover which algorithms are actually useful.
If benchmarkers mostly scale hardware instead of exploring the algorithmic search space, the signal becomes weaker.
A new algorithm may be strong, but if nobody tests it properly, or if nobody spends enough time tuning its hyperparameters, it may look bad or remain ignored. In that case, TIG could miss useful innovation simply because the incentives favor copying proven strategies over testing uncertain new ones.
Possible improvements
I am not suggesting that TIG should punish large benchmarkers or make compute less important. Instead, I think the protocol or ecosystem could add stronger incentives for exploration.
Possible ideas:

- Exploration rewards: reward benchmarkers who test newly added or low-adoption algorithms early, especially if they find competitive configurations.
- First-discovery credit for strong hyperparameter configurations: if a benchmarker is the first to find a strong algorithm + hyperparameter combination, they could receive temporary recognition or reward before the configuration becomes widely copied.
- Temporary privacy for discovered parameters: a benchmarker who first discovers an effective hyperparameter configuration could be allowed to keep those parameters hidden from public view for a limited period of time; for example, the parameters would become public only after a defined delay. This would create an additional reward for exploration: the benchmarker who spent compute and time finding a strong configuration gets a short window of advantage before others can copy it.
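TIG does not currently define a mechanism for the temporary-privacy idea; one standard way to implement it would be a commit-reveal scheme. The sketch below is only an illustration of that general technique, not a proposed TIG interface: the benchmarker publishes a hash of the configuration immediately (proving priority), and reveals the parameters themselves after the delay.

```python
import hashlib
import json

def commit(params, salt):
    """Publish only a hash of the configuration plus a secret salt;
    this proves priority without revealing the parameters."""
    payload = json.dumps(params, sort_keys=True) + salt
    return hashlib.sha256(payload.encode()).hexdigest()

def verify_reveal(commitment, params, salt):
    """After the delay, anyone can check the revealed parameters
    match the earlier commitment."""
    return commit(params, salt) == commitment
```

The salt prevents others from brute-forcing common configurations against the published hash during the privacy window.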
I believe this matters because TIG should not become only a hardware race. Hardware should be a tool for discovering better algorithms, not the final source of advantage by itself.
The goal should be: more compute, yes — but compute used intelligently across more algorithms, more tracks, and more hyperparameter combinations.
I would like to hear what other benchmarkers, innovators, and the TIG team think about this.