I wrote a post on the free, open-source tool BenchmarkDotNet, but I wanted to mention a common pitfall that you might fall into if you are not careful.
The warning is simple: when writing benchmarks, always use the results of your calculations.
Consider these benchmarks ...
```csharp
// N is a field on the benchmark class; it is 1,000,000 in the run below.

[Benchmark(Baseline = true)]
public void Sqrt1()
{
    // The result of Math.Sqrt is never used, so the JIT is free to remove the call.
    for (var ii = 0; ii < N; ++ii)
        Math.Sqrt(ii);
}

[Benchmark]
public double Sqrt2()
{
    // The result is assigned and returned, so the call cannot be eliminated.
    double result = 0;
    for (var ii = 0; ii < N; ++ii)
        result = Math.Sqrt(ii);
    return result;
}
```
If we look at the results of the above benchmarks ...
| Method | N | Mean | Error | StdDev | Ratio | RatioSD | Rank |
|---|---|---|---|---|---|---|---|
| Sqrt1 | 1000000 | 291.8 us | 5.79 us | 6.66 us | 1.00 | 0.00 | 1 |
| Sqrt2 | 1000000 | 4,006.9 us | 55.00 us | 51.45 us | 13.64 | 0.29 | 2 |
How can `Sqrt2` be almost 14 times slower than `Sqrt1`? The answer lies with the JIT compiler ...
SharpLab is another great tool that shows you what the JIT compiler will initially produce. In the case of `Sqrt1`, it compiles out the call as dead code: because the result of the square root is never used, there is no need to emit the instructions that perform it.
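To put it another way, once the unused call is removed, `Sqrt1` is effectively measuring an empty counting loop. Here is a simplified sketch of what remains (the method name is made up for illustration; the real JIT output is machine code, not C#):

```csharp
// Simplified sketch of what Sqrt1 effectively measures once the JIT has
// removed the unused Math.Sqrt call: an empty counting loop.
public void Sqrt1_AfterDeadCodeElimination()
{
    for (var ii = 0; ii < N; ++ii)
    {
        // Math.Sqrt(ii) is gone: its result was never read and the call has no side effects.
    }
}
```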
So be careful with how you write your benchmarks; otherwise you may not be measuring what you think you are measuring!
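One defensive pattern, beyond returning the last result as `Sqrt2` does, is to accumulate every iteration's result into the value you return, so no individual call can be treated as dead code. BenchmarkDotNet consumes the return value of a benchmark method, which is what keeps the work observable to the JIT. A minimal sketch, assuming the same `N` field as the benchmarks above (the method name `Sqrt3` is just for illustration):

```csharp
[Benchmark]
public double Sqrt3()
{
    // Accumulate every iteration's result so none of the Math.Sqrt calls
    // can be removed; the sum is returned and consumed by BenchmarkDotNet.
    double sum = 0;
    for (var ii = 0; ii < N; ++ii)
        sum += Math.Sqrt(ii);
    return sum;
}
```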