## Dealing With Rounding Errors in Numerical Unit Tests

Monday, December 1, 2008 – 4:00 AM

I’ve been writing some unit tests which attempt to verify some mathematical modeling results. Here’s a test that creates a physical model (of a universe) containing some stars and verifies the initial energy of the system.

```csharp
[Fact]
public void Energy()
{
    Universe target = new Universe(
        new TestSimpleUniverseInitializer(),
        new ForwardEulerIntegrator());
    target.Initialize();
    Assert.InRange(target.Energy(), 1.99713333333333, 1.99713333333334);
}
```

There are two problems with this test. Firstly, I had to work out the range bounds in my head. Secondly, the test’s intent isn’t expressed correctly: I don’t really mean a range, I mean equal within some margin of rounding error.

What I really want to say is that I expect the energy of the system to be equal to a value within a certain margin of error. Something like this:

```csharp
Assert.Equal(target.Energy(), 1.99713333333333, new ApproximateComparer(0.0000001));
```

It turns out that with xUnit’s **Assert.Equal** method I can specify my own **IComparer** to do just that. The **ApproximateComparer** is a new implementation of **IComparer<>** that returns an equality result for values that are within a margin of error and falls back to a standard **Comparer** result if not.

```csharp
public class ApproximateComparer : IComparer<double>
{
    public double MarginOfError { get; private set; }

    public ApproximateComparer(double marginOfError)
    {
        if ((marginOfError <= 0) || (marginOfError >= 1.0))
            throw new ArgumentException("...");
        MarginOfError = marginOfError;
    }

    public int Compare(double x, double y) // x = expected, y = actual
    {
        if (x != 0)
        {
            double margin = Math.Abs((x - y) / x);
            if (margin <= MarginOfError)
                return 0;
        }

        return new Comparer(CultureInfo.CurrentUICulture).Compare(x, y);
    }
}
```

I wrote this test-first using the **Theory** attribute provided by xUnit. I find theories really nice for developing this kind of code. They let you capture several similar edge cases within one test.

```csharp
public class ApproximateComparerTests
{
    [Theory]
    [InlineData(0.0)]
    [InlineData(-1.0)]
    [InlineData(1.0)]
    public void MarginMustBeBetweenZeroAndOne(double margin)
    {
        Assert.Throws<ArgumentException>(() =>
            { new ApproximateComparer(margin); });
    }

    [Theory]
    [InlineData(100.0, 100.0)]
    [InlineData(100.0, 101.0)]
    [InlineData(101.0, 100.0)]
    public void TwoNumbersAreEqualIfWithinOnePercent(double x, double y)
    {
        IComparer<double> target = new ApproximateComparer(0.01);
        Assert.Equal(x, y, target);
    }

    [Theory]
    [InlineData(100.0, 102.0)]
    [InlineData(102.0, 100.0)]
    [InlineData(100.0, 100.0)]
    public void ShouldBehaveLikeNormalComparerForNumbersOutsideTheMargin(
        double x,
        double y)
    {
        IComparer<double> target = new ApproximateComparer(0.01);
        Assert.Equal(new Comparer(CultureInfo.CurrentUICulture).Compare(x, y),
            target.Compare(x, y));
    }

    [Fact]
    public void ShouldBehaveLikeNormalComparerWhenComparingToZero()
    {
        IComparer<double> target = new ApproximateComparer(0.01);
        Assert.NotEqual(0.0, 0.0001, target);
    }
}
```

You can overdo this. Note how each set of inputs yields the same expected result; I’m simply testing edge cases. You can overcomplicate your use of theories if you start using them to provide inline data that specifies both the inputs and outputs of the test.

**Updated April 14th 2009:** I updated the code in this post to cope with the case where the expected value is zero. As you’ll see from the tests and the code, for an expected value of zero the approximate comparer reverts to the default comparer, and any non-zero actual value will evaluate as *not* equal.

The **ApproximateComparer** is best used for comparing non-zero values where the exact expected value isn’t known in advance. For example, you run two different calculations and expect them to agree within some margin of error. This is what I actually wrote it for.
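For example, a test that checks two integrators agree might look something like this (the **RungeKutta4Integrator** type and the **EnergyUsing** helper are hypothetical, for illustration only):

```csharp
[Fact]
public void IntegratorsShouldAgreeOnEnergy()
{
    // EnergyUsing and RungeKutta4Integrator are hypothetical;
    // substitute any two calculations of the same quantity.
    double eulerEnergy = EnergyUsing(new ForwardEulerIntegrator());
    double rungeKuttaEnergy = EnergyUsing(new RungeKutta4Integrator());

    // Equal if the two results agree to within 0.1%.
    Assert.Equal(eulerEnergy, rungeKuttaEnergy, new ApproximateComparer(0.001));
}
```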

You can also use the xUnit **InRange** and **NotInRange** assertions when the expected value is known. I still prefer the approximate comparer here, as I think the resulting code is slightly more expressive.

For comparisons that fail purely due to rounding – where the expected and actual values differ only because of the limits of numerical precision – consider something like the dnAnalytics **Precision.EqualsWithTolerance** approach (see Petrik’s comment below).
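For reference, here’s a minimal sketch of what an ulps-based check looks like. This is not the dnAnalytics implementation, and it ignores NaN, infinities and sign crossings that a real version must handle:

```csharp
// Equal if at most maxUlps representable doubles lie between x and y.
// Works because the IEEE 754 bit patterns of same-sign, finite doubles
// are ordered the same way as the magnitudes of the values themselves.
public static bool WithinUlps(double x, double y, long maxUlps)
{
    long xBits = BitConverter.DoubleToInt64Bits(x);
    long yBits = BitConverter.DoubleToInt64Bits(y);
    return Math.Abs(xBits - yBits) <= maxUlps;
}
```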

## 7 Responses to “Dealing With Rounding Errors in Numerical Unit Tests”

Hi Ade

This is a cool way of testing numeric values. I really have to look at what xUnit has to offer. Would you say it is better suited for numerical tests than NUnit etc?

Also I thought you might be interested to know that the dnAnalytics library (www.codeplex.com/dnanalytics) will have a Precision class in the next release (disclaimer I am a contributor to the dnAnalytics library). This class provides Equality and compare methods for floating point values. Comparisons can be made based on the number of significant decimals and on the number of floating point values between two numbers.

For instance your example could be written as:

```csharp
[Fact]
public void Energy()
{
    Universe target = new Universe(
        new TestSimpleUniverseInitializer(),
        new ForwardEulerIntegrator());
    target.Initialize();
    Assert.True(Precision.EqualsWithTolerance(target.Energy(), 1.99713333333333, 1));
}
```

This checks if target.Energy() is within one floating point value from 1.99713333333333 (which may or may not be exactly what you want).

Petrik

By Petrik on Dec 4, 2008

Hi Petrik,

I prefer xUnit.NET over NUnit largely because it’s written from the ground up to take advantage of lots of the newer features of .NET. NUnit started off around .NET 1.0 and has had these new things added over time.

I really like xUnit’s extensibility. This is the second testing challenge I’ve been able to solve simply by extending the framework (see the StrictFact attribute for the other one).

I’ll have to check out the dnAnalytics library when I get a chance.

Ade

By Ade Miller on Dec 4, 2008

The only problem with this approach is it’s a bit naive; what happens when x is 0.0? You divide by 0, the result is undefined (I got NaN once and Infinity another time, oddly enough), and the comparison returns false.

Petrik’s response probably uses a technique that exploits the representation of double in memory and is more general-purpose.

By Owen on Apr 14, 2009

Ew, it doesn’t work well if x is close to 0, either. For example, 0.01 and 0.0001 are obviously close. However, (x – y) / x is 0.99 in this case, which would require a much larger “margin of error”. It gets worse as x approaches 0.

However, this is irrelevant to what was the point of your post: that you can easily add an arbitrary comparison to xUnit.

By Owen on Apr 14, 2009

Owen,

Good point. The current code doesn’t deal with zero very well. Yes, the general point is that xUnit is extensible, but I know people copy/paste code, so I’ve updated the code and added some guidelines about where you might want to use the ApproximateComparer.

As you point out there’s nothing to stop you writing something that better fits your needs.
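One common pattern (a sketch, not from the original post) is to combine an absolute tolerance for values near zero with a relative tolerance elsewhere:

```csharp
// Hypothetical comparer, for illustration only: equal when the values
// are within either an absolute tolerance (useful near zero) or a
// relative one; otherwise orders the values like a normal comparer.
public class ToleranceComparer : IComparer<double>
{
    private readonly double absolute;
    private readonly double relative;

    public ToleranceComparer(double absolute, double relative)
    {
        this.absolute = absolute;
        this.relative = relative;
    }

    public int Compare(double x, double y)
    {
        double diff = Math.Abs(x - y);
        double scale = Math.Max(Math.Abs(x), Math.Abs(y));
        if (diff <= absolute || diff <= relative * scale)
            return 0;
        return x < y ? -1 : 1;
    }
}
```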

Ade

By Ade Miller on Apr 14, 2009

Just in case anyone is curious, if you are using the Microsoft.VisualStudio.QualityTools.UnitTest framework, **Assert.AreEqual** will accept a delta for comparison purposes…

```csharp
Assert.AreEqual(expected, actual, delta, message);
```

By Eric Malamisura on Nov 24, 2009