Day 262 – Meander

I lost track of time.
I need to go sleep so I can get to my gym routine tomorrow morning.

I think it’ll be a much more pleasant experience, because today after work I saw a physio, and he did some great work on the shoulder. The proof will be in the waking up un-stiff, but I feel optimistic based on how well the shoulder has been feeling this evening. There was the most massive of knots right on my shoulder blade. He tried to get it with pressure alone, but besides being very painful, that didn’t help much. He used some kind of electronic device to loosen the muscle and that worked remarkably well. Like magic.

Most of this evening has been spent browsing the net though. Some research for the new QNAP; I want to set up a VM or two for Atlassian tools, and along the way I discovered Vagrant, which sounds like a fun tool to explore. Then I got sucked into my news-feed, where it appears Microsoft is open-sourcing technologies left-right-and-centre… I’m not sure what has come over them, but they are having a brilliant shot at turning into the cool technology place to work.

I was planning to do some photography this weekend, but I may actually end up with my nose in the laptop instead.
Especially since everybody else will be doing their own things anyway, so I might as well.

Maybe Sunday will be for being social.

Planning and Learning

This morning I successfully acquired the right SIM for my new phone. The transaction at Virgin Mobile Blacktown was quick and painless. I will need to spend a few more days making sure I get everything I care about across from the Galaxy Nexus to the Nexus 5, but then I can wipe the former and dispose of it.

In an hour I have a 30 minute cycle class to break up the day.
And then…

I’m going to start on a Web 2.0 adventure. I’ve meant to learn some new skills and build something for too long now. Time to bite the bullet.

I have the outline of an idea for a set of related problems I want to solve (scratching your own itch has been proven the best place to start). I have VS2013 Beta on my machine (so an upgrade will be in order first). And I have a list of compatible technologies to explore.

I guess this also means my personal GitHub account will finally get some love.

Wish me luck!

Floating Point One-Pager

Broadly speaking this post will go into increasing technical detail so that you may stop reading at the point where you either:

  1. Lose Interest
  2. Find the Answer you are After
  3. Lose the Ability to Follow My Garbled Writing

Note that regardless, this is by no means an exhaustive exploration of the topic. There is some significant glossing-over-details; I have merely tried to put “The Rules” into some context, and point the way to some problems you might want to delve deeper into by reading the references at the bottom for yourself.

After studying and digesting articles for a week, the only thing I can be certain of is that I have never fully comprehended all the dangers inherent in floating point numbers. And I never will. And neither, probably, will you.

All we’ll be able to do is take as much care as possible, and hope for the best.

Now, let’s go limit the damage we all might do.

Into the lair of IEEE 754 we go!

The Rules

If you don’t want to think, if you just want some rules-of-thumb to blindly follow, this is the section for you.

  • Multiplications and divisions are safe
  • Series of additions and subtractions should be ordered to operate on values of the closest magnitude first where possible
  • Do not compare floating point numbers directly
  • Do not use “Epsilon” to compare floating point numbers either
  • Use bankers rounding (round-to-even) whenever possible
  • Run code as 64-bit executables whenever possible
  • Never make any assumptions about how many decimal places are “good”, no, not even 2!

Comparing the Wrong Way

Most, if not all, programmers know that directly comparing two floating point numbers for equality does not work. If they are almost-but-not-quite-completely the same number, the code will fail.

0.1 + 0.1 + … + 0.1 (ten times) != 1.0

Rounding errors along the way will kill any exact comparison.

The most common solution that gets suggested is to check if the absolute difference between two numbers is smaller than some tiny amount, and declare the numbers equal if that is the case:

if (Math.Abs(a - b) < 0.00001)
    ... NOTE: WRONG WAY TO DO IT! ...

if (Math.Abs(a - b) < Double.Epsilon)
    ... NOTE: WRONG WAY TO DO IT! ...

The former will work or not depending on the magnitude of “a” and “b”; if the two numbers are large relative to the “0.00001” bias it will work, but if they are smaller or even close to the bias itself, then the code will suddenly start failing.

The latter will probably never work; it is essentially the same as saying the two numbers need to be exactly equal, because “Double.Epsilon” is the smallest positive value a double can represent, and the only difference smaller than the smallest representable value is no difference at all.

Do not use either of these approaches.
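To see both failure modes concretely, here is a quick Python sketch; Python floats are the same IEEE 754 binary64 as .NET’s double, and naive_equal is a hypothetical name for the flawed pattern above:

```python
import math

def naive_equal(a, b, eps=0.00001):
    # The flawed fixed-bias comparison from above
    return abs(a - b) < eps

# Fine for values that are large relative to the bias...
print(naive_equal(1000.000001, 1000.000002))              # True

# ...but it calls clearly different tiny values "equal"...
print(naive_equal(1e-7, 2e-7))                            # True, yet b is double a!

# ...and calls adjacent large doubles "different"
print(naive_equal(1e12, math.nextafter(1e12, math.inf)))  # False
```

The bias is absolute, but error in floating point is relative; any fixed bias is simultaneously too lax for small magnitudes and too strict for large ones.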

Comparing using Relative Error

It is much better to do a comparison based on relative error, because the difference will scale with the inputs; the author of The Floating Point Guide recommends the following:

static bool RelativeCompare(double a, double b, double epsilon)
{
    // Shortcut for exact equality (also catches matching infinities);
    // note that NaN is never equal to anything, including itself
    if (a == b)
        return true;

    // Near zero, relative error is meaningless; compare against the
    // smallest normal double instead. (Careful: in .NET, double.MinValue
    // is the most negative double, not the smallest positive one.)
    const double MinNormal = 2.2250738585072014E-308;
    double diff = Math.Abs(a - b);
    if ((a == 0) || (b == 0) || diff < MinNormal)
        return diff < epsilon * MinNormal;

    // Otherwise use relative error
    return diff / (Math.Abs(a) + Math.Abs(b)) < epsilon;
}

See The Floating Point Guide for a more detailed explanation and a link to an extensive test-suite for this code.

Comparing using ULPs

The most advanced solution to comparing floating point numbers is based on a key trick; when you re-interpret IEEE 754 floating point numbers as integers, the difference between two of those integers represents how many different floating point numbers exist between the two that were converted.

If you want a bit more explanation I suggest reading the 2012 edition of Comparing Floating Point Numbers by Bruce Dawson.

A .NET implementation of the relevant algorithms:

// Slow but safe version
static long DoubleToLong(double d)
{
    return BitConverter.ToInt64(BitConverter.GetBytes(d), 0);
}

// Fast but "unsafe" alternative (requires /unsafe; use one or the
// other, since both cannot share the same name and signature)
static unsafe long DoubleToLongUnsafe(double d)
{
    return *((long*)&d);
}

static long UlpDistance(double a, double b)
{
    long intA = DoubleToLong(a);
    if (intA < 0)
        intA = Int64.MinValue - intA;

    long intB = DoubleToLong(b);
    if (intB < 0)
        intB = Int64.MinValue - intB;

    return Math.Abs(intA - intB);
}

static bool UlpCompare(double a, double b, long distance)
{
    return UlpDistance(a, b) <= distance;
}

The UlpDistance function returns the distance between two double precision floating point numbers. If this returns:

  • 0 – the floating point numbers are exactly identical
  • 1 – the floating point numbers are adjacent
  • 2 – there is one possible floating point representation between them
  • etc.

And by using the UlpCompare function you can indicate how close two floating point numbers must be to be considered equal, in a scale-independent way. In essence, the magnitude of the bias scales with the size of the values being compared.

For doubles, even a distance of a million ULPs is still an extremely tight relative tolerance (roughly one part in 10^10), so it is more than enough to reasonably establish equality.
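The same trick is easy to play with in Python; struct reinterprets the double’s 8 bytes, and the remapping of negative values mirrors the Int64.MinValue trick in the C# above:

```python
import struct

def double_to_ordered_int(d):
    # Reinterpret the IEEE 754 bits of a double as a signed 64-bit int...
    (i,) = struct.unpack("<q", struct.pack("<d", d))
    # ...and remap negative values so integer order matches double order
    return (-2**63) - i if i < 0 else i

def ulp_distance(a, b):
    return abs(double_to_ordered_int(a) - double_to_ordered_int(b))

print(ulp_distance(1.0, 1.0))            # 0: identical
print(ulp_distance(1.0, 1.0 + 2**-52))   # 1: adjacent doubles
print(ulp_distance(0.1 * 10, sum([0.1] * 10)))  # 1: one rounding step apart
```

Note the last line: multiplying and repeatedly adding 0.1 give results a single representable double apart, which an ULP comparison shrugs off.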

Problem Scenarios

Multiplication and Division

It may at first seem counter-intuitive that multiplication and division would not pose any problems. The reason for this is fairly simple though.

To illustrate, let’s work in decimal and pretend we have 4 significant digits in our floating point representation:

(2.131 × 10^2) × (1.943 × 10^-4)
= 4.140533 × 10^-2 (non-rounded interim value)
= 4.141 × 10^-2

Even when two floating point numbers have wildly different exponents, the act of multiplying or dividing simply shifts the exponent of the result. And on either side of the calculation we have a full complement of significant digits.

Addition and Subtraction

Addition and subtraction, by contrast, can be problematic when dealing with big differences in exponents, because the operands must first be aligned to a common exponent:

(2.131 × 10^2) + (1.943 × 10^-4)
= 213.1 + 0.0001943
= 213.1001943 (non-rounded interim value)
= 213.1 (rounded back to 4 significant digits)
= 2.131 × 10^2

With a single addition or subtraction this loss of precision is unavoidable, because at the end of the calculation everything has to fit again into a finite amount of precision.

However, if you add a large series of numbers it suddenly becomes important to think about your additions:

(2.131 × 10^2) + (1.943 × 10^-4) + … 999 more times the same small number
= 213.1 + 0.0001943 + … 999 more
= 213.1 + 0.0001943 + … 998 more (each addition rounds straight back to 213.1)
= …
= 2.131 × 10^2

Every time an addition is performed in the CPU, the result needs to fit within our floating point representation, and here every single one rounds straight back to the original value. If we instead perform the additions the other way around, summing the small values first:

(1.943 × 10^-4) + (1.943 × 10^-4) + … 998 more + (2.131 × 10^2)
= 0.0003886 + … 998 more + 213.1
= …
= 0.1943 + 213.1
= 213.2943
= 2.133 × 10^2

It’s not a big difference in this case, but depending on the number of additions involved and the relative magnitudes of the numbers it can make more or less of a difference. And all for the sake of re-ordering some additions.

If you were really keen you could probably develop some LINQ extensions in .NET that automatically re-order addition and subtraction sequences into the sequence in which the result is most accurate.

For now the point is: consider whether the values you are adding have wildly divergent magnitudes, and where possible try to order them to keep values with the same magnitude closer together to optimise for the most accurate result.
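A Python sketch of the effect (the specific magnitudes are my own illustration, but the doubles behave identically to .NET’s):

```python
# Adding a tiny value to a large running total: each addition rounds
# straight back, so a million of them vanish without a trace.
total = 1.0
for _ in range(1_000_000):
    total += 1e-16
print(total == 1.0)       # True: nothing stuck

# Summing the small values together first preserves them, and the one
# final addition is large enough to register against 1.0.
small = 0.0
for _ in range(1_000_000):
    small += 1e-16
print(small + 1.0 > 1.0)  # True: roughly 1.0000000001
```

Same inputs, same number of operations, very different results; only the order changed.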


Bankers Rounding

Have you ever used Math.Round in .NET? Were you a little confused at first? So was I.

You see, it uses Bankers Rounding by default (overloads exist where a rounding algorithm can be specified). When rounding 2.5, you’d usually expect to end up with 3, but Bankers Rounding doesn’t round all .5s equally. It rounds towards even, so 1.5 becomes 2, but 2.5 also becomes 2, then 3.5 becomes 4, and so on.

This is actually a very sensible default, but it isn’t well explained. Round-to-even is also the default rounding mode for IEEE 754 operations, for the same reasons.

The problem with “normal” rounding is that it rounds 4 values down (0.1 – 0.4) and it rounds 5 values up (0.5 – 0.9). For one rounding operation this isn’t too big a deal, but most calculations round multiple times along the way. The compounding effect of this rounding bias is that results can slowly creep upwards with every rounding.

Bankers Rounding however on average rounds just as much up as it does down. As a result repeating rounding along the path of a calculation will jitter up just as much as down and on average not introduce any bias.

If you have end-users that might be confused by Bankers Rounding, then try to restrict “normal” rounding only to the last operation before display and keep using Bankers Rounding internally.
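Python’s built-in round happens to use banker’s rounding as well, which makes the behaviour easy to demonstrate (Math.Round in .NET gives the same results by default):

```python
# Round-to-even: .5 ties go to the nearest even integer
print(round(1.5))   # 2
print(round(2.5))   # 2  -- not 3!
print(round(3.5))   # 4
print(round(4.5))   # 4
```

Over many roundings, the ties split evenly up and down instead of always inflating the result.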

Representational Error

Below in the technical bits there is an explanation why IEEE 754 floating point numbers cannot precisely represent the value 0.1; it results in an infinite binary expansion:

0.1 (decimal) = 1.1001100110011… (binary) × 2^-4

As a consequence it is safe to say that most real-world values cannot be accurately represented in double precision floating point numbers; as soon as you parse a value from a text file or a NUMBER from a database into a double, it is no longer completely accurate.

Basically, any real-world value becomes inaccurate before you even perform any operations on it.

The only reason this isn’t patently obvious all over the place is because floating point numbers tend to get rounded again before display, and in most cases this rounds away the error that this parsing introduces.

0.1 (decimal) becomes 0.10000000000000000555111… once it is parsed. Display routines will never show that many decimal places for a double, but it’s only a matter of a few calculations before that error creeps into decimals that do show up. Nine additions of the same number is all it takes to turn this into a real error.
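You can inspect the exact stored value from Python (the same IEEE 754 double as .NET), because the decimal module can render a double’s full binary expansion:

```python
from decimal import Decimal

# Display rounds to "0.1", but the stored double is slightly larger
print(0.1)
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```

Decimal(0.1) converts the binary value exactly, so what you see is precisely what the 64 bits hold.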

It is important to remember that errors are pervasive and unavoidable.

It’s all about strategically structuring code to limit the ways in which these errors can compound and inflate.

Loss of Scale Issues

This is a particularly insidious problem, because it tends to not be evident when a program is first written. It literally takes time for this problem to become evident, typically once a program is running in production, or long after a library is first written.

This problem exhibits itself when measuring elapsed time (since application start, or some fixed point in the past), or when data volumes might increase over time.

When the elapsed time is small, or calculated values operate on limited data, this means that the magnitude of values will also be small. As a result, a large amount of the precision of a float is still in the fractional part of the result.

As time passes, as data volumes grow, as the magnitude of results grows, less and less of the floating point precision is represented in the fractional part.

If you assume your values will always have 2, 3, 5, any number of accurate decimal places, you may one day be taken by surprise when that stops being true. A single-precision float has only about 7 significant decimal digits. Once the magnitude of your value goes into the millions, you cannot count on any accurate decimals. Doubles, at 15 significant digits, have more headroom, but it is still quite possible to run out.

And your calculations will start showing visible inaccuracies in any decimal places well before you hit the limits on the result.

The Technical Bits

Double precision floating point numbers are 64-bit numbers, laid out as follows:

Double precision floating point bits

The meaning of these bits is expressed in the formula:

value = (-1)^sign × 1.fraction (binary) × 2^(exponent − 1023)


  • The sign-bit is set for negative numbers
  • The fraction is applied as the binary “decimal” places in: 1.xxxxx…
  • The exponent is used to scale this fraction across 2^-1022 to 2^1023

As a result, double precision can roughly express:

  • 15 significant decimal places of precision
  • Values as small as ±5.0 × 10^-324
  • Values as large as ±1.7 × 10^308

Magic with Integers – 1

Less obviously, but more remarkably: this carefully designed binary format means that if you re-interpret two doubles of the same sign as 64-bit integers and subtract these integers, the difference will tell you how many distinct double precision floating point numbers lie in between. (0 = equal, 1 = none in between, 2 = 1 in between, etc.)

Conversely, if you increment or decrement the integer representation of a floating point number, it will give you the very next larger or smaller floating point number that can be represented.

Magic with Integers – 2

Between using the fraction bits with a leading 1 and an exponent of 1075 (corresponding to 2^52), and the de-normalized values including 0, doubles can contain exact representations of all integers from -(2^53) up to +(2^53). Within these bounds all arithmetic with integer inputs and results will be exact at all times.

Because of the 52-bit fraction though, once you pass 2^53, you will only be able to exactly represent even numbers. And past 2^54 only multiples of 4, and so on.

The distance between representable integers doubles past each power of 2.

Floating Point Spacing

This is also true in the opposite direction, where up to 2^52 all halves can be exactly represented, and up to 2^51 all quarters, and so on.
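These bounds are easy to check in Python (floats are binary64 there too):

```python
# Every integer up to 2**53 is exact; past it, only even numbers remain
x = float(2**53)
print(x + 1 == x)   # True: 2**53 + 1 has no representation, rounds back
print(x + 2 == x)   # False: 2**53 + 2 is representable

# And downwards: halves survive up to 2**52
print(float(2**52) - 0.5)   # 4503599627370495.5
```

The first result is the integer-domain version of the absorption problem from the addition section: the increment is smaller than the local spacing, so it simply disappears.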

If you were to plot marks of exactly representable integers in the double precision floating point domain you would have a ruler where towards the left, passing each power of two, the marks get twice as close together and to the right, passing each power of two, the marks get twice as distant.

Double Precision Floating Point Ruler

Representation Difficulties

The biggest problem with binary floating point numbers is that we live in a decimal world. Our reality has a nasty habit of thinking in terms of tenths, or hundredths, or thousandths.

Per the formula for floating point representations, something simple can prove impossible to represent:

0.1 (decimal) = 1.1001100110011… (binary) × 2^-4

To exactly represent a decimal 0.1, you need an infinitely recurring fraction in binary. As a result, the closest possible representation in binary that fits within a double translates to approximately 0.10000000000000000555111… in decimal.

And depending on how these tiny inaccuracies stack it might get better or worse.

If you multiply 0.1 by 10.0, you get an exactly represented 1 due to some fortuitous rounding. If you add 0.1 ten times however, you get something slightly below 1. The type and order of arithmetic operations can either make the problem worse or, purely coincidentally, make it vanish as if it never existed.
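Both outcomes, verified in Python (same binary64 doubles):

```python
total = 0.0
for _ in range(10):
    total += 0.1

print(0.1 * 10 == 1.0)   # True: one lucky rounding lands exactly on 1.0
print(total == 1.0)      # False: ten roundings drift one step below
print(total)             # 0.9999999999999999
```

One multiplication performs a single rounding; ten additions perform nine, and the accumulated error ends up one ULP short of 1.0.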

Internal Representation Trouble

If all this wasn’t difficult enough yet, in a well-meaning effort to improve accuracy, the x87 floating point instructions have 80-bit internal registers available. Arguably use of this extra precision does not harm the accuracy of calculations, but depending on when they do or do not get used, two applications executing the exact same sequence of operations may provide different results with no easy way to determine which of the two is the more accurate result.

It goes something like this:

  • Calculations that remain in internal registers use 80-bits
  • Except when they get written to/from memory, in which case they are rounded to 64-bit. When and whether this happens depends on how good the optimizer of the compiler is, and it may alter from version to version of the compiler itself.
  • Or if the FPU is switched into a mode where it forces 64-bit precision internally. From what I have read it is highly likely the .NET runtime does this, but none of the sources are committing to saying so outright.
  • Also, 64-bit executables avoid the problem altogether, because they use SSE instructions rather than x87, and SSE has never had and will never have 80-bit registers.

All-in-all, it is best to build applications as 64-bit executables to ensure repeatable behaviour. And while at it, switch to a 64-bit version of Excel, and hope that Microsoft does not decide to do anything clever internally.

These discrepancies are the most annoying in dev/test scenarios where developers and testers get different results. It can be very hard to be certain whether a difference is due to discrepancies in accuracy, or if either algorithm is incorrect.

Decimal To The Rescue!

Working in .NET, one possible solution is to use the Decimal type wherever possible.

The reason this helps has less to do with the fact that this type uses 96 bits to represent the digits, and a lot more to do with the exponent. The formula for Decimal is as follows:

(-1)^s × (96-bit integer number) × 10^-e, where e = 0–28

The exponent is applied to powers of 10, which means that commonly occurring real-world fractions can be exactly represented. If you deal with money, a decimal can contain 2 exact fractional digits.

The only down-side is performance; you pay for these awkward-for-computers-to-process powers of 10 with up to 20-fold reduction in performance for arithmetic operations. This is definitely a worst-case-scenario number, but regardless it is a steep price to pay.
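Python’s decimal module works the same way in spirit (a base-10 exponent, though with arbitrary precision rather than a 96-bit integer), which makes the contrast with binary doubles easy to show:

```python
from decimal import Decimal

d = Decimal("0.1")          # stored exactly, thanks to the base-10 exponent
print(sum([d] * 10) == 1)   # True: no drift, unlike the binary double

# Money stays exact to the cent
print(Decimal("1.10") + Decimal("2.20"))   # 3.30
```

The performance trade-off is the same there too: exact decimal arithmetic costs considerably more than hardware binary floating point.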


If you want to investigate further yourself, consider the following sources:

  1. Wikipedia: Double-precision floating-point format
  2. The Floating-Point Guide
  3. What Every Computer Scientist Should Know About Floating-Point Arithmetic
  4. IEEE 754-2008

On Reflection Performance in .NET

This must be one of the most covered topics in .NET; we all care about performance, and anyone familiar with .NET knows that the System.Reflection namespace sucks. I’ve written a library in the past to improve reflection performance, and recently I’ve been wondering what magic .NET 4 might make possible.

So, I decided to start from scratch with measurements. Extensive measurements on every alternative I could think of. And I learned a lot of unexpected things.

The Baseline

Throughout this post I will be using a very simple class to reflect over (I tried larger more complex classes just to make sure, but that had no significant impact on performance):

sealed public class Sample
{
    public int A { get; set; }
    public string B { get; set; }
}

The fastest implementation involves direct property accesses:

for (int repeats = 0; repeats < iterations; repeats++)
{
    sample.A = sample.A;
    sample.B = sample.B;
}

The slowest implementation involves .NET reflection:

PropertyInfo propA = typeof(Sample).GetProperty("A");
PropertyInfo propB = typeof(Sample).GetProperty("B");
for (int repeats = 0; repeats < iterations; repeats++)
{
    propA.SetValue(sample, propA.GetValue(sample, null), null);
    propB.SetValue(sample, propB.GetValue(sample, null), null);
}

Note that there are several environments that you could test this code in. Inside the IDE or directly from the Windows Explorer. As a debug build or as a release build. I would not be surprised if most performance tests are performed inside the IDE, and possibly even on a debug build. The following should explain why this is a mistake.

Baseline measurements (1m iterations)

                    Debug build              Release build
                    In IDE     Explorer      In IDE     Explorer
  Direct Access     0.0532 s   0.0279 s      0.0206 s   0.0038 s
  .NET Reflection   6.1271 s   4.9281 s      5.0297 s   5.0805 s

As you’d expect, running a debug build inside the IDE is clearly slower than any other environment. But even a release build inside the IDE is almost 5 times slower than running it directly from the shell. If you do performance measurements inside your IDE you’re just not doing it right.

So, realistically, the performance we’re concerned about is that 1:1337 ratio (I didn’t mess with the measurements to get that!) of the last column. I’d like to have a reflection implementation that’s significantly closer to the 1 than the 1337.

Implementation Options

When I originally implemented a very fast reflection library in .NET 1.1, I ended up with IL-emitted code generated into a set of abstract methods. Since then, Dynamic Methods, Expressions (including compiled ones), and the Func and Action delegate types have been added to the environment. I had been wondering whether there now was a better way to implement fast reflection than my original method. So let’s have a look at the candidates.


Func<Sample, int> GetterA;
Action<Sample, int> SetterA;
Func<Sample, string> GetterB;
Action<Sample, string> SetterB;


public interface IReflection<T>
{
    int GetIntProperty(T item, int index);
    void SetIntProperty(T item, int index, int value);
    string GetStringProperty(T item, int index);
    void SetStringProperty(T item, int index, string value);
}


abstract public class Reflection<T>
{
    abstract public int GetIntProperty(T item, int index);
    abstract public void SetIntProperty(T item, int index, int value);
    abstract public string GetStringProperty(T item, int index);
    abstract public void SetStringProperty(
        T item, int index, string value);
}

The interface and class variants use an index property that corresponds to the properties in order (the production version obviously has methods to map property names to indexes, and vice-versa, as well as accessors for all the other property types I care about); in this case A = 0 and B = 1.

The delegate variant does not need an index, because each delegate is for a specific property of the class. Again, in production you’d have a wrapper around the delegates that acts as a lookup container for a given type.

To get a better handle on the performance possibilities, I’ll be hand-coding implementations for each of these before trying to use compiled expressions and IL-emit to see how fast I can make a run-time generated implementation.

Hand-coded Performance

The implementations for the IReflection<T> interface and the Reflection<T> class are pretty much identical, so only the latter follows.

sealed public class CodedReflection : Reflection<Sample>
{
    override public int GetIntProperty(Sample item, int index)
    {
        switch (index)
        {
            case 0: return item.A;
            case 1: throw new NotImplementedException();
            default: throw new NotImplementedException();
        }
    }

    override public string GetStringProperty(Sample item, int index)
    {
        switch (index)
        {
            case 0: throw new NotImplementedException();
            case 1: return item.B;
            default: throw new NotImplementedException();
        }
    }

    override public void SetIntProperty(Sample item, int index, int value)
    {
        switch (index)
        {
            case 0: item.A = value; break;
            case 1: throw new NotImplementedException();
            default: throw new NotImplementedException();
        }
    }

    override public void SetStringProperty(Sample item, int index, string value)
    {
        switch (index)
        {
            case 0: throw new NotImplementedException();
            case 1: item.B = value; break;
            default: throw new NotImplementedException();
        }
    }
}
Of course this code could be simpler, but it is purposely very regular to correspond to what an IL-emit might be able to achieve. Switch statements are very fast, and other than that it’s just a property access. This is pretty much as lean as you can possibly make a general property getter/setter.

I would put some code for my hard-coded delegate implementation, but… there is none. It turns out that we can just bind directly to the getter and setter methods for the properties and convert them to delegates. My bet was this was going to be by far the performance winner. What could be leaner than what is in essence a function pointer to the property access methods?

MethodInfo getterMethodA =
    typeof(Sample).GetProperty("A").GetGetMethod();

Func<Sample, int> GetterA =
    (Func<Sample, int>)Delegate.CreateDelegate(
        typeof(Func<Sample, int>), getterMethodA);

The actual test code using these access methods looks as follows for the interface/class:

Reflection<Sample> reflection = ...;
for (int repeats = 0; repeats < iterations; repeats++)
{
    reflection.SetIntProperty(sample, 0,
        reflection.GetIntProperty(sample, 0));

    reflection.SetStringProperty(sample, 1,
        reflection.GetStringProperty(sample, 1));
}

And for delegates:

Func<Sample, int> getA = ...;
Action<Sample, int> setA = ...;
Func<Sample, string> getB = ...;
Action<Sample, string> setB = ...;
for (int repeats = 0; repeats < iterations; repeats++)
{
    setA(sample, getA(sample));
    setB(sample, getB(sample));
}

The following performance data is all normalised to the fastest operation, and everything else indicates multiples in run-time for the operation.

Hand-coded performance (relative)

                    Relative performance
  Direct Access     1.00x
  Delegates         4.62x
  Interface         5.36x
  Class             4.31x
  .NET Reflection   1336.97x

Surprisingly (to me), the delegates do not actually beat the class implementation. The class implementation may even be a little faster, albeit not decisively. Interfaces are slower than the almost-equivalent class implementation, which is no surprise, since interface dispatch requires an extra level of indirection to resolve.

The biggest lesson here is that we can in fact theoretically make a fairly fast implementation. We can get to within almost a factor 4 of hard-coded property access performance for release builds. That’s not bad at all. Now we just need to see if we can generate the required code at run-time.

Run-time Attempt 1: Compiled Expressions

I was going to try and use expression trees and compile them into code for all three of my alternative access methods.

static public Func<Sample, int> BuildGetterA()
{
    ParameterExpression itemParamExpr =
        Expression.Parameter(typeof(Sample), "item");

    Expression<Func<Sample, int>> getterExpr =
        Expression.Lambda<Func<Sample, int>>(
            Expression.Property(itemParamExpr, "A"),
            itemParamExpr);

    return getterExpr.Compile();
}

There was a getterExpr.CompileToMethod(...) method that looked promising to generate code into an interface or class implementation. Alas, it turns out that this can only generate into static methods. That would just not solve this particular problem. As a result, I only have a performance measurement here for the delegate access method.

Compiled expression performance (relative)

                    Relative performance
  Direct Access     1.00x
  Delegates         23.09x
  .NET Reflection   1336.97x

Aaaand… that’s like a cold shower. It turns out that something about the generated method is nowhere near as performant as directly accessing the accessor methods or going through a switch statement. I guess it doesn’t take much to throw this off, because a raw property access is about as simple a thing as you can do. It doesn’t take many IL instructions to double (or in this case, sextuple) the runtime of something that simple.

I guess this variant can go on the garbage heap… which is a shame, because expression trees are definitely more readable than IL-emit.

Run-time Attempt 2: IL-emit

This is definitely not going to be pretty. Generating raw IL is a very verbose process. To illustrate, the fragments below merely implement GetIntProperty on the class, and the GetterA alternative for the delegates.

Type sample = typeof(Sample);
MethodInfo getA = sample.GetProperty("A").GetGetMethod();
// ... other MethodInfos

MethodAttributes ma =
    MethodAttributes.Public |
    MethodAttributes.ReuseSlot |
    MethodAttributes.HideBySig |
    MethodAttributes.Virtual |     // required to override the abstract method
    MethodAttributes.Final;
Label[] labels;

var an = new AssemblyName("IlGenApiAsm");
var ab = AppDomain.CurrentDomain.DefineDynamicAssembly(
    an, AssemblyBuilderAccess.Run);
var mb = ab.DefineDynamicModule("IlGenApiMod");
// ...

var tb = mb.DefineType("Accessor",
    TypeAttributes.Class |
    TypeAttributes.Public |
    TypeAttributes.Sealed,
    typeof(Reflection<Sample>));
var method = tb.DefineMethod("GetIntProperty", ma,
    typeof(int),
    new Type[] { typeof(Sample), typeof(int) });

var il = method.GetILGenerator();
labels = new Label[] { il.DefineLabel(), il.DefineLabel() };
il.Emit(OpCodes.Ldarg_2);                  // load the index argument
il.Emit(OpCodes.Switch, labels);
// ... fall-through: throw NotImplementedException
il.MarkLabel(labels[0]);
il.Emit(OpCodes.Ldarg_1);                  // load the Sample argument
il.EmitCall(OpCodes.Callvirt, getA, null);
il.Emit(OpCodes.Ret);
// ... emitting the other cases and methods

Reflection<Sample> accessor = (Reflection<Sample>)
    Activator.CreateInstance(tb.CreateType());

var dm = new DynamicMethod("Build8GetA",
    typeof(int), new Type[] { typeof(Sample) });
il = dm.GetILGenerator();
il.Emit(OpCodes.Ldarg_0);                  // load the Sample argument
il.EmitCall(OpCodes.Callvirt, getA, null);
il.Emit(OpCodes.Ret);

GetterA = (Func<Sample, int>)
    dm.CreateDelegate(typeof(Func<Sample, int>));

This obviously looks dreadful, but once you use .NET Reflection to discover the properties of a given class and drive this generation from that, once per type, most of this turns into fairly neat patterns and can be hidden from view forever more.

The more important question is; does this get us anywhere near that factor 4.3 that the hand-crafted code achieved?

IL-emit performance (relative)

                    Relative performance
  Direct Access     1.00x
  Delegates         23.92x
  Interface         6.79x
  Class             4.54x
  .NET Reflection   1336.97x

Again, surprise at how badly the delegates do… it appears it is merely the overhead of adding another level of indirection (remember that we can wire delegates directly to the property accessors at about a factor 4.6). The interfaces do not do as well as the hand-coded version, but the classes are close enough to make no difference.

Excellent. And disappointing.

It turns out that the implementation I already had was pretty much optimal even with all the new features of .NET 4 at my fingertips. But at least I’ve now proven there is no better alternative among these.

We Need More Flexibility

Now, we’re not entirely home yet. There is a deficiency in the abstract class implementation I’ve shown.

abstract public class Reflection<T>
{
    public abstract int GetIntProperty(T item, int index);
    public abstract void SetIntProperty(T item, int index, int value);
    public abstract string GetStringProperty(T item, int index);
    public abstract void SetStringProperty(T item, int index, string value);
}

It’s all there in that single letter T. If I were to try and implement a serialisation engine on top of this abstract class, I’d run into some trouble. I cannot write any general serialisation method if all I have is a class that needs to know the type we’ll be working on up-front.

I need something more like this:

abstract public class Reflection
{
    public abstract int GetIntProperty(object item, int index);
    public abstract void SetIntProperty(object item, int index, int value);
    public abstract string GetStringProperty(object item, int index);
    public abstract void SetStringProperty(object item, int index, string value);
}

And now we raise the spectre of casts. If I were to hand-code an implementation for this, I’d need to constantly cast item to Sample to be able to access the properties on it. Casts are expensive.

There’s an evil trick available to us. Very evil. Avert your eyes now, and never come back.

var method = tb.DefineMethod("GetIntProperty", ma,
    typeof(int), new Type[] { typeof(object), typeof(int) });

var il = method.GetILGenerator();
var labels = new Label[] { il.DefineLabel(), il.DefineLabel() };
il.Emit(OpCodes.Switch, labels);
il.EmitCall(OpCodes.Callvirt, getA, null);

At a casual glance, it may look like I copied the IL-generation from my Reflection<Sample> example. But look closer at the DefineMethod call.

Where are the casts? Well… funny that. I basically just omit them. I know that the argument is going to be a Sample because that’s why I am generating this code in the first place. And even funnier… the CLR lets me get away with it.

As long as the first argument is actually the right type.

Passing the wrong type can and will crash .NET, and I’m not even kidding. This is an extremely sharp tool, and you can really seriously cut yourself on it if you use it wrong. This is coding without a safety net. This is EVIL.
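For contrast, here is roughly what the "safe" version looks like as a stand-alone DynamicMethod: the only difference is an explicit castclass, which is exactly the runtime check the evil trick omits (class and property names are hypothetical):

```csharp
using System;
using System.Reflection;
using System.Reflection.Emit;

class Sample
{
    public int A { get; set; }
}

static class SafeEmit
{
    // The cast-including variant: a wrong argument type fails with a clean
    // InvalidCastException instead of corrupting the runtime.
    public static Func<object, int> BuildGetA()
    {
        MethodInfo getA = typeof(Sample).GetProperty("A").GetGetMethod();
        var dm = new DynamicMethod("SafeGetA", typeof(int), new[] { typeof(object) });
        ILGenerator il = dm.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);                   // load the object argument
        il.Emit(OpCodes.Castclass, typeof(Sample)); // the runtime check being traded away
        il.EmitCall(OpCodes.Callvirt, getA, null);  // call get_A
        il.Emit(OpCodes.Ret);
        return (Func<object, int>)dm.CreateDelegate(typeof(Func<object, int>));
    }
}
```

Dropping the Castclass instruction is the entire trick; the verifier only lets you get away with it because dynamic methods skip some of the usual checks.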

But it is also FAST.

When benchmarked, this code is exactly as fast as the strongly typed version through Reflection<T>. This is the best of both worlds. I can write general methods using reflection whilst getting strongly-typed performance, and it’s only about four times slower than direct property accesses.

And this is the implementation I’ll be sticking with for obvious reasons.

Exceptions – 5

Before diving into the final category of exceptions, I want to make a little detour into fundamentals. I promise this will be relevant shortly.

Preconditions / Post-conditions / Invariants
Formal Methods and Design by Contract are often a dim memory after a few years in a full-time software development job. It would be easy to conclude that therefore they do not get applied in real business. But in actual fact, contracts are everywhere.

Whenever you write a method, its name, result type, parameter names and parameter types are part of an ad-hoc specification covering preconditions and post-conditions.

int Square(int n);
void ValidateOrder(Order order);
Order MergeOrders(params Order[] validOrders);

I should be able to reasonably assume without looking at the code that:

  • “Square” will return me the square of its argument
  • “ValidateOrder” will probably throw some exception when the contents of “order” do not meet validation standards
  • “MergeOrders” will create a new single order object out of a collection of other orders, provided they can be combined (and if not, likely throws an exception). Also, the name of the argument strongly implies that validation may need to be done prior to calling it.

It is of course possible that the names and types are misleading and these methods do something completely different, but in that case I’d argue that they are not meeting their implicit contracts.

Compare this with the following signatures were they to have exactly the same implementations:

int Calculate(int n);
void Process(Order order);
object HandleTogether(IEnumerable toCombine);

By simply changing some names and types I have destroyed a lot of the implicit documentation these methods provided:

  • There is no indication what relationship there is between inputs and outputs for “Calculate“. Even worse, I can no longer reasonably assume this method succeeds for all integers “n” without looking at the documentation or implementation.
  • The name “Process”, although technically accurate (but then, isn’t everything processing in some sense?), gives the misleading impression that it might in some sense execute the order. Exceptions could still be expected if processing fails, but it might prompt a defensive implementation predicated on the false assumption that there may have been side-effects.
  • And “HandleTogether” pretty much completely obscures both the nature of the operation and the preconditions that must be satisfied by its arguments. Let’s hope the documentation comments are actually helpful!

As these examples already alluded to, exceptions logically form a part of the specification of a method.

/// <summary> ... </summary>
/// <param name="validOrders"> ... </param>
/// <exception cref="ArgumentException">
/// validOrders == null || validOrders.Length == 0
/// </exception>
/// <exception cref="ValidationException">
/// Any of "validOrders" fails validation.
/// </exception>
/// <exception cref="MergeException">
/// Not all "validOrders" have the same customer details.
/// </exception>
Order MergeOrders(params Order[] validOrders);

There could potentially be a lot more involved, but now the exception documentation confirms and enhances the specification implicit in the method signature itself.

Note that there is one further improvement that could be made above; currently the first exception is an “ArgumentException“, which therefore corresponds to a precondition (see Usage Exceptions post). The “ValidationException” is presumably the exception thrown by the “ValidateOrder” method that we’d be using internally to make sure all the orders are valid before attempting the merge. And the “MergeException” is a new exception specific to this method that indicates incompatible orders.

In reality, all these should probably be preconditions to the method, and therefore be implemented as “ArgumentException” instances. It is in most cases much better to fail early before any calculations have been done.
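A sketch of what that precondition-first implementation might look like (the Order shape and messages are hypothetical, not taken from any earlier example):

```csharp
using System;
using System.Linq;

class Order
{
    public string Customer { get; set; }
    public decimal Total { get; set; }
}

static class OrderOps
{
    // All three failure conditions are checked up-front as preconditions,
    // each reported as an ArgumentException before any merging work begins.
    public static Order MergeOrders(params Order[] validOrders)
    {
        if (validOrders == null || validOrders.Length == 0)
            throw new ArgumentException("At least one order is required.", "validOrders");
        if (validOrders.Any(o => o == null))
            throw new ArgumentException("Orders must be non-null.", "validOrders");
        if (validOrders.Select(o => o.Customer).Distinct().Count() != 1)
            throw new ArgumentException("All orders must share customer details.", "validOrders");

        return new Order
        {
            Customer = validOrders[0].Customer,
            Total = validOrders.Sum(o => o.Total),
        };
    }
}
```

Failing early like this means no partial merge work is ever done before a bad input is rejected.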

Vexing Exceptions
In practice, “Vexing Exceptions” pretty much need to be dealt with in the same way as any other “Logical Errors”, but they indicate a badly designed API (see original overview post). In the remainder of this post I will not treat them separately, but I want to dedicate a few moments here to recognising and avoiding them when writing new code.

In the previous section I had a few example methods to illustrate extending method signatures with exception specifications.

/// <exception cref="ValidationException" />
void ValidateOrder(Order order);

/// <exception cref="ArgumentException" />
/// <exception cref="ValidationException" />
/// <exception cref="MergeException" />
Order MergeOrders(params Order[] validOrders);

And I commented that the “ValidationException” in “MergeOrders” corresponded to a possible result of using the “ValidateOrder” method, and that all conditions on “MergeOrders” would be better served being “ArgumentException” across the board.

To do so for the validation exception would mean that “MergeOrders” needs to implement a catch handler during its precondition checks and wrap a “ValidationException” into a descriptive “ArgumentException“. This is precisely a “Vexing Exception”, because we would be much better served by a second API variant of “ValidateOrder” that returns errors, or even just a boolean:

IEnumerable<ValidationError> ValidateOrder(Order order);
bool TryValidateOrder(Order order);

Then we can do the validation in our merging routine without having to catch exceptions and wrap them.
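A sketch of how the two API variants might relate (types and validation rules are hypothetical):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Order
{
    public string Customer { get; set; }
}

class ValidationError
{
    public string Message { get; set; }
}

static class Validation
{
    // Error-returning variant: expected failures come back as data,
    // not as exceptions to be caught and wrapped.
    public static IEnumerable<ValidationError> ValidateOrder(Order order)
    {
        if (order == null)
            yield return new ValidationError { Message = "Order is null." };
        else if (string.IsNullOrEmpty(order.Customer))
            yield return new ValidationError { Message = "Customer is missing." };
    }

    // The boolean convenience wrapper layered on top of it.
    public static bool TryValidateOrder(Order order)
    {
        return !ValidateOrder(order).Any();
    }
}
```

“MergeOrders” can now call “TryValidateOrder” inside its precondition checks without any catch handler in sight.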

Whether or not transforming a “MergeException” into a “Usage Error” makes sense depends on a number of factors, including whether an up-front check would have to re-implement substantial portions of the logic from the body of the method. Sometimes it may be better to leave the exception unchanged.

Note however that either way we really need a further method:

  • If we make it a precondition, then the caller needs to be able to avoid passing in incompatible orders
  • If we leave it unchanged, we have another potentially vexing exception in case compatible orders cannot be guaranteed

The caller of the merge method needs to either structure the code so that it is implicitly guaranteed that orders passed into the method will be compatible, or there needs to be a “bool AreOrdersCompatible(...);” method so that a failing call can potentially be avoided where it might otherwise routinely occur.

“Logical Errors” / “Exogenous Exceptions”
And now that we have eliminated everything else, what exactly are we left with? It turns out that I lied in the last section… I am not quite done with “Vexing Exceptions” yet.

Since “Vexing Exceptions” are thrown under expected circumstances, where providing an API alternative that directly returns a result indicating those circumstances is the preferable approach, I think “Exogenous Exceptions” can best be summarised as follows:

“Exogenous Exceptions” correspond to unexpected circumstances that cannot be avoided but can potentially be resolved by the caller.

So, in a perfect world:

  • “Usage Errors” are only ever thrown and never caught, because they indicate a caller that does not respect preconditions
  • “System Failures” are only ever thrown by the environment and never by code and they are never trapped, because they indicate the environment has become unreliable and the application should be allowed to terminate
  • “Vexing Exceptions” never occur because all our APIs provide method alternatives to avoid them
  • “Exogenous Exceptions” are only ever thrown if a method cannot satisfy its post-conditions due to unexpected, unavoidable circumstances outside its control, and each type of exception corresponds uniquely to one type of remedial action

Let’s start with an illustrative example of a “Logical Error” from the .NET Framework itself.

try
{
    using (FileStream fs = new FileStream("...", FileMode.Open))
    {
        // ... load resource ...
    }
}
catch (FileNotFoundException)
{
    // ... load from elsewhere ...
}

As I was trying to come up with a good example of an “Exogenous Exception”, it became more and more clear to me that, in principle, there are none. Every time an exception is thrown by a method, the question remains: is this an expected or unexpected exception? And the only code that can answer this question is the caller.

In the fragment above, it is tempting to say something along the lines of “you cannot avoid this exception; it is thrown when a file does not exist when you try to open it“, but somewhere in the implementation of the “FileStream” constructor, there is a line that determines whether the low-level Windows API succeeded or failed, and turns that into an exception. If I write code using the “FileStream” API, where I can routinely expect files I am trying to open will no longer exist, then this is suddenly a “Vexing Exception”.

The only reason I have no choice but to use an exception handler is that “File.Exists(...)” does not help; the file may go missing between that call and the “FileStream” constructor. And there is no constructor alternative “FileStream.TryCreate(...)” that would allow me to handle this condition normally. Vexing indeed.

Note however that this does not mean that all is lost, and the naysayers about exceptions were right after all. Far from it. I think “FileStream” should throw an exception if the file does not exist. But it should also have an alternative that doesn’t.
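In the absence of such an alternative, the best we can do is write the wrapper once ourselves. A sketch (the “TryOpenRead” name is my own invention, not a framework API):

```csharp
using System;
using System.IO;

static class Files
{
    // Hypothetical Try-style wrapper the framework does not provide: the race
    // between File.Exists and the constructor is closed by catching the
    // exception once, here, instead of in every caller.
    public static bool TryOpenRead(string path, out FileStream stream)
    {
        try
        {
            stream = new FileStream(path, FileMode.Open, FileAccess.Read);
            return true;
        }
        catch (FileNotFoundException)
        {
            stream = null;
            return false;
        }
        catch (DirectoryNotFoundException)
        {
            stream = null;
            return false;
        }
    }
}
```

Callers that routinely expect missing files use the boolean; callers for which a missing file is genuinely exceptional keep using the throwing constructor.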

And this goes for all methods, because ultimately the only arbiter of what is expected to go wrong (“Vexing”) and what is not (“Exogenous”) is the calling code; it’s the use that determines the nature of the exception.

(Sidenote: this possibly explains the ongoing religious war over whether exceptions or error codes are the best way to handle errors. Those against exceptions tend to look at “Vexing Exceptions” as their rationale, whereas those in favour can only see “Exogenous Exceptions”. It turns out we really need both.)

To Be Continued…
I was going to finish up here with a description of how to implement methods, how to consume methods, and what can be done to formalise and automate some of the required discipline in all this…

…but this post is getting a bit long already, and I think having just the guidelines in a single post will provide a better reference.

(Really just a stalling technique so I can let my most recent lightbulb-moments filter into this before I come to a final decision.)

Exceptions – 4

In my last post on exceptions I covered “Boneheaded Exceptions” and why they should not be caught (and what to do about them instead). Next-up is another category that should hardly ever be caught… except in a very specific fashion.

“System Failures” / “Fatal Exceptions” (also: the system is down)
These are exceptions that originate in the implementation of the execution environment. Some can get thrown by specific (types of) IL instructions, such as “TypeLoadException” or “OutOfMemoryException“. Others can get thrown at literally any instruction, such as “ExecutionEngineException“.

The two key observations about these exceptions are that they cannot be prevented (because they originate from the low-level execution of your code itself), and that there are virtually no circumstances where your application code can do anything to resolve the indicated problem (something went wrong that is by definition out of the control of your code). They can happen at any time and there is no way to fix them; it should be obvious why they should not normally be caught.

If, like me, you find yourself trying to construct a scenario where you might want to catch one of these, ask the following questions. If a type fails to load cleanly, indicating a broken deployment, can you trust any further remedial action to even work? If you run out of memory, what kind of logic could you write that does not itself need to allocate memory? Worst of all, if the execution engine failed in some unspecified way, can you even rely upon correct execution of any further instructions?

Even if there are specific corner-cases where anything can be done at all, how much value would it add over just letting the application terminate from its illegal state and constructing some external mechanism to restart it into a valid state instead?

So, what to do?
If the foregone conclusion is that these cannot be handled in any way, then all that is left is ensuring the application dies as gracefully as possible.

First and foremost, use the “try {} finally {}” pattern wherever possible. There may be cases where the “finally” will fail in part or in whole due to the nature of the system failure, but it maximises the chances that files flush their last useful fragments, transactions get cleanly aborted, and shared system resources are restored to a safer state.
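A small sketch of that pattern (the method and file names are hypothetical):

```csharp
using System;
using System.IO;

static class GracefulShutdown
{
    // Even if the body dies with a fatal exception on its way up the stack,
    // the finally block gets its best chance to flush buffers and release
    // the shared resource.
    public static void AppendCheckpoint(string path, string line)
    {
        StreamWriter writer = new StreamWriter(path, append: true);
        try
        {
            writer.WriteLine(line);
        }
        finally
        {
            writer.Flush();   // last useful fragments reach the file
            writer.Dispose(); // the file handle is released
        }
    }
}
```

In ordinary C# a “using” block compiles down to exactly this try/finally shape, so preferring “using” for disposables buys the same graceful-death behaviour for free.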

Very few “System Errors” / “Fatal Exceptions” get caught explicitly in a handler. This is precisely because there is nothing specific that can be done to remedy them. There is however a very commonly used handler that deserves scrutiny; the much-reviled “catch (Exception ex) {}“.

Since there are precious few fatal exceptions that can be meaningfully handled in any fashion, it should be obvious that writing a handler purporting to deal with all of them is even more preposterous. That is why the following is the only valid pattern for a general exception handler:

try
{
    // Some code
}
catch (Exception ex)
{
    // ???
    throw;
}

Only by re-throwing the exception at the end of the handler can we guarantee that all the various fatal exceptions keep bubbling up to the top of the application, where termination of the application is the final signal of an unrecoverable problem.

The following two questions need to be answered then:

  • What kind of “some code” could be protected in this structure?
  • What kind of logic can sensibly be placed at “???”

To start with the latter; when something non-specified goes wrong, the only sensible options are to either record details not generally available in a stack trace in some fashion, or to make general fixes to the state-space that “some code” may have trashed.

Recording additional detail can be done by either logging something somewhere about values of relevant variables at the time the execution failed, or alternately to wrap the exception in a custom exception that records the values in its properties (in which case it should hold the original exception as an inner exception!)
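A sketch of the wrapping variant (the exception type and its property are hypothetical; note both the inner exception and the fact that the handler still throws):

```csharp
using System;

class ProcessingException : Exception
{
    public int FailedRecordId { get; private set; }

    // Records context not present in the stack trace, keeping the original
    // exception as the inner exception.
    public ProcessingException(string message, int failedRecordId, Exception inner)
        : base(message, inner)
    {
        FailedRecordId = failedRecordId;
    }
}

static class Pipeline
{
    public static void ProcessRecord(int id)
    {
        try
        {
            // Placeholder body: simulate a failure mid-processing.
            throw new InvalidOperationException("simulated failure");
        }
        catch (Exception ex)
        {
            // Wrap, never swallow: the exception keeps bubbling up,
            // now carrying the record id that the stack trace lacked.
            throw new ProcessingException("Record processing failed.", id, ex);
        }
    }
}
```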

Writing a general fix for corrupted state-space can be difficult. As one extreme, the fatal exception may have occurred in the middle of an allocation inside the “Dictionary.Add()” method, and now you’re stuck with a dictionary in an inconsistent and unrecoverable state. It may however be possible to just replace the dictionary with a new empty dictionary in the catch handler, providing that does not break any invariants that need to hold. In many cases, the “some code” will have made state-space changes that cannot be credibly put back in some correct default state, at which point you should resist the temptation to write any catch handler. If you cannot do anything,… then don’t.

Now, it should be obvious what “some code” could be; anything that either can benefit from additional information about the local state-space being recorded when a problem occurs, or anything for which affected state-space can be restored to some kind of safe default that does not break any invariants. (An example of the latter might be a manipulation of a cache of some sort that fails; restoring the cache to an empty state does not invalidate its invariants. It may hurt ongoing performance, but it does neatly restore the local state into a valid default.)
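A sketch of that cache example (all names hypothetical): the handler swaps in a fresh dictionary, then re-throws so the failure still surfaces:

```csharp
using System;
using System.Collections.Generic;

static class CachedLookup
{
    static Dictionary<string, int> cache = new Dictionary<string, int>();

    // If anything goes wrong mid-update, the cache is restored to a valid
    // (empty) default and the exception keeps bubbling up.
    public static void Store(string key, int value)
    {
        try
        {
            cache[key] = value;
        }
        catch (Exception)
        {
            cache = new Dictionary<string, int>(); // safe default; invariants hold
            throw;
        }
    }

    public static bool TryGet(string key, out int value)
    {
        return cache.TryGetValue(key, out value);
    }
}
```

A cold cache is slower but correct, which is exactly the trade this pattern accepts.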

How to fix Fatal Handlers?
Many libraries or applications will have fallen prey to catching and swallowing “Exception” somewhere (including code I have written myself). The logical-sounding rationale usually is something like “If anything goes wrong while doing this, then let me put some default behaviour in that is good enough in its stead”. Default behaviour can range from returning a default value, all the way up to just logging the exception and moving on, hoping for the best.

while (...some high-level loop...)
{
    try
    {
        ...some piece of processing logic...
    }
    catch (Exception ex)
    {
        LogException("Could not process, retry next iteration", ex);
    }
}

On the face of it, it is easy to make yourself believe this improves the robustness of the above processing loop. Now, if anything goes wrong, it will try again some number of times depending on the high-level loop.

But as we’ve seen above, this really just makes a whole range of potential problems worse rather than better. There is no guarantee that the next iteration of the loop will even do the same thing that the failed iteration did. Instead of producing a file, the next loop could be deleting them. Rather than having a simple understandable error fall out of the application at the point of the original problem, we may end up doing all kinds of unpredictable things that are going to be impossible to diagnose or recover after-the-fact.

When you find code that contains general exception handlers, warning bells should be ringing. There is a reason there is an FxCop rule that triggers on this coding pattern. It is an evil pattern that must be exorcised.

The only valid fixes for “Exception” handlers are as follows:

  • Re-throw the original exception at the end of the handler (see “what to do?” above)
  • Throw a new exception that includes further details about the problem, and which must include the original exception as an inner-exception (see “what to do?” above)
  • Make the exception type more specific so that a problem that can be credibly recovered from is caught instead (and make sure the handling logic actually addresses that problem!)
  • Remove the handler altogether, and just let the exception mechanism do its thing

Some of these remedies edge into the territory of “Logical Errors” / “Exogenous and Vexing Exceptions” and my next post will dig much deeper into how to deal with those. That’ll be where the rubber meets the road on what many would consider actual exception handling, and what kind of exceptions you can declare and throw yourself (and how to do so).

Exceptions – 3

In my last post I presented a classification of exceptions by two Microsoft employees that should know what they are talking about. Here, I want to pick off the low hanging fruit and discuss just one of the categories of exceptions. A category that should not be caught.

“Usage Errors” / “Boneheaded Exceptions” (also: Preconditions)
It may sound strange at first to say that any category of exceptions should never be caught. All documentation on exceptions keeps drilling home the message that exceptions exist to be caught and handled; unfortunately that isn’t true.

This category of exceptions serves the sole purpose of notifying the programmer that a method cannot be validly called with the given arguments and/or in the current state of the object it is called on. It signals that the programmer did not honour the preconditions of the method.

Handling such an exception is putting the cart before the horse. Let’s say we have the following method (note that the non-null condition is purely by way of a simple example; a better implementation would gracefully handle null strings):

/// <summary>
/// Manipulate a string in some fashion
/// </summary>
/// <param name="value">
/// The string to manipulate, must be non-null
/// </param>
/// <returns>The manipulated result</returns>
public string SomeMethod(string value);

And we use this method in some code, only to discover later that due to logic elsewhere nulls can end up making their way to this method. After careful analysis, it turns out for null values we want the result also to be null, so the following code fragment is written:

string result;
try
{
    result = SomeMethod(someValue);
}
catch (ArgumentNullException)
{
    result = null;
}

See? All fixed. Isn’t it wonderful how we are handling the exception, and everything is perfect, right?

Of course this is wrong. The correct way to deal with this situation is to not violate the precondition in the first place and just do it right:

string result =
    someValue == null
    ? null
    : SomeMethod(someValue);

Whenever you feel tempted to handle a “Usage Error” / “Boneheaded Exception” you should immediately wonder why the code isn’t checking the precondition, or alternately designed to honour the precondition by definition.

Letting these exceptions fall out of your application is not a sign that you forgot to add an exception handler. It’s a sign that you didn’t write the code correctly. Rather than spending time writing exception handlers, put that time into guard conditions on the call. Not only will it make your assumptions much more explicit, it also performs much better. An exception will typically take thousands of times longer to process than an equivalent guard statement on your call.

Catching these exceptions doesn’t just indicate you’ve lost control of your code. It also signals that you don’t really care about performance at all.

So, what to do?
Because these exceptions are never intended to be caught but rather to tell the programmer that a precondition was not honoured, the exception class matters very little. What does matter is to make sure that the exception message is very explicit about the precondition that was not satisfied, and in what way it was not satisfied. Make the message as long and wordy as you need to, because a programmer will need to be able to read it, understand what the problem was, and then fix the code.

Although the type of the exception does not matter much, because a programmer is going to mainly go off the message, there are a few exception types in the .NET Framework that are specifically suitable to be used for these.

class ArgumentException
    class ArgumentNullException
    class ArgumentOutOfRangeException
    // ... various others
class InvalidOperationException
    class ObjectDisposedException
    // ... various others

Use “ArgumentException” or one of its sub-classes to signal when an argument to a method does not satisfy a precondition. Some recommend using the most derived class that is appropriate to the error that occurred, but as long as you make sure the exception message will make complete sense to the programmer it is fine to just use “ArgumentException” itself and no others.

Use “InvalidOperationException” or one of its sub-classes to signal when an object property does not satisfy a precondition to the method call being made. The same advice goes for sub-classes here as for “ArgumentException“.

By just using these two exception (trees) it also is very easy to make sure you never end up catching a “Usage Error” / “Boneheaded Exception”. Creating an FxCop rule that forbids these types occurring in exception handlers should be a breeze.

Also note that for reasons not related to exception semantics, “InvalidOperationException” should probably never occur. An API that could credibly throw this exception for the reason outlined above is very badly designed and should probably be refactored. (An example scenario is a class that has a flag to indicate a processing direction (input / output) and methods that only are allowed to be called for one of these modes. A better implementation would have a general base-class for shared functionality and then subclass into an input and an output class that each only have suitable methods on them.)
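A sketch of that refactoring (all names hypothetical): rather than one class with a direction flag whose wrong-mode methods throw “InvalidOperationException”, the modes become types, so an illegal call cannot even be written:

```csharp
using System;
using System.Collections.Generic;

// Shared functionality lives in the base class.
abstract class Channel
{
    protected readonly Queue<string> Buffer = new Queue<string>();
}

// Only output operations exist on the output type...
class OutputChannel : Channel
{
    public void Write(string value) { Buffer.Enqueue(value); }

    // ...and switching direction is an explicit conversion, not a mode flag.
    public InputChannel ToInput()
    {
        var input = new InputChannel();
        foreach (var item in Buffer) input.Accept(item);
        return input;
    }
}

// ...and only input operations on the input type.
class InputChannel : Channel
{
    internal void Accept(string value) { Buffer.Enqueue(value); }
    public string Read() { return Buffer.Dequeue(); }
}
```

The precondition “the channel must be in output mode” has been moved from runtime into the type system.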

How to fix Boneheaded Handlers?
Whenever you encounter code that handles a Boneheaded Exception “X”, corresponding to precondition “P”, that looks as follows:

try
{
    // ... the guarded work ...
}
catch (X)
{
    // ... the fallback ...
}

Replace the code with the following instead:

if (P)
{
    // ... the guarded work ...
}
else
{
    // ... the fallback ...
}

(Note that this assumes that exception ‘X’ actually corresponds to a precondition ‘P’, and gets thrown before any functional logic runs. Otherwise side-effects may make the transformation more complicated.)

Next post I’ll pick off the next-lowest-hanging fruit; “System Failures” / “Fatal Exceptions”.

Exceptions – 2

At this point, it seems appropriate to put some terminology in place for my ongoing discussion of throwing and handling exceptions. As a matter of fact, I will be providing two sets of terminology for the price of one!

These sets of terminology are by Krzysztof Cwalina (leader of the effort to develop API guidelines) and Eric Lippert (senior member of the team designing the C# language). I think it’s fair to say that between the two of them there is a lot of experience with how to do things and how not to do them using C# and .NET.

Krzysztof Cwalina classifies exceptions into “usage errors” and “system errors“, the latter of which can be split into “logical errors” and “system failures“. Eric Lippert classifies them into “fatal“, “boneheaded“, “vexing” and “exogenous“.

All that may just sound like a jumble of (colourful) word-soup, so the following sections will make the terms and what they mean a bit more concrete.

“Usage Errors” / “Boneheaded Exceptions”
These exceptions signal that there is a problem the calling code could have avoided itself. As a result this is a bug in the caller, not a fault in the called code. Typically these are the result of broken preconditions and invariants.

public void DoSomething(string value)
{
    if (value == null)
        throw new ArgumentNullException("value");
    // ...
}

You must make sure your code passes this method a non-null value; if you do not, then the resulting exception is your own fault. You broke the contract.

“System Failures” / “Fatal Exceptions”
These exceptions cannot be handled under any circumstance. They signal a fundamental problem with the state of the virtual machine, such as “Out of Memory” or “Thread Aborted” or “Type Load” exceptions that can occur at almost any instruction in your program.

The only correct thing to do with these exceptions is to let them climb up the stack until they eventually terminate the program. There is nothing an application should try to do to recover from these, because there is no sensible way to recover. This is also why catching “Exception” is so heavily frowned upon.

There are ways to write general recovery handlers, but they have to follow a very specific pattern to make sense. More on that in a later post.

“Logical Errors” / “Exogenous and Vexing Exceptions”
These are the “real” exceptions. They indicate that the method could not make good on its promises in some fashion. You asked the method to open a file for reading, but the file doesn’t exist; too bad! You asked the method to parse a string into an integer, but there were letters in the string; oops!

The reason that Eric Lippert presents two options here is specifically for my second example. Some exceptions thrown by methods that cannot satisfy their contract indicate that the API was just badly designed; sometimes you have to expect certain failures, and code accordingly.

using (var reader = new StreamReader("DataFile.txt"))
{
    var line = reader.ReadLine();
    try
    {
        var value = int.Parse(line);
    }
    catch (FormatException ex)
    {
        // Is this Exogenous or Vexing?
    }
}

As the exception handler asks… is this exception exogenous, or vexing? I really can’t say, because it depends on the context. If the “DataFile.txt” was created by an end user, and is supposed to contain a single line with an integer value on it, then this is almost certainly a vexing exception, and use of the “int.TryParse(...)” method would have been more appropriate. Betting on a human-generated file to contain correctly formatted input is wishful thinking; you have to assume there may be problems.
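For the human-generated case, a sketch of the “TryParse”-based alternative (the helper name is my own):

```csharp
using System;

static class SafeParsing
{
    // Human-generated input: expect failure, so return a signal (null)
    // instead of letting a FormatException fly.
    public static int? ParseOrNull(string line)
    {
        int value;
        return int.TryParse(line, out value) ? (int?)value : null;
    }
}
```

The caller now branches on data rather than on control flow, which is exactly what distinguishes an expected failure from a genuine exception.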

If however that file were produced by another application, then this may very well be an appropriate way to deal with the situation. We can safely assume that if the other program produces an integer one time, it will likely do so every time, and if it doesn’t that is genuinely worthy of an exception and associated logic (albeit in reality probably a few levels further up the stack than my simplistic example).

Up Next…
In the next post I hope to distill down some initial advice on these three categories of exceptions. I will not treat vexing exceptions as a category from here on in, since a vexing exception really indicates an incomplete API that needs to be redesigned. Usually it is just a matter of adding alternatives that allow the caller to avoid the exception in favour of a more complex method contract, similar to the way the “TryParse” calls were added in the .NET Framework to many classes that didn’t have them before.

Back to Basics

For a while now I have been postponing writing a post about my progress regarding exceptions in software. I have informally formed an outline of an opinion, but I have been looking for a way to build a stronger foundation than “because I think that’s the right way to do it”.

Then, as I started straying further afield with my mind wandering over multi-threaded code, dependency injection, unit testing and mocking as well (and some others that I know I have forgotten), it occurred to me that I really should go back to basics with all this…

  • The most fundamental tool to reason about software correctness is still to think in terms of invariants over state-space and pre-conditions/post-conditions to method invocations.
  • Guides on “good coding practices” abound, but certain fundamental truths recur in most of them that are universal enough to be almost as useful as “formal methods” for reasoning about “good code”, beyond merely “correct code”.
  • Both the DRY principle (“don’t repeat yourself”) and a desire to produce self-documenting code suggest that keeping the various perspectives on a single piece of code as close together as possible is the best way forward. The new .NET 4 Code Contracts already provide some unification between code, documentation and testing, but I think there is more potential in this arena that has not yet been exploited. Some tricks may be needed to keep aspects such as tests and documentation together with the code without burdening the generated assemblies with dead weight that does not participate in the execution of the code itself.
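As a sketch of what I mean by that unification (the `Account` class is invented; `Contract.Requires`/`Ensures`/`Invariant` are the .NET 4 Code Contracts API from `System.Diagnostics.Contracts`, and note the conditions are only enforced when the contracts rewriter is enabled):

```csharp
using System.Diagnostics.Contracts;

public class Account
{
    private int _balance;

    public int Balance
    {
        get { return _balance; }
    }

    [ContractInvariantMethod]
    private void ObjectInvariant()
    {
        // The invariant over the state-space: the balance never goes negative.
        Contract.Invariant(_balance >= 0);
    }

    public void Deposit(int amount)
    {
        // Pre-condition and post-condition live right next to the code,
        // doubling as documentation and as checkable assertions.
        Contract.Requires(amount > 0);
        Contract.Ensures(Balance == Contract.OldValue(Balance) + amount);

        _balance += amount;
    }
}
```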

I strongly believe that C# as a language leaves us with too much flexibility in the general case. Every iteration of the language adds more interacting features, and opens up many useful possibilities as well as some that are dangerous or perhaps even plain wrong.

Some code patterns, although allowed by the compiler, just do not make any sense. There are usage patterns of exceptions that *will* compile, but really should be considered an error.
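Two examples of what I mean, both of which compile without complaint (names invented):

```csharp
using System;

static class ExceptionPatterns
{
    public static void LosesTheStackTrace()
    {
        try { DoWork(); }
        catch (Exception ex)
        {
            // Legal C#, but "throw ex" resets the stack trace to this line,
            // discarding the information about where the failure really occurred.
            throw ex;
        }
    }

    public static void SwallowsEverything()
    {
        try { DoWork(); }
        catch (Exception)
        {
            // Legal C#, but every failure, including genuine bugs,
            // vanishes silently.
        }
    }

    public static void Better()
    {
        try { DoWork(); }
        catch (InvalidOperationException ex)
        {
            Console.Error.WriteLine(ex);
            throw; // "throw;" rethrows and preserves the original stack trace.
        }
    }

    static void DoWork() { }
}
```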

Tools like FxCop try to plug some of those holes by checking for such errors after the fact. Unfortunately, custom error conditions are not as easy to express in FxCop as I think they ought to be. But in principle this is definitely a path worth exploring to rule out patterns that are best avoided.

I think the rather nebulous state of this post reflects the fact that my mind hasn’t completely crystallised into a single vision of what combination of tools and paradigms I need to get to more ideal development practices. But I think I am starting to make some progress.

Exceptions – 1

As much as exception handling is pretty much part-and-parcel of any modern programming language, resources and guides on how to use exceptions most effectively and correctly are fairly thin on the ground.

I noticed recently that when I throw exceptions, or catch them, there is a sort-of-almost-there structure and pattern to the way I use them. But what bothered me is that I didn’t have a strong underlying philosophy regarding why I was using them in that way (and I bet my usage hasn’t been 100% consistent either).

So on this holiday I’ve kinda set out on some self-study to try and formulate what I think is the best way to use exceptions in C#/.NET. There are some resources that provide good basic information on the matter, but none of them take it to the logical conclusion of fully thought-through recipes that will result in correct and consistent use. So that’s my goal at the end of this series of posts: an article that conclusively formalises my opinion on throwing and handling exceptions.

The initial materials that form the basis for these posts are as follows: