Day 133 – Surprising Jobs and Dice

No Dice

What could be simpler than dice?

If everyone has a fair un-weighted die, all have the same chance to win or lose.
P(A wins over B) = P(B wins over C) = P(C wins over A) = 0.5

You win half the time, you lose half the time.

We can change the dice to make the battle less equal. We can give some dice multiple sides marked 6, and others multiple sides marked 1. We can shift the balance to give A an edge over B, B an edge over C… and then overall C is the big loser.

Mathematically we call this the transitive property.
If A > B and B > C, then A > C.

Obviously and intuitively this must be true, right?
You’d be wrong in thinking so.

It is possible to design a set of dice where A on average beats B, B on average beats C and C on average beats A. We call these non-transitive dice.

An example of such a set of dice could have sides marked:

  1. Sides: 3, 3, 3, 3, 3, 6
  2. Sides: 2, 2, 2, 5, 5, 5
  3. Sides: 1, 4, 4, 4, 4, 4

Die 1 beats die 2 in 21 out of the 36 possible combinations.
Die 2 beats die 3 in 21 out of the 36 possible combinations.
Die 3 beats die 1 in 25 out of the 36 possible combinations.

Obviously, intuitively, this isn’t possible, yet there it is.
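Don’t take my word for the counts; they are easy to verify by brute force. Here is a quick Python sketch (die names A, B, C match the numbered lists above) that enumerates all 36 face pairings for each match-up:

```python
from itertools import product

dice = {
    "A": [3, 3, 3, 3, 3, 6],
    "B": [2, 2, 2, 5, 5, 5],
    "C": [1, 4, 4, 4, 4, 4],
}

def wins(x, y):
    # Count the ordered face pairs where die x shows more than die y.
    return sum(a > b for a, b in product(dice[x], dice[y]))

print(wins("A", "B"), wins("B", "C"), wins("C", "A"))  # 21 21 25
```

Each match-up has 36 equally likely outcomes, so those counts translate to winning probabilities of 21/36, 21/36 and 25/36: every die beats one neighbour and loses to the other.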

Best Man for the Job

Which is where a recent management dilemma comes into the picture.

I somewhat flippantly summarised my problem as related to explaining a potential budgeting issue due to non-transitive aspects to skill-to-project mappings. That was really a slightly hand-wavy allusion to the dice above.

Let’s make it a little more concrete.

Let’s say we have 3 IT projects:

  • Web project; no complicated coding, but it does need to look good
  • Banking system; routine processing, but it does need to be fail-proof
  • Page-rank; a very mathy problem, but once understood easy to code

Let’s say we have 3 Developers (any parallels to real people are coincidental):

  • One genius: Steve Yegge
    Great at almost everything, except graphical design
  • One average developer: Jeff Atwood
    A flair for design, and knows enough transaction-safety, but no maths-whizz
  • One manager who shouldn’t be coding: Bill Gates
    Passable design-sense, passed maths, but don’t trust him with your money

Let’s say we have to estimate how long it will take to complete the 3 IT Projects with the Developers we have available. We could allocate specific names to the projects, but the projects are due to start at different, as yet unknown, points in the future.

Well… how about we just estimate everything for a general average developer and then work out the names later? Intuition says that some will be over and some will be under, but on average we’ll get our estimates right… right?

         Web        Banking    Page-rank
Yegge    50 days    10 days    10 days
Atwood   10 days    30 days    50 days
Gates    30 days    50 days    30 days

On average, each of the projects will take 30 days.
On average, Yegge is faster than anyone else.
On average, Atwood is average.
On average, Gates is slower than anyone else.

But let’s just step away from the averages and have a look at what could happen once reality unfolds in three different ways:

  • Yegge does Banking, Atwood does Page-rank, Gates does Web
    Result: 90 days total, 30 average per project
    A truly average result
  • Yegge does Banking, Atwood does Web, Gates does Page-rank
    Result: 50 days total, 16.7 average per project
    The best possible outcome
  • Yegge does Web, Atwood does Page-rank, Gates does Banking
    Result: 150 days total, 50 average per project
    A learning experience for everyone; also: OH GOD, WHY? IT BURNS!
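The three scenarios above are only 3 of the 6 possible assignments. A small Python sketch (numbers taken straight from the table) enumerates all of them:

```python
from itertools import permutations

# Estimated days per developer per project, from the table above.
days = {
    "Yegge":  {"Web": 50, "Banking": 10, "Page-rank": 10},
    "Atwood": {"Web": 10, "Banking": 30, "Page-rank": 50},
    "Gates":  {"Web": 30, "Banking": 50, "Page-rank": 30},
}

projects = ["Web", "Banking", "Page-rank"]
totals = sorted(
    sum(days[dev][proj] for dev, proj in zip(devs, projects))
    for devs in permutations(days)
)
print(totals)  # [50, 70, 70, 90, 110, 150]
```

The mean over all six assignments is still 90 days, but only one assignment actually lands on it; the rest are anywhere from 50 to 150.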

So, depending on who is available when the projects kick off, either through chance or because we planned it to be that way, it could take as little as 50 days or as many as 150 days.

It could be 40 days under (almost 50%), or 60 days over (more than 60%).

And note that this is with a relatively mild skills gap.
Some developers are an order of magnitude faster than others.

Also note that in this example I’ve glossed over the other project work that would make it impossible to assign Yegge to both Banking and Page-rank, which would leave Bill no coding to do (arguably an even better outcome for everyone).

Again, our intuition is wrong.
And for very similar reasons to the dice.

What this means is that it’s almost always a mistake to try to estimate project work based on an “average” developer. It is better to always have a specific developer in mind, and if that changes, adjust the effort estimate accordingly. You will end up with far fewer counter-intuitive results to explain after the fact.

Try Science!

And that brings me back to an earlier point.

If possible, always try science first. Even on problems that you cannot completely “solve” mathematically. An approximation will at least warn you if There Be Dragons before you get et.

Day 120 – Meeting

There’s nothing like 4 attempts at doing the same new thing in a single day. I don’t feel I was very consistent from one to the other, but they were all variations on a general theme. And I think I have some ideas of what I should be taking into the meeting myself.

I’m a big fan of taking notes collaboratively.

I would like to use a OneNote document in a location the direct reports and myself share, so I can create a new page for each weekly meeting and have a separate tab to track outstanding action items and future plans. Unfortunately a slight technical difficulty is preventing me from doing so. According to SharePoint, OneNote documents aren’t quite like any other document. Who knew?

For now, I am using Lync to screen-share an email with the developer, and then I type up the notes as I go along. No hidden agendas, and we both keep a copy of the notes afterwards. It also means that the notes to share are typed up right at the end of the meeting, because I write them as we go along.

I really recommend always keeping meeting notes as you go along and emailing them out straight afterwards. Notes that need to be “typed up later, and I’ll get them to you ASAP” have this habit of ending up on the back-burner, and the backlog just nags at my sense of what I should be doing. It’s less mental effort to do it along the way and then be done as soon as the meeting finishes.

The content needs some tweaking. I figured out different things that I should be asking everyone in each subsequent meeting. I’m keeping notes on these points so that I can be better about that in future meetings.

I also need to see if I can book a room for the not-me side of the meeting. I don’t mind taking my side of it frankly wherever I happen to sit, but I really want to be able to give everyone the opportunity to have privacy if needed. Anything to minimise barriers and discomfort.

Also, I’m wondering whether I should arrange similar weekly meetings with my direct business customers. I think the format and structure have something to offer to keep in touch with business needs and keep them informed of what is going on. I might go gauge interest tomorrow.

Now it’s the end of a very full day though, and all other thoughts can wait till tomorrow.

Why You Should Walk


But first a short digression.

Have you ever heard of Scrumpy? No? … Go to your local liquor store, do yourself a favour, and look for some.

Essentially, it’s apple cider, only better.

You know how a good cider can have just the right amount of sweetness? Not too sickly sweet, but also not too tart or sour? … You know how it just smoothly slides down your throat and makes you feel like one is never enough?

It Kicks

Okay, Scrumpy does away with all that; it is slightly too bitter, it rasps your tongue in a not entirely comfortable way, and it makes your head swim like the bouncer at the local club just decided it wasn’t your day… and yet… you still want a second one. Make that a third!

Scrumpy is a disorienting experience that’s very hard to resist. And it’s also why this post probably will not make as much sense to you as it did to me inside my head as I was writing it.


I have walked regularly for a while now. I think it was the FitBit that kicked me into gear. It may sound silly, but after a while with a FitBit I have started to want to please the dashboard. Every time it sends me a big smiley face when I make my goal for the day, it reinforces my desire to walk the next day. To make that 10,000 steps. To make the very active minute quota.

But I bore easily, so added on top of that I’m trying to walk as many streets as I can. Some I hit more often than others, but I try to add side-streets that I haven’t tried on each subsequent walk.

It is proving a great way to see things I normally don’t see.

To get a perspective I normally lack.


I still remember one of the earlier visits by my mother to Australia. (I’m originally from the Netherlands, so family have to make a 20-hour journey to come see me)

I had just started renting all by myself, and as you might imagine, being all by myself on the other side of the planet, furniture wasn’t accumulating like it does when you have aunts, uncles and grand-parents with things they don’t really need any more (that is: want an excuse to replace… any excuse will do).

After talking about the local home-maker centre I had been to in Prospect, we decided to go have a visit and look at some lounges that I had seen.

We took the train to Blacktown, and had a walk from there.

Anyone who knows the area will already see the problem with this statement. But we asked passing drivers about the distance, and they keenly suggested that it surely wasn’t more than a 15-20 minute walk away.

We don’t mind walking. So we walked.

For those less familiar with Australian/Sydney geography; the distance between Blacktown Station and Prospect Home-Maker Centre is about 4.5km; a good 60 minute walk in a hilly suburb.


My point is this.

When your main way of moving about is in a car, you lose your sense of perspective about the world around you. Everything seems closer. Everything seems smaller. Because the point about driving is to minimise the amount of time you spend in your car.

Larger Than Life

What made me think of that story on my walk today were the signs and toll gates along the M7.

How large would you say they are?

Okay, your car needs to fit through, so larger than a car. Actually, larger than a truck. But how large really?

Who cares! Zoom! Look, I’m through.
Who cares how large it is!?

But as I was walking along the M7 today, the path came level with one of those great big square green signs we all see every day as we drive past an exit. How large would you say one of them is?

Standing right next to it, I was startled to realise it was easily twice as tall as I am. And just about as wide. And yet, even standing next to it, in my memory it was much smaller.


I am starting to notice how many details I miss in the car. Beautiful spots along the way, comfortable benches set away from the traffic, short-cuts carved out between blocks of houses.

The problem with driving is that it is about destinations. There is nothing separating A from B, other than a high-speed blur that distorts the world.

It’s hard to get an appreciation for how large your neighbourhood is until you try to walk through it. Around it.

And in the process you’ll discover that between A and B there are stretches of distances that also contain places.


And you’ll get to enjoy noticing things you otherwise never notice. And you’ll get time to see things you otherwise only glimpse. And you’ll get to smell the world! Which in summer can be a very worthwhile experience in its own right.

And then… once you get home… you’ll get to crash on the lounge in front of Breaking Bad and do nothing, smug in the knowledge that you did more today than most other people did.

Day 70 – Final Demo Prep

Tonight I have spent another evening doing preparation work for the demo I am presenting tomorrow. Maybe I’m a bit too much of a perfectionist, but it just doesn’t feel right to give a presentation without planning out what is important to cover, and how I make it all into a cohesive whole.

And then fit it within the allotted 20 minutes of time.

There will be other sessions where other developers may present the detailed intricacies of parts of the material I am just going to gloss over, but I still need to make sure this presentation paints a compelling picture of why our common libraries add so much value to our software development.

I’ve decided on a mix of slides to paint the broader context, and hands-on demos for the pieces where “show, don’t tell” will bear more fruit.

And I’m hoping I can get through the material in closer to 15 minutes, so that there are also 15 minutes for questions not 10.

I have a suspicion that there will be questions about features I will not show. And any attempt to comprehensively cover them all would probably result in a full day presentation not just half an hour. I’d rather have questions drive a few more ad-hoc demo segments and be sure that I’ll cover whatever interests the people attending.

But now… I feel a little hazy, and I should stop thinking about this.

I went to another cycle class after work and before this prep-work. Three classes into cycle, it feels like it is getting easier but I still have a serious case of gym-brain.

Day 69 – Not As Fun As It Sounds

I am still a couple of months away from the first anniversary of my new management job. Over the past 10 months I have barely touched an IDE for anything more than a quick play.

Yesterday I installed Visual Studio 2013 RC, and today I am opening some libraries that have become only dim memories to have a poke around.

This Wednesday I am due to do a 20 minute presentation on the common code libraries that a lot of our applications were built on. I’m the expert on account of having written the bulk of the code in them. I am desperately trying to change that situation by encouraging the other developers to replace anything they consider broken or insufficient.

Although this angle hasn’t borne too much fruit yet, I’m now doubling down by adding some education to hopefully make the work ahead more visible and more in-demand.

In short, I’m kinda-sorta having a good evening with code that I haven’t seen in what feels like years. However, it *is* work, and I should be watching TV, reading a book, or shooting something.

I hope to find some time in the near future to do something substantive with the IDE though, and hopefully drop it into a public repository somewhere.

Day 51 – Fractional Resource Plan

The Problem

I like being systematic and scientific about things where I can. Today I had to figure out what could be done to add bandwidth at work so we can take on more projects.

Let me paint a hypothetical that has slightly easier numbers than the real-world scenario I was working through.

Let’s say:

  • I have no spare bandwidth across my existing employees
  • I have 100 days of work to do
  • Contractors are going to take about 1.5 times as long to complete tasks, due to lack of experience with the systems
  • Contractors need oversight, and for every day of original effort I will need to leave an employee on the work for 1/4 day to supervise and explain things

How hard can this be?


I just needed a number, so let’s start from the top:

  • 100 days of work done with contractors will take 150 days
  • 100 days of work will leave 25 days in-house for oversight

Done! … Oh, wait… the reason I was doing this was because I had no in-house resources left, so I need to contract out 25 days of already planned in-house work to free up in-house resources for oversight:

  • 25 days of work done with contractors will take 37.5 days
  • 25 days of work will leave 6.25 days in-house for oversight

Ah, but again, the same problem… still… 6.25 days is almost negligible, I think one more iteration and we’ll have this worked out:

  • 6.25 days of work done with contractors will take 9.375 days
  • 6.25 days of work will leave 1.5625 days in-house for oversight

And let’s just call that a rounding error. So the total then becomes:

  • Contract resource: 150 + 37.5 + 9.375 = about 197 days
  • In-house oversight: 25 + 6.25 + 1.5625 = about 33 days

It looks like we end up with an endless series of ever-more-fractional contract resources that we need to hire, but luckily it converges quickly. But I’d rather not work this out step-by-step in the future.
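The step-by-step calculation is simple enough to mechanise. Here is a small Python sketch of the same iteration (R=100, M=1.5, S=0.25 as in the example; the function and parameter names are my own invention):

```python
def contract_plan(R=100.0, M=1.5, S=0.25, tol=1e-9):
    # Each round: the outstanding work R goes to contractors (costing
    # R*M contractor days) and creates R*S days of in-house oversight;
    # that oversight becomes the next round's outstanding work.
    contractor = oversight = 0.0
    while R > tol:
        contractor += R * M
        oversight += R * S
        R *= S
    return contractor, oversight

c, o = contract_plan()
print(round(c), round(o, 2))  # 200 33.33
```

The loop converges in a handful of iterations, and already hints at the closed-form answers derived below.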

Mathematics to the Rescue

I decided that working out the formulas to get straight to the totals based on variable parameters for oversight and the contractor-multiplier would give me a handy tool to assess ballpark figures in the future, so here goes.

I got out some of my old University textbooks to get some inspiration for the right approach to work this out. Luckily my instinct to go straight to my number theory books quickly delivered a winning strategy.


To make the following formulas easier to work with, I use the following definitions throughout:

  • R = the number of in-house effort days to fill with contractors (100 in the example above)
  • M = the effort-multiplier when using external contractors; the more experienced and knowledgeable the contractors are in the systems they will work on, the closer this number approaches 1.0 (1.5 in the example above)
  • S = the supervision-multiplier for the amount of in-house effort that stays in-house as oversight (0.25 in the example above)
  • O = the cumulative total number of in-house oversight days needed to supervise the contractors we need
  • C = the cumulative total number of contractor days needed to resource the extra work

Total Oversight “O”

The formula for “O” is slightly easier to derive, so I’ll start there. First, I’ll quickly recap the example I worked through before:

  • O = 25 + 6.25 + 1.5625 + …
  • O = 100*0.25 + 25*0.25 + 6.25*0.25 + …
  • O = (R*S) + (R*S*S) + (R*S*S*S) + …

To get a completely accurate result I would have to keep going on indefinitely adding further ever smaller pieces of employees. (I hope HR doesn’t find out about my plans to slice employees into ever more fractional parts!)

With a bit of trickery though I can collapse that infinite formula into a much more compact version:

  1. O = (R*S) + (R*S*S) + (R*S*S*S) + …
  2. O = (R*S) + S * ((R*S) + (R*S*S) + (R*S*S*S) + …)
  3. O = (R*S) + S * (O)
  4. O – S*O = (R * S)
  5. O * (1 – S) = (R * S)
  6. O = (R * S) / (1 – S)

Step 1 is our original formula. In step 2 I isolate the first term and split out one “*S” from the remainder of the infinite series. The piece between brackets at the end of step 2 looks very familiar… it’s exactly what “O” is defined to be in step 1, so substituting it gives step 3. In step 4 I subtract “S*O” from both sides of the equation. In step 5 I factor out the “O” on the left hand side. And a quick division takes us to step 6 with a three-operation formula for the oversight component.

Now, note that the division by “(1 – S)” from step 5 to 6 is obviously only valid if S is less than 1. But then, if your supervision factor is 1 or above there is really no point trying to bring contractors in. I’m sure we’ve all worked with people like that.

Unless you are in the enviable position of having contractors available that are so good they do not need supervision at all, S will never be too close to 0 either.

My gut feel at the moment is that realistic values for S probably are somewhere in the range from 0.2 to 0.5; where it falls exactly depends on the complexity of the work you want to get done, and how much of the work you are prepared to entrust to contractors.

Using this formula on my original hypothetical suggests that in-house oversight would actually be: O = (100 * 0.25) / (1 – 0.25) = 33.3333…

Total Contract Resource “C”

After explaining the derivation of “O”, doing “C” is actually a little easier. Going back to the example again for a recap:

  • C = 150 + 37.5 + 9.375 + …
  • C = 100*1.5 + 25*1.5 + 6.25*1.5 + …
  • C = (R*M) + (R*S*M) + (R*S*S*M) + …

Solving this infinite series of fractional contractors follows a similar pattern to my earlier attempt:

  1. C = (R*M) + (R*S*M) + (R*S*S*M) + …
  2. C = (R*M) * (1 + S + S*S + …)
  3. C = (R*M) * (1 + S*(1 + S + S*S + …))
  4. C = (R * M) + (R*M)*S*(1 + S + S*S + …)
  5. C = (R * M) + S * (R*M) * (1 + S + S*S + …)
  6. C = (R * M) + S * C
  7. C – S*C = (R * M)
  8. C * (1 – S) = (R * M)
  9. C = (R * M) / (1 – S)

I won’t explain the steps, because the tricks used are pretty much the same as in the previous formula.

The bounds on S were already explained above. M should never normally be less than 1, unless you make a habit of hiring the wrong people. How high M could get depends on how much you are prepared to spend on a contractor.

Once again, if you are in the enviable position of having a contractor that is already familiar with your systems due to working with them before, then M could theoretically be 1.

If you are solving this problem for S = 0 and M = 1 though, I suggest that you just hire the contractor already because they are probably practically working for you anyway… and a salary is going to cost you far less than contract rates.

Realistically I’d say M is probably somewhere between 1.25 and 2. With values greater than 2, I’d suggest spending a little more and getting a better contractor instead.

Using this formula on my original hypothetical suggests that the required contract resource would actually be: C = (100 * 1.5) / (1 – 0.25) = 200.


To summarise, assuming you have no employees available at all in-house, then you can calculate the amount of contract resource and oversight you are going to need by picking:

  • R = number of days an employee could do it in
  • M = effort-multiplier for contractors (probably between 1.25 and 2)
  • S = in-house supervision-multiplier (probably between 0.2 and 0.5)

Then the total amount of in-house resource you will spend is: O = (R * S) / (1 – S).
And the total amount of contract resource needed is: C = (R * M) / (1 – S).

These are obviously fairly idealised formulas, but playing around with values of M and S around where you think they should be will give you a pretty good sense of what the range of possible real-world outcomes might look like.

And if you have some amount of employee resource available, you’ll probably always want to use that first and subtract it from R before doing anything else.
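Putting it all together, here is a minimal Python helper for the closed-form totals (again, the function and parameter names are my own, not anything official):

```python
def contract_estimate(R, M, S):
    """Closed-form totals for the geometric series derived above."""
    if not 0 <= S < 1:
        raise ValueError("supervision multiplier S must be in [0, 1)")
    oversight = (R * S) / (1 - S)
    contractor = (R * M) / (1 - S)
    return contractor, oversight

# The hypothetical from the top: 100 days of work, M = 1.5, S = 0.25.
c, o = contract_estimate(100, 1.5, 0.25)
print(c, round(o, 2))  # 200.0 33.33
```

The guard on S reflects the point made earlier: at S = 1 or above, bringing contractors in never terminates.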

Day 42 – Ideas vs Execution

Today will be header-image-less, because I just didn’t have inspiration. First days back from leave are always over quicker than any other day, so I didn’t have as much time to work with as usual. And I feel oddly tired.

In one of my many meetings today I had an opportunity to talk up an initiative I’m running with, and I always feel a bit self-conscious about that. Taking credit doesn’t come very naturally to me, whether it is due or not. Maybe it has something to do with having an inclination to gravitate towards “what needs to be done” and then doing it. It just doesn’t feel like it requires congratulations.

Stepping back a little while: I got congratulated for my efforts on a major project. Frankly, its success didn’t feel like it had a lot to do with me. Granted, I’d nudged it into a development structure at the start that seems to have paid off very well, but ideas are cheap, and to my mind some members of the development team have done much more to run with that particular ball than I have.

Execution trumps ideas every time.

Which is why it’s so funny that even today’s case (a knowledge sharing initiative) where the idea wasn’t mine, but the bulk of the legwork was, still makes me uneasy when it sounds to my ears I might be taking the credit.

At some point I’m really going to have to give myself a break and unabashedly take credit for something.

I just can’t help looking around for someone to object.

Floating Point One-Pager

Broadly speaking this post will go into increasing technical detail so that you may stop reading at the point where you either:

  1. Lose Interest
  2. Find the Answer you are After
  3. Lose the Ability to Follow My Garbled Writing

Note that regardless, this is by no means an exhaustive exploration of the topic. There is some significant glossing-over-details; I have merely tried to put “The Rules” into some context, and point the way to some problems you might want to delve deeper into by reading the references at the bottom for yourself.

After studying and digesting articles for a week, the only thing I can be certain of is that I have never fully comprehended all the dangers inherent in floating point numbers. And I never will. And neither, probably, will you.

All we’ll be able to do is take as much care as possible, and hope for the best.

Now, let’s go limit the damage we all might do.

Into the lair of IEEE 754 we go!

The Rules

If you don’t want to think, if you just want some rules-of-thumb to blindly follow, this is the section for you.

  • Multiplications and divisions are safe
  • Series of additions and subtractions should be ordered to operate on values of the closest magnitude first where possible
  • Do not compare floating point numbers directly
  • Do not use “Epsilon” to compare floating point numbers either
  • Use bankers rounding (round-to-even) whenever possible
  • Run code as 64-bit executables whenever possible
  • Never make any assumptions about how many decimal places are “good”, no, not even 2!

Comparing the Wrong Way

Most, if not all, programmers know that directly comparing two floating point numbers for equality does not work. If they are almost-but-not-quite-completely the same number, the code will fail.

0.1 + 0.1 + … (ten times) != 1.0

Rounding errors along the way will kill any exact comparison.

The most common solution that gets suggested is to check if the absolute difference between two numbers is smaller than some tiny amount, and declare the numbers equal if that is the case:

if (Math.Abs(a - b) < 0.00001)
    ... NOTE: WRONG WAY TO DO IT! ...

if (Math.Abs(a - b) < Double.Epsilon)
    ... NOTE: WRONG WAY TO DO IT! ...

The former will work or not depending on the magnitude of “a” and “b”; if the two numbers are large relative to the “0.00001” bias it will work, but if they are smaller or even close to the bias itself, then the code will suddenly start failing.

The latter will probably never work; it is essentially the same as requiring the two numbers to be exactly equal, because “Double.Epsilon” is the smallest positive value a double can represent, and the only difference smaller than the smallest representable value is no difference at all.

Do not use either of these approaches.
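A quick Python illustration of the magnitude problem with the first approach (the 1e-5 tolerance is chosen arbitrarily, as such tolerances usually are):

```python
# Near magnitude 1.0 the fixed tolerance behaves as intended...
a, b = 1.0000001, 1.0000002
print(abs(a - b) < 1e-5)  # True: genuinely close values compare "equal"

# ...but for small values it declares wildly different numbers equal.
c, d = 2e-6, 9e-6  # d is 4.5 times larger than c
print(abs(c - d) < 1e-5)  # True: falsely "equal"
```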

Comparing using Relative Error

It is much better to do a comparison based on relative error, because the difference will scale with the inputs; the author of The Floating Point Guide recommends the following:

static bool RelativeCompare(double a, double b, double epsilon)
{
    // Shortcut for exact equality; also handles equal infinities
    if (a == b)
        return true;

    // Near zero, relative error is meaningless, so fall back to a
    // fixed comparison scaled by the smallest normal double.
    // (Note: double.MinValue is the most *negative* double in .NET,
    // so it cannot be used here.)
    const double MinNormal = 2.2250738585072014E-308;
    double diff = Math.Abs(a - b);
    if ((a == 0) || (b == 0) || diff < MinNormal)
        return diff < epsilon * MinNormal;

    // Otherwise use relative error
    return diff / (Math.Abs(a) + Math.Abs(b)) < epsilon;
}

See The Floating Point Guide for a more detailed explanation and a link to an extensive test-suite for this code.

Comparing using ULPs

The most advanced solution to comparing floating point numbers is based on a key trick; when you re-interpret IEEE 754 floating point numbers as integers, the difference between two of those integers represents how many different floating point numbers exist between the two that were converted.

If you want a bit more explanation I suggest reading the 2012 edition of Comparing Floating Point Numbers by Bruce Dawson.

A .NET implementation of the relevant algorithms:

// Slow but safe version
static long DoubleToLong(double d)
{
    return BitConverter.ToInt64(BitConverter.GetBytes(d), 0);
}

// Fast but "unsafe" alternative (same signature, so pick one or the
// other; C# will not allow both definitions side-by-side)
static unsafe long DoubleToLongUnsafe(double d)
{
    return *((long*)&d);
}

static long UlpDistance(double a, double b)
{
    // Remap negative doubles so that integer ordering matches
    // floating point ordering
    long intA = DoubleToLong(a);
    if (intA < 0)
        intA = Int64.MinValue - intA;

    long intB = DoubleToLong(b);
    if (intB < 0)
        intB = Int64.MinValue - intB;

    return Math.Abs(intA - intB);
}

static bool UlpCompare(double a, double b, long distance)
{
    return UlpDistance(a, b) <= distance;
}

The UlpDistance function returns the distance between two double precision floating point numbers. If this returns:

  • 0 – the floating point numbers are exactly identical
  • 1 – the floating point numbers are adjacent
  • 2 – there is one possible floating point representation between them
  • etc.

And by using the UlpCompare function you can indicate how close two floating point numbers must be to be considered equal, in a scale-independent way. In essence, the magnitude of the bias scales with the size of the values being compared.

A distance of about a million is enough to reasonably establish equality.
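For the curious, the same bit-twiddling trick can be sketched in a few lines of Python, using struct to reinterpret the bits (this mirrors the C# code above; it is not any standard library API):

```python
import struct

def double_to_ordered_int(d):
    # Reinterpret the 8 bytes of an IEEE 754 double as a signed 64-bit
    # integer, then remap negative values so that integer ordering
    # matches floating point ordering.
    (i,) = struct.unpack("<q", struct.pack("<d", d))
    return (-2**63) - i if i < 0 else i

def ulp_distance(a, b):
    return abs(double_to_ordered_int(a) - double_to_ordered_int(b))

print(ulp_distance(0.1 + 0.2, 0.3))  # 1: they are adjacent doubles
print(ulp_distance(-0.0, 0.0))       # 0: same point on the number line
```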

Problem Scenarios

Multiplication and Division

It may at first seem counter-intuitive that multiplication and division would not pose any problems. The reason for this is fairly simple though.

To illustrate, let’s work in decimal and pretend we have 4 significant digits in our floating point representation:

(2.131 x 10²) x (1.943 x 10⁻⁴)
= (non-rounded interim value) 4.140533 x 10⁻²
= 4.141 x 10⁻²

Even when two floating point numbers have wildly different exponents, the act of multiplying or dividing simply shifts the exponent of the result. And on either side of the calculation we have a full complement of significant digits.

Addition and Subtraction

For exactly the same reason, additions and subtractions can be problematic when dealing with big differences in exponents:

(2.131 x 10²) + (1.943 x 10⁻⁴)
= 213.1 + 0.0001943
= (non-rounded interim value) 213.1001943
= 213.1
= 2.131 x 10²

With a single addition or subtraction this loss of precision is unavoidable, because at the end of the calculation everything has to fit again into a finite amount of precision.

However, if you add a large series of numbers it suddenly becomes important to think about your additions:

(2.131 x 10²) + (1.943 x 10⁻⁴) + … 999 times the same number
= 213.1 + 0.0001943 + … 999 more
= 213.1 + … 999 more
= 213.1 + 0.0001943 + … 998 more
= …
= 2.131 x 10²

Every time an addition is performed in the CPU, the result needs to fit within our floating point representation and gets rounded to the original value. If we however perform the additions the other way around:

(2.131 x 10²) + (1.943 x 10⁻⁴) + … 999 times the same number
= 213.1 + 0.0001943 + … 999 more
= 213.1 + 0.0003886 + … 998 more
= …
= 213.1 + 0.1943
= 213.2943
= 2.133 x 10²

It’s not a big difference in this case, but depending on the number of additions involved and the relative magnitudes of the numbers it can make more or less of a difference. And all for the sake of re-ordering some additions.

If you were really keen you could probably develop some LINQ extensions in .NET that automatically re-order addition and subtraction sequences into the sequence in which the result is most accurate.

For now the point is: consider whether the values you are adding have wildly divergent magnitudes, and where possible try to order them to keep values with the same magnitude closer together to optimise for the most accurate result.
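The effect is easier to provoke in binary than in my 4-digit decimal toy. In this Python sketch each small value is exactly half an ulp of the big one, so added one at a time they all round away, while added together first they survive:

```python
big = 1e16  # the ulp (gap to the next representable double) here is 2.0

# One at a time: 1.0 is exactly half an ulp, and the tie rounds back
# to the even mantissa, which is big itself, so every addition vanishes.
naive = big
for _ in range(1000):
    naive += 1.0
print(naive - big)  # 0.0: all thousand additions were lost

# Small values first: their sum (1000.0) is big enough to register.
grouped = big + sum([1.0] * 1000)
print(grouped - big)  # 1000.0
```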


Bankers Rounding

Have you ever used Math.Round in .NET? Were you a little confused at first? So was I.

You see, it uses Bankers Rounding by default (overloads exist where a rounding algorithm can be specified). When rounding 2.5, you’d usually expect to end up with 3, but Bankers Rounding doesn’t round all .5s equally. It rounds towards even, so 1.5 becomes 2, but 2.5 also becomes 2, then 3.5 becomes 4, and so on.

This is actually a very sensible default, but it isn’t well explained. The default rounding mode for IEEE 754 operations is the same, for the same reasons.

The problem with “normal” rounding is that it rounds 4 values down (0.1 – 0.4) and it rounds 5 values up (0.5 – 0.9). For one rounding operation this isn’t too big a deal, but most calculations round multiple times along the way. The compounding effect of this rounding bias is that results can slowly creep upwards with every rounding.

Bankers Rounding however on average rounds just as much up as it does down. As a result repeating rounding along the path of a calculation will jitter up just as much as down and on average not introduce any bias.

If you have end-users that might be confused by Bankers Rounding, then try to restrict “normal” rounding only to the last operation before display and keep using Bankers Rounding internally.
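
Incidentally, Python's built-in `round()` makes the same default choice as Math.Round, which makes for an easy demonstration:

```python
# Python's built-in round() uses round-half-to-even ("Bankers Rounding"),
# the same default as .NET's Math.Round: ties go to the nearest even digit.
bankers = [round(x) for x in (0.5, 1.5, 2.5, 3.5, 4.5)]
print(bankers)  # [0, 2, 2, 4, 4]

# "Normal" half-up rounding for display can be requested explicitly:
from decimal import Decimal, ROUND_HALF_UP
half_up = [int(Decimal(s).quantize(Decimal("1"), rounding=ROUND_HALF_UP))
           for s in ("0.5", "1.5", "2.5", "3.5", "4.5")]
print(half_up)  # [1, 2, 3, 4, 5]
```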

Representational Error

Below in the technical bits there is an explanation why IEEE 754 floating point numbers cannot precisely represent the value 0.1; it results in an infinite binary expansion:

0.1 (decimal) = 1.1001100110011… (binary) × 2^−4

As a consequence it is safe to say that most real-world values cannot be accurately represented in double precision floating point numbers; as soon as you parse a value from a text file or a NUMBER from a database into a double, it is no longer completely accurate.

Basically, any real-world value becomes inaccurate before you even perform any operations on it.

The only reason this isn’t patently obvious all over the place is because floating point numbers tend to get rounded again before display, and in most cases this rounds away the error that this parsing introduces.

0.1 (decimal) becomes 0.10000000000000000555111… once it is parsed. Display routines will never show that many decimal places for a double, but it’s only a matter of a few calculations before that error creeps into decimals that do show up. Nine additions of the same number is all it takes to turn this into a real error.

It is important to remember that errors are pervasive and unavoidable.

It’s all about strategically structuring code to limit the ways in which these errors can compound and inflate.
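
Python's decimal module can print the exact value a double stores, which makes the parsing error directly visible (Python floats are IEEE 754 doubles):

```python
from decimal import Decimal

# Decimal(float) reveals the exact value the binary double actually stores.
# The moment the literal 0.1 is parsed, it silently becomes the nearest
# representable double:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# Default display rounding hides the error...
print(0.1)             # 0.1
# ...but asking for more decimal places exposes it:
print(f"{0.1:.20f}")   # 0.10000000000000000555
```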

Loss of Scale Issues

This is a particularly insidious problem, because it tends to not be evident when a program is first written. It literally takes time for this problem to become evident, typically once a program is running in production, or long after a library is first written.

This problem exhibits itself when measuring elapsed time (since application start, or some fixed point in the past), or when data volumes might increase over time.

When the elapsed time is small, or calculated values operate on limited data, this means that the magnitude of values will also be small. As a result, a large amount of the precision of a float is still in the fractional part of the result.

As time passes, as data volumes grow, as the magnitude of results grows, less and less of the floating point precision is represented in the fractional part.

If you assume your values will always have 2, 3, 5, or any other number of accurate decimal places, you may one day be taken by surprise when that stops being true. A single-precision float has only 6–9 significant decimal digits; once the magnitude of your value goes into the millions, you cannot count on any accurate decimals at all. Doubles have more headroom at 15–17 significant digits, but it is still possible to run out.

And your calculations will start showing visible inaccuracies in any decimal places well before you hit the limits on the result.
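
The shrinking fractional precision is easy to measure with `math.ulp`, which returns the gap between a value and the next representable double above it (Python 3.9+):

```python
import math

# math.ulp(x) is the gap between x and the next representable double.
# The gap grows with the magnitude of the value:
for magnitude in (1.0, 1e3, 1e6, 1e9, 1e12, 1e16):
    print(f"{magnitude:8.0e}  ulp = {math.ulp(magnitude):.3e}")

# At 1.0 the gap is 2^-52 (about 2.2e-16); by 1e16 it has grown to 2.0,
# at which point no fractional digits survive at all.
```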

The Technical Bits

Double precision floating point numbers are 64-bit numbers, laid out as follows:

Double precision floating point bits

The meaning of these bits is expressed in the formula in the header of this post:

Double precision floating point interpretation


  • The sign-bit is set for negative numbers
  • The fraction supplies the binary “decimal” places in: 1.xxxxx…
  • The exponent scales this fraction across the range 2^−1022 to 2^1023

As a result, double precision can roughly express:

  • 15 significant decimal places of precision
  • Values as small as ±5.0 × 10^−324
  • Values as large as ±1.7 × 10^308

Magic with Integers – 1

Less obviously, but more remarkably: this carefully designed binary format means that if you re-interpret two doubles of the same sign as 64-bit integers and subtract these integers, the difference will tell you how many distinct double precision floating point numbers lie in between. (0 = equal, 1 = none in between, 2 = 1 in between, etc.)

Conversely, if you increment or decrement the integer representation of a floating point number, it will give you the very next larger or smaller floating point number that can be represented.
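
This bit-twiddling trick can be sketched in a few lines of Python, using `struct` to reinterpret the bits (`math.nextafter` requires Python 3.9+):

```python
import math
import struct

def double_to_ordinal(x: float) -> int:
    # Reinterpret the 64 bits of a double as a signed 64-bit integer.
    return struct.unpack('<q', struct.pack('<d', x))[0]

def ordinal_to_double(n: int) -> float:
    # The reverse reinterpretation.
    return struct.unpack('<d', struct.pack('<q', n))[0]

a = 1.0
b = math.nextafter(a, 2.0)  # the very next representable double above 1.0

# Adjacent doubles of the same sign have adjacent integer representations:
print(double_to_ordinal(b) - double_to_ordinal(a))  # 1

# Incrementing the integer representation yields the next double:
print(ordinal_to_double(double_to_ordinal(a) + 1) == b)  # True
```

(Note the "same sign" caveat from above: the integer layout is sign-magnitude, so the subtraction trick needs an adjustment for negative doubles.)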

Magic with Integers – 2

Between the fraction bits with their implied leading 1 combined with exponents up to 1075 (corresponding to 2^52), and the de-normalized values including 0, doubles can exactly represent all integers from −(2^53) up to +(2^53). Within these bounds all arithmetic with integer inputs and integer results will be exact at all times.

Because of the 52-bit fraction though, once you pass 2^53 you will only be able to exactly represent even numbers. And past 2^54, only multiples of 4, and so on.

The distance between representable integers doubles past each power of 2.

Floating Point Spacing

This is also true in the opposite direction, where up to 2^52 all halves can be exactly represented, and up to 2^51 all quarters, and so on.
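
A quick Python demonstration of the integer limits described above:

```python
# Integers are exact in a double up to 2^53. Beyond that only every second
# integer exists, then every fourth, and so on.
limit = 2.0 ** 53

print(limit + 1 == limit)           # True:  2^53 + 1 is not representable
print(limit + 2 == limit)           # False: 2^53 + 2 is
print(2.0 ** 54 + 2 == 2.0 ** 54)   # True:  past 2^54, multiples of 4 only
```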

If you were to plot marks of exactly representable integers in the double precision floating point domain you would have a ruler where towards the left, passing each power of two, the marks get twice as close together and to the right, passing each power of two, the marks get twice as distant.

Double Precision Floating Point Ruler

Representation Difficulties

The biggest problem with binary floating point numbers is that we live in a decimal world. Our reality has a nasty habit of thinking in terms of tenths, or hundredths, or thousandths.

Per the formula for floating point representations, something simple can prove impossible to represent:

0.1 (decimal) = 1.1001100110011… (binary) × 2^−4

To exactly represent a decimal 0.1, you need an infinitely recurring fraction in binary. As a result, the closest possible representation in binary that fits within a double translates to approximately 0.10000000000000000555111… in decimal.

And depending on how these tiny inaccuracies stack it might get better or worse.

If you multiply 0.1 by 10.0, you get an exactly represented 1 thanks to some fortuitous rounding. If you add 0.1 ten times, however, you get something slightly below 1. The type and order of arithmetic operations can make the problem worse, or, purely coincidentally, make it vanish as if it never existed.
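
Both routes are easy to check in Python (floats are IEEE 754 doubles):

```python
# The same value, two arithmetic routes, two different answers:
print(0.1 * 10 == 1.0)          # True:  one multiply, one lucky rounding
print(sum([0.1] * 10) == 1.0)   # False: nine additions let the error build
print(sum([0.1] * 10))          # 0.9999999999999999
```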

Internal Representation Trouble

If all this wasn’t difficult enough yet, in a well-meaning effort to improve accuracy, the x87 floating point instructions have 80-bit internal registers available. Arguably use of this extra precision does not harm the accuracy of calculations, but depending on when they do or do not get used, two applications executing the exact same sequence of operations may provide different results with no easy way to determine which of the two is the more accurate result.

It goes something like this:

  • Calculations that remain in internal registers use 80-bits
  • Except when they get written to/from memory, in which case they are rounded to 64-bit. When and whether this happens depends on how good the optimizer of the compiler is, and it may alter from version to version of the compiler itself.
  • Or if the FPU is switched into a mode where it forces 64-bit precision internally. From what I have read it is highly likely the .NET runtime does this, but none of the sources are committing to saying so outright.
  • Also, 64-bit executables avoid the problem entirely: they use SSE rather than x87 instructions for floating point, and SSE has never had, and will never have, 80-bit registers.

All-in-all, it is best to try and force applications to use 64-bit internally to ensure repeatable behaviour. And while at it, switch to a 64-bit version of Excel, and hope that Microsoft does not decide to do anything clever internally.

These discrepancies are the most annoying in dev/test scenarios where developers and testers get different results. It can be very hard to be certain whether a difference is due to discrepancies in accuracy, or if either algorithm is incorrect.

Decimal To The Rescue!

Working in .NET, one possible solution is to use Decimal type wherever possible.

The reason this helps has less to do with the fact this type uses 96 bits for the representation of the digits. It has a lot more to do with the exponent. The formula for Decimal is as follows:

(−1)^s × (96-bit integer number) × 10^−e, where 0 ≤ e ≤ 28.

The exponent is applied to powers of 10, which means that commonly occurring real-world fractions can be exactly represented. If you deal with money, a decimal can contain 2 exact fractional digits.

The only down-side is performance; you pay for these awkward-for-computers-to-process powers of 10 with up to 20-fold reduction in performance for arithmetic operations. This is definitely a worst-case-scenario number, but regardless it is a steep price to pay.
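
Python's decimal module offers the same essential property, a power-of-ten exponent, so the benefit is easy to demonstrate (note that Python's decimal is arbitrary-precision with a context, not .NET's fixed 96-bit layout):

```python
from decimal import Decimal

# Ten 10-cent items sum to exactly one dollar in decimal...
price = Decimal('0.10')
print(sum([price] * 10) == Decimal('1.00'))  # True

# ...but the same sum in binary floating point misses:
print(sum([0.10] * 10) == 1.00)              # False
```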


If you want to investigate further yourself, consider the following sources:

  1. Wikipedia: Double-precision floating-point format
  2. The Floating-Point Guide
  3. What Every Computer Scientist Should Know About Floating-Point Arithmetic
  4. IEEE 754-2008

Understanding Code

Spot The Invariant

Writing correct code is not, in itself, the most difficult or important thing programmers have to be able to do. By far the more difficult and crucial part of my profession is understanding code.

Understanding code is a factor in all of the following activities:

  • Finding bugs in existing code
  • Enhancing existing code
  • Using libraries from new code
  • Keeping a mental model of a system

And all of these activities are distributed over larger parts of a code base, whereas writing correct code is essentially a very localised activity. (Note that designing the architecture for an application or system can be much more difficult, but it only needs to be done once, and is therefore not as big a part of the programmer's life as understanding is.)

And understanding isn’t always easy…

static int calculate(int n)
{
    int a = 0, b = 1;
    int x = 0, y = 1;
    while (n != 0)
    {
        if ((n & 1) == 0)
        {
            int temp = a;
            a = a * a + b * b;
            b = temp * b * 2 + b * b;
            n = n / 2;
        }
        else
        {
            int temp = x;
            x = a * x + b * y;
            y = b * temp + a * y + b * y;
            n = n - 1;
        }
    }
    return x;
}
In fact, this fragment goes out of its way to give as few clues about what it does as possible, and yet it implements a very simple and well-known calculation.

Two Viewpoints

Every language makes trade-offs in its syntax between being terse and being understandable. Perl[1] is a famously terse language, where randomly mashing the keyboard is almost guaranteed to result in a valid program, whereas Java is well known to be verbose but relatively easy to understand.

Terse syntax has many obvious benefits. The terser the syntax, the more compact the code, the more source will fit on a single screen. And the more source fits on a screen the broader the overview you can get at a single glance over a fragment of code.

This is the rationale of many modern languages like Boo and Python and Ruby for example.

It seems to me though that the danger of making languages terse is that it optimises for ease of local understanding over ease of remote understanding. No matter how brief the source of a method is, if the method signature doesn’t give (m)any clues about the encapsulated functionality then using the method from another location in the source becomes needlessly difficult.


Naming is the first line of defense against incomprehensible code, and the only one that is guaranteed to exist in all languages (as far as I am aware). Classes get names, methods get names, often parameters get names. If these names are chosen well, both local and global understanding are improved. This is why naming is such a big deal, and everyone intuitively understands this.

But often, names alone do not make the full constraints on parameters clear. This is where strongly typed languages have a further benefit, and even more so if there is at least optional explicit typing. Every parameter and return value that has a type implicitly tells something further about the meaning of the method (we’re operating on a collection, it holds strings, the result is a number, etc.). I concede that small applications may be able to get away with pervasive dynamic typing, but larger systems often compensate by adding unit tests whose implicit purpose is to make sure methods operate on the right type of arguments and return the right type of results. Moving the typing information out of the method signature and into tests is, in my opinion, not progress.

Beyond this there are many more mechanisms variously employed to make code more understandable (documentation comments, code contracts[2], unit tests, Word documents), but as you stray further and further from the actual source code and method signatures themselves, the connection between the code and its constraints becomes ever more tenuous. Often this requires intervention from the IDE, third-party tools, and build checks, which move ever further away from the point where a developer is trying to understand the code in front of them.


I don’t really have a one-size-fits-all answer or a ready-made recipe to make code better, but here are a few suggestions.

  • Use good names – don’t just look at this from the local perspective; what would a method invocation look like? Is the invocation self-documenting? For methods with multiple parameters, try and use the parameter names at the invocation site if your language will allow you to. Consider using names to compensate for the lack of other mechanisms; if your language of choice has no explicit typing, maybe calling an argument “stringsToEliminate” is a good idea? Maybe calling an argument “positiveInteger” is a good idea? But don’t make anything more specific than it needs to be.
  • Use good types – even when typing parameters, pick the broadest type that will work. In .NET the compiler will keep telling you to use collection interfaces rather than explicit collection types. And sometimes creating a new type just so you can make one or more signatures more explicit is a good trade-off; if a method should only take a value generated by a specific collection of other methods, then typing it “double” is probably not as good an idea as using a custom “Temperature”.
  • Use good tools – if you use any mechanisms outside the language per se, such as code contracts or source documentation, then you must have supporting tools as well. Source documentation is useless unless you have an IDE or tool that can “transport” this documentation from the method it is on to the place where it is invoked. Once you’re forced to go to the source of the implementation anyway, odds are the terse method body itself explains what is going on more concisely.
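
To illustrate the custom-type suggestion, here is a hypothetical sketch in Python; the Temperature name and helper functions are invented for illustration:

```python
from dataclasses import dataclass

# A hypothetical "Temperature" wrapper: a signature taking Temperature
# documents and enforces more than one taking a bare float.
@dataclass(frozen=True)
class Temperature:
    celsius: float

def parse_temperature(raw: str) -> Temperature:
    return Temperature(celsius=float(raw))

def heating_required(current: Temperature, target: Temperature) -> bool:
    # Both arguments are guaranteed to have come through a Temperature
    # constructor; an unrelated float can no longer slip in by accident.
    return current.celsius < target.celsius

print(heating_required(parse_temperature("18.5"), Temperature(21.0)))  # True
```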

But first and foremost, pick a language that is suited to the problem at hand. Python is not a bad language, but I’m not sure it’s a good language for systems programming; it probably is better suited to web systems that often have many small self-contained features. Brevity is not a bad thing, as long as it doesn’t force you to work around an inability to use good names, or good types, or good tools.

Back to the Example

So, what of the example at the start of my post? It would have been much less opaque had the signature been along the following lines…

/// <summary>
/// Calculate the n-th Fibonacci number using an
/// O(log n) algorithm.
/// </summary>
static int Fibonacci(int n)
{
    Contract.Requires(n < 47, "Result will overflow int");
    …
}

Making the implementation itself easy to understand requires a few pages of explanation, and I’ll leave that as an exercise to the reader.


  1. Apparently, Perl users are unable to write programs more accurately than those using a language designed by chance – source: Lambda-the-Ultimate
  2. As of .NET 4, there’s a very useful Code Contracts mechanism available to C# developers; there are even plug-ins for the VS2010 IDE that can make these contracts visible as an implied part of the pop-up method documentation

On Code Libraries

Libraries of common code have a bad habit of growing wild. I currently maintain one such library, and at just under 25k lines of code it is getting a little unwieldy and every change risks unintended consequences.

The library seems to keep gobbling up more and more additional code, adding complexity and further headaches.

The Problem

In an ideal world the current version of the library would only contain code that is actually used by more than one application. The central purpose of a code library is to avoid duplicated effort.

In reality what has happened is that any variation on any theme that already exists in the library gets sucked into the library as well, regardless of whether there will ever be a second use for said variation.

The problem lies in the structure of the library. It wasn’t written to be modular in the Dependency-Injection sense.

Let’s say the library contains a carefully crafted component to deal with file IO, from monitoring an input directory, to file locking, to archiving and clean-up. Then an application comes around and it needs exactly that functionality… exactly… except, the scanning algorithm is all wrong; the files need to be returned sorted on that name between those underscores at the end there, see?

Now there are two possible approaches.

Either the application copies all the code of the component and adjusts the scanning algorithm in its local copy. Yes, I can hear you cringing at the mere thought of branching the code in this way. Then there are two versions to maintain.

The other solution is not much better though, because that involves implementing a feature in the library that only makes sense in the context of that single application. An extra set of branches at every extension point, and keeping fingers crossed that nothing was overlooked and no bad interactions with pre-existing code were introduced.

The Solution

I already gave it away above really. Writing the library with a foundation in Dependency Injection will give applications the ability to use a whole component whilst replacing just a single part of it with its own implementation.
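
As a sketch of the idea (all names here are hypothetical), a component can accept its scanning strategy as an injected dependency, shown here in Python:

```python
from typing import Callable, List

def default_scan(names: List[str]) -> List[str]:
    # The library's common-scenario scanning strategy: plain name order.
    return sorted(names)

class InputDirectoryProcessor:
    """A library component whose scanning part is replaceable."""
    def __init__(self, scan: Callable[[List[str]], List[str]] = default_scan):
        self.scan = scan  # the injected, replaceable part

    def process(self, names: List[str]) -> List[str]:
        return self.scan(names)

# The one application with unusual needs replaces only the scanner,
# keeping the rest of the component intact:
def sort_by_suffix_after_underscore(names: List[str]) -> List[str]:
    return sorted(names, key=lambda n: n.rsplit('_', 1)[-1])

standard = InputDirectoryProcessor()
custom = InputDirectoryProcessor(scan=sort_by_suffix_after_underscore)
print(standard.process(["b_2.txt", "a_3.txt", "c_1.txt"]))
print(custom.process(["b_2.txt", "a_3.txt", "c_1.txt"]))
```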

Discussions on why Dependency Injection is a good idea tend to focus on test-ability, or auto-wiring of the dependencies, or component lifetime management.

In reality, the killer feature is the ability to let a code library define the common-scenario assembly of code parts into reusable components, whilst allowing individual applications to replace any one or more constituent parts with their own as needed.

With DI in-place it becomes possible to enforce a strict rule that unless a code variation or component or code part is actively used by more than a single application it will not be let into the library. When the first application uses some code it will be private to the application. Only when a second use actually eventuates does it get considered for lifting into the library.

I plan to make that the new hard-and-fast rule as DI gets introduced into the code base of the library.

I’ll do a more in-depth post about DI and my opinions on its correct use sometime soon.