Not quite absolutely nothin’, but I am starting to wonder.
More tests means more coverage means more green that changes to red at the slightest change. 100% test coverage means 0% freedom to change a line of code without changing a test as well.
At the start of the month we retired a truckload of tests at work, and the world didn’t end. We instituted a rule that any test that stays red for 3 consecutive days gets terminated with extreme prejudice. Always red = no value.
But I have been thinking. What is the value of a green test? I mean, a test that is always green. A test that has never been red is no more valuable than a test that never goes green. But it is so hard to let go; so tempting to think that a test surely, inherently, has value.
But do they really?
What makes a test worthwhile? What are tests for? How many?
Bug-free software is a white whale. Every test is a constraint on change. Like any other code, a test isn’t free; keeping it around has a cost. So, I guess the answers are “when it proves a real invariant”, “giving us confidence where we need it”, and “just enough”.
I just need to work out a practical application of those holistic answers. But it definitely starts with “less is more”.
My mind has been steeped deep in the quagmire of automated testing for the past few days. If ever there were an area of software development where I wished for a silver bullet that I could just load and fire…
I’ve gone from a job where testing was predominantly a lightly automated activity, to a job where large swaths of the testing effort are automated to a high degree. I definitely prefer the latter in principle, if only there were a definitive book of recipes and best practices to follow.
I enjoyed watching the Pluralsight video on Code Testability; it has the best explanation of what makes testing code hard that I’ve ever seen. The side-by-side examples in which he shows how using dependency injection increases what he calls the “sphere of influence” (or the directly testable surface) of the code were an A-Ha moment for their sheer clarity.
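The gist, sketched in Java rather than the video’s own examples (all the class names here are mine): when a dependency is constructed internally, a test cannot control it, but once it is injected the collaborator can be swapped for a stub, pulling the behaviour inside the directly testable surface.

```java
import java.time.LocalTime;

// Hard to test: the clock is baked in, so a test cannot
// control what time this class sees.
class GreeterHardwired {
    String greet() {
        return LocalTime.now().getHour() < 12 ? "Good morning" : "Good afternoon";
    }
}

// Testable: the dependency is injected, so a test can pass a fixed clock.
class Greeter {
    interface Clock { int hour(); }

    private final Clock clock;

    Greeter(Clock clock) { this.clock = clock; }

    String greet() {
        return clock.hour() < 12 ? "Good morning" : "Good afternoon";
    }
}

public class Demo {
    public static void main(String[] args) {
        // In a test, inject a stub that always reports 9 a.m.
        Greeter morning = new Greeter(() -> 9);
        System.out.println(morning.greet()); // prints "Good morning"
    }
}
```

The production code would inject a real clock; the test injects a lambda. Same class, but now both branches of `greet()` are reachable on demand.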
And another video, one that has gotten lost in the blur of my memory, made a great point about keeping tests descriptive and linear. Unit tests are easy to write because they do not branch or loop. They test one thing well at a time, and as a result feel more reliable; being able to read and trivially understand a test is a major strength.
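That linearity is easy to picture. A good unit test is a straight line of arrange, act, assert, with a name that states the one thing it checks; this sketch uses a made-up class and no test framework, just to show the shape:

```java
// A tiny hand-rolled test: no if, no loop, one behaviour per test.
class PriceCalculator {
    int totalInCents(int unitPriceInCents, int quantity) {
        return unitPriceInCents * quantity;
    }
}

public class PriceCalculatorTest {
    static void total_is_unit_price_times_quantity() {
        // Arrange
        PriceCalculator calculator = new PriceCalculator();
        // Act
        int total = calculator.totalInCents(250, 4);
        // Assert
        if (total != 1000) throw new AssertionError("expected 1000, got " + total);
    }

    public static void main(String[] args) {
        total_is_unit_price_times_quantity();
        System.out.println("ok"); // prints "ok"
    }
}
```

If a test needs a loop or an if, that is usually a hint it is testing more than one thing at once.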
And yet… it bothers me that a lot of the hands-on examples in videos still show how to test that an “ArgumentException” gets thrown. I blogged about exceptions what feels like an eternity ago (Exceptions – 3, specifically), and ArgumentExceptions should not be caught; they aren’t for code to deal with, they are for code to try and avoid at all cost. So why test for them? Like… ever?
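To make that concrete in Java terms (where the analogue is IllegalArgumentException; the names here are mine): an argument exception signals a programming error, so the caller’s job is to never trigger it by validating up front, not to handle it, and a test that merely re-asserts the guard clause proves nothing you didn’t already know by reading it.

```java
class Account {
    private int balanceInCents;

    // Guard clause: a negative deposit is a bug in the caller,
    // signalled with IllegalArgumentException.
    void deposit(int amountInCents) {
        if (amountInCents < 0)
            throw new IllegalArgumentException("amount must be non-negative");
        balanceInCents += amountInCents;
    }

    int balance() { return balanceInCents; }
}

public class Caller {
    public static void main(String[] args) {
        Account account = new Account();
        int amount = 500;
        // The caller avoids the exception by validating up front...
        if (amount >= 0) account.deposit(amount);
        // ...rather than wrapping the call in try/catch.
        System.out.println(account.balance()); // prints 500
    }
}
```

The worthwhile test here is the one about the balance; the guard clause is documentation for the caller, not behaviour a consumer relies on.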
There are plenty of proto-guides on (unit-)testing best practices, but few if any get beyond the obvious: isolate your test cases, pick structured names, be wary of mocking, test one thing only. But that still doesn’t really give any guidance on what is worth testing. Writing tests is easy enough; writing the right tests, and no more than necessary, is the hard part.
Are there resources out there I haven’t found yet?