Ep. 3 – Throw Away Your Unit Tests

In Episode 3, Allen Hurst and Mike Abney are joined by Brett Bim, Mike Deck, and Mike Doberenz to discuss unit testing. Alternate titles for this episode were “Tests? We don’t need no stinking tests!” and “A Fistful of Mikes.” Episode topics include:

  • Testing doesn’t give you what you think it does.
  • Throw away your unit tests!
  • Behavior-Driven Development (BDD)
  • Tradeoffs in automating regression testing.

4 thoughts on “Ep. 3 – Throw Away Your Unit Tests”

  1. Great podcast. One interesting way to look at automated tests is, rather than categorizing by scope (unit vs integration vs system/regression), categorizing by speed. As a big fan of behavior-driven development, I’m less concerned about what the scope of my tests is…they assert some behavior that I want to exist…I don’t care where it lives or how it manifests itself. What I really care about is how quickly I can get the feedback that those tests provide. Scope only enters the equation when I need to guard against brittle tests (those that fail even though the underlying behavior they assert still works).

    Personally, while I still occasionally use the terms “unit tests” and “integration tests”, what I really mean is “fast tests” and “slow tests” (where ‘slow’ is like 100ms or more). That distinction has a lot more value to me.
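
    A rough sketch of what I mean, assuming JUnit 5 (the class, package, and test names are made up purely for illustration): the tag says how fast the feedback is, not what “kind” of test it is.

      import static org.junit.jupiter.api.Assertions.assertEquals;

      import org.junit.jupiter.api.Tag;
      import org.junit.jupiter.api.Test;

      // Tests are tagged by how quickly they give feedback, not by scope.
      class OrderPricingTest {

          @Test
          @Tag("fast")   // pure in-memory logic, runs in well under 100ms
          void appliesVolumeDiscount() {
              assertEquals(90, new OrderPricing().priceFor(10, 10));
          }

          @Test
          @Tag("slow")   // would touch a real database, so it goes in the slow bucket
          void persistsPricedOrder() {
              // ...exercise the repository against a real datastore here...
          }
      }

      // Minimal class under test, just so the sketch is self-contained.
      class OrderPricing {
          int priceFor(int quantity, int unitPriceInDollars) {
              int total = quantity * unitPriceInDollars;
              return quantity >= 10 ? total - total / 10 : total;   // 10% volume discount
          }
      }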

    1. Good call, Ben. The fast and slow distinction is probably one of the most helpful. I have found myself in a project that had a lot of slow (100ms? try 15s!) integration tests. We have been steadily increasing our coverage through fast tests, and are now on the cusp of separating them out so that we have a “fast” continuous integration run that is our primary feedback and a “slow” CI run that runs less often (but probably more than nightly) as a second line of defense.
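
      The mechanics of the split can stay pretty small, too. Assuming JUnit 5 tags like the ones sketched above plus the junit-platform-suite module (the package name is just illustrative), each CI run simply points at its own suite:

        import org.junit.platform.suite.api.IncludeTags;
        import org.junit.platform.suite.api.SelectPackages;
        import org.junit.platform.suite.api.Suite;

        // Primary feedback: run on every commit, fast tests only.
        @Suite
        @SelectPackages("com.example.app")
        @IncludeTags("fast")
        class FastFeedbackSuite {}

        // Second line of defense: run a few times a day, slow tests only.
        @Suite
        @SelectPackages("com.example.app")
        @IncludeTags("slow")
        class SlowFeedbackSuite {}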

      Having said that, I think that there is also value in grouping tests (at least logically) based on who came up with the script. This is similar to what we often do by separating them by scope, but subtly and importantly different. For example, when designing, the developer might come up with a very small script to test a specific “unit’s” capabilities/behavior. So that is the “developer test” group. Then, the developer and customer might collaborate on clarifying some specific behavior at a similar or slightly larger scope. That would be a “customer specified” group. Note that this would be different from the group specified almost entirely by the customer, which is commonly called the “acceptance” tests. The idea is that as the groups become more “customer-involved”, the tests rise in priority or importance. They should also be less brittle. Tying this back to the podcast, I believe that what Deck and Doberenz are saying is that the “developer test” group is what could generally be thrown away.
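
      To make that concrete, a hypothetical sketch (again assuming JUnit 5; the tag names and behaviors are invented for illustration) might tag each test with who specified it, orthogonally to its scope or speed:

        import org.junit.jupiter.api.Tag;
        import org.junit.jupiter.api.Test;

        // Tags capture who came up with the script, not how big the test is.
        class CheckoutBehaviorTest {

            @Test
            @Tag("developer")           // design-time script the developer wrote alone; cheapest to throw away
            void cartRecalculatesTotalWhenItemRemoved() { /* ... */ }

            @Test
            @Tag("customer-specified")  // behavior clarified together with the customer
            void freeShippingAppliesOverMinimumOrder() { /* ... */ }

            @Test
            @Tag("acceptance")          // written almost entirely by the customer; highest priority, least brittle
            void returningCustomerSeesSavedShippingAddress() { /* ... */ }
        }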

      What do you think?

  2. Great podcast. I really like the paint-by-numbers analogy, Brett. Regardless of the countless problems with automated testing, the feedback is still so much faster than manually exercising the application.

  3. This podcast episode has been really inspiring. There are so many details that could be elaborated on to improve the efficiency of testing that I’m sure I’m going to listen to the episode again and take some notes to work on.

    Keep it up at this level!
