JUnit seems to have come under fire recently, with articles such as this, and posts like this one from Geoff and this one from Cedric. The Artima article complains that JUnit's reporting features are underdeveloped, and Cedric and Geoff want to see success messages as well as failure ones.
JUnit is a ‘unit testing’ tool. It’s supposed to be run many times a day. Its output is supposed to be transient. All the tests are supposed to pass all the time. Red bar / green bar. Pass or fail. If all the tests are passing it should remain silent. If a test fails, print a useful message. If you have a suite of thousands of tests, how easy is it to find 1 failure message if it’s buried inside 999 inane ‘test passed, foo does equal foo’ printouts?
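The silence-on-success principle is easy to demonstrate. This is a toy sketch in plain Java, not JUnit’s own API: passing checks print nothing, only failures speak up, and the summary line is the red bar / green bar.

```java
// Toy illustration of silence-on-success (plain Java, not JUnit's API).
public class SilentOnPass {
    static int failures = 0;

    // Print nothing on success; only a failure produces output.
    static void check(String name, boolean condition) {
        if (!condition) {
            failures++;
            System.out.println("FAIL: " + name);
        }
    }

    public static void main(String[] args) {
        check("foo equals foo", "foo".equals("foo")); // passes: silent
        check("arithmetic", 2 + 2 == 4);              // passes: silent
        System.out.println(failures == 0
            ? "GREEN BAR"
            : "RED BAR: " + failures + " failure(s)");
        // → GREEN BAR
    }
}
```

With a thousand checks, the output is still one line when everything passes, and exactly one line per genuine failure when it doesn’t.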
The same goes for reporting. It is actually possible to get detailed reports from JUnit as XML, which can be processed by an Ant task to produce nice-looking web pages, but if it’s being used correctly, all the tests would always be at 100%, as you don’t commit if any of the tests are failing, right?
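For the record, that pipeline uses Ant’s real `junit` and `junitreport` tasks; the target name, directories, and file pattern below are illustrative, not prescribed.

```xml
<!-- Sketch of an Ant target: run the tests, emit XML, render HTML.
     Target name, directories, and the test file pattern are illustrative. -->
<target name="test-report">
  <junit printsummary="off" haltonfailure="no">
    <classpath refid="test.classpath"/>
    <formatter type="xml"/>
    <batchtest todir="build/test-results">
      <fileset dir="test" includes="**/*Test.java"/>
    </batchtest>
  </junit>
  <junitreport todir="build/test-results">
    <fileset dir="build/test-results" includes="TEST-*.xml"/>
    <report format="frames" todir="build/test-report"/>
  </junitreport>
</target>
```

Which rather proves the point: the web pages this generates should be uniformly, boringly green.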
Bolting on extra features blurs the line between functional testing and unit testing, and I for one am happy for JUnit to remain clearly focussed on doing one job very well, which it does admirably.
Side note: The Artima article is called ‘Why we refactored JUnit’. Following neatly on from my earlier post, what they did was neither big-R nor little-R. What they actually did was write a brand new, JUnit-testcase-compatible tool from the ground up. Refactoring is defined as ‘improving the design of existing code’. In the strictest sense the only people who can refactor JUnit are the JUnit committers and contributors, and the result would still have been JUnit, with all the same behaviour, but with a cleaner internal structure.