On Tue, Nov 25, 2008 at 12:52 AM, Ben M. [email protected] wrote:
> ... make your acceptance tests pass, that make you think your acceptance
> ... actually write enough features to do this you’ll end up doing much
>
> Here is how I look at the two sets of tests…
>
> Features at the application level (acceptance tests) instill more
> confidence in me about the correctness of the system’s behavior.

CONFIDENCE!

That, and, as Kent Beck describes today, responsible software, are why
we do testing at all.

> Object level code examples (unit tests) instill more confidence in me
> about the design of the system.
>
> With acceptance tests passing we have no guarantee about the state of
> the design. Remember, TDD/BDD naturally produces easy-to-test objects,
> and by skipping object level examples you run the risk of creating
> dependency-laden, highly coupled objects that are hard to test. (Just
> think, you can make all of your features, for a web app, pass by
> writing the app in PHP4 with no objects at all.)

Which is not an inherently bad deal, if that’s your comfort zone, and
if that’s the comfort zone of everybody on your team.
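To make that design risk concrete, here’s a minimal Ruby sketch (the
checkout and gateway names are invented for illustration) of the
difference object-level examples tend to drive out:

    # Dependency-laden: the collaborator is hard-wired, so any example
    # of CoupledCheckout drags the real gateway (and whatever it talks
    # to) along with it.
    class CoupledCheckout
      def complete(order)
        PaymentGateway.new.charge(order[:total])
      end
    end

    # The shape TDD/BDD tends to produce: the collaborator is injected,
    # so an object-level example can hand in a mock and stay isolated.
    class Checkout
      def initialize(gateway)
        @gateway = gateway
      end

      def complete(order)
        @gateway.charge(order[:total])
      end
    end

The second version is no more code, but it’s the version you arrive at
naturally when you have to write an isolated example for it first.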
Someone at the start of this thread had wondered what people had learned
when they went through this process of balancing FIT tests with unit tests.
I can speak to this a bit. Maybe more than a bit.
When I was working in .NET with FitNesse and NUnit, we had very high
levels of coverage in NUnit. Early on in one project I told Micah M.
(who co-created FitNesse with Bob Martin) that I was concerned about
the duplication between our FitNesse tests and NUnit tests and
questioned the value of keeping it.
Micah pointed out reasons that made absolute 100% perfect sense in the
context of the project we were working on. The customers were
encouraged to own the FitNesse tests, which were stored on a file
system and backed up in zip files, while the NUnit tests were stored in
Subversion with the code. The FitNesse fixtures were stored with the
application code, distant from the FitNesse tests.
In order to foster confidence in the code amongst the developers,
having a high level of coverage in NUnit made sense, in spite of the
duplication with some of the FitNesse tests.
That duplication, by the way, was only in terms of method calls at the
highest levels of the system. When a FitNesse test made an API call,
that message went all the way to the database and back.
When an NUnit test made the same call, that message typically got no
further than the object in the test, using stubs and mocks to keep it
isolated.
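In the Ruby terms of this thread, that contrast might look like this
(TransferService and the account roles are made up; the syntax is
2008-era RSpec):

    class TransferService
      def transfer(source, target, amount)
        source.debit(amount)
        target.credit(amount)
      end
    end

    describe TransferService do
      it "debits the source and credits the target" do
        source = mock("source account")
        target = mock("target account")
        source.should_receive(:debit).with(50)
        target.should_receive(:credit).with(50)
        # The message stops at these mocks -- nothing touches a database.
        TransferService.new.transfer(source, target, 50)
      end
    end

Both levels exercise the same message; only the depth it travels
differs.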
Now fast forward to our current discussion about Cucumber and RSpec.
As things stand today, we tend to store .feature files right in the
app alongside the step_definitions and the application code.
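For example, a conventional Cucumber layout of the time keeps all of it
in one source tree (the feature name here is made up):

    app/
      controllers/
      models/
      views/
    features/
      signup.feature
      step_definitions/
        signup_steps.rb
      support/
        env.rb

Features, step definitions, and application code all live in the same
repository and change in the same commits.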
The implications here are different from having a completely decoupled
acceptance testing system. I’m not saying that abandoning RSpec or
Test::Unit or whatever is the right thing to do. But I certainly feel
less concerned about removing granular code examples, especially on
Rails/Merb controllers and views, when I’ve got excellent coverage of
them from Cucumber with Webrat. Thus far I have yet to see a case where
I couldn’t quickly understand a failure in a view or controller based
on the feedback I get from Cucumber with Webrat.
But this is mostly because that combination of tools does a very good
job of pointing me to the right place. This is not always the case
with high-level examples. If you’re considering relaxing a requirement
for granular examples, consider each case separately, and factor in
how granular the feedback from your toolset will be when you make that
decision.
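To sketch what that Cucumber-with-Webrat coverage looks like (the page,
fields, and copy are invented; the steps are Webrat’s stock ones):

    Scenario: visitor signs up
      Given I am on the home page
      When I follow "Sign up"
      And I fill in "Email" with "[email protected]"
      And I press "Create account"
      Then I should see "Welcome aboard!"

One scenario like this drives the route, the controller action, and the
rendered view in a single pass, and when a step fails, Cucumber names
the step and Webrat’s error message usually points at the broken layer.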
Now this is how I see things.
For anybody who is brand new to all this, my feeling is that whatever
pain comes from duplication between the two levels of examples, and
from having to change granular examples when you refactor, is eclipsed
by the pain of debugging from high-level examples alone.
Also, as I alluded to earlier, every team is different. If you are
working solo, the implications of taking risks by working
predominantly at higher levels are different from when you are on a
team. The point of testing is not to follow a specific process. The
point is to instill confidence so you can continue to work without
migraines, and deliver quality software.
Cheers,
David