On Thu, Feb 19, 2009 at 8:20 AM, Stephen E. [email protected] wrote:
It's a test on a key called :login. The conversation with the
customer has no bearing on that; the customer's never asked about the
errors data structure.
The code in the examples is for developers. The docstrings are for
customers. In this very specific case, the matcher doesn't support the
specific error message, but if it did, the example would be:
describe User do
  context "with punctuation in the login" do
    it "raises an error saying Login can't have punctuation" do
      user = User.generate(:login => "my.login!name")
      user.should have(1).error_on(:login).with("can't have punctuation")
    end
  end
end
Even without that ability, this would be fairly expressive to both
customer and developer:
describe User do
  context "with punctuation in the login" do
    it "raises an error saying Login can't have punctuation" do
      user = User.generate(:login => "my.login!name")
      user.should have(1).error_on(:login)
    end
  end
end
I do check for error messages making it to the user, but not in my
model specs. Those get checked in my request specs. (Or my Cucumber
features, whichever I'm doing that day.) So again, it's covered; just
not twice.
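That layering can be sketched in plain Ruby. This is illustrative only: TinyUser and render_error are invented names, not rspec-rails or Rails API. The model-level check only counts errors on the key; the exact user-facing message is asserted once, at the outer layer where the user would actually see it.

```ruby
# Hypothetical sketch of checking the same failure at two levels.
class TinyUser
  attr_reader :errors

  def initialize(login)
    @login = login
    @errors = {}
  end

  # Minimal stand-in for an ORM's valid? hook.
  def valid?
    @errors[:login] = ["can't have punctuation"] if @login =~ /[^\w]/
    @errors.empty?
  end
end

# Model-level check: only that :login collected one error.
user = TinyUser.new("my.login!name")
user.valid?
raise "expected 1 error on :login" unless user.errors[:login].size == 1

# Outer-level check: the full user-facing message, asserted once,
# where the user would actually see it.
def render_error(user)
  "Login #{user.errors[:login].first}"
end

puts render_error(user)  # prints: Login can't have punctuation
```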
This is where this all gets tricky.
TDD (remember? that's where this all started) says you don't write any
subject code without a failing unit test. This is not about the end
result - it's about a process. What you're talking about here is the
end result: post-code testing.
If you're true to the process, then you'd have material in both
places. The cost of this is something that looks like duplication, but
it's not really, because at the high level we're specifying the
behaviour of the system, and at the low level we're specifying the
behaviour of a single object - fulfilling its role in that system.
The cost of not doing this is different in rails than it is in home
grown systems. In home grown systems, since we are in charge of
defining what objects have what responsibilities, the cost of only
spec'ing from 10k feet is more time tracking down bugs. In rails, this
is somewhat mitigated by the conventions we've established of keeping
types of behaviour (like error message generation) in commonly
accepted locations. If a merb request spec or cucumber scenario fails
on an error message, we can be pretty certain the source is a model
object.
But even that is subject to the level of complexity of the model. If a
view is dealing with a complex object graph, then there are multiple
potential sources for the failure, in which case there is some benefit
to having things specified at the object level.
But my validation specs do tend to be closely tied to AR methods like
valid?(), which, as your example suggests, is impeding my ability to
choose a different ORM lib. Time for some re-thinking!
To be fair, the only reason the tests I quoted work when I switched to
Datamapper is because DM coincidentally (or not) uses the same
"valid?" method that AR does. Eventually you do have to hit your API.
I just like to hit it at the highest level that proves the behavior I
care about.
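One way to picture hitting only that shared surface: a helper that depends on nothing but valid? and errors, the duck type both AR and DM happen to expose. This is a made-up sketch; error_count_on and PlainModel are invented names, not rspec's matcher API.

```ruby
# Illustrative helper that touches only the valid?/errors surface,
# so it works against any ORM (or plain object) exposing that duck type.
def error_count_on(model, key)
  model.valid?                    # trigger validation through the shared API
  (model.errors[key] || []).size  # read back through the shared API
end

# A plain-Ruby model exposing only that surface.
class PlainModel
  attr_reader :errors

  def initialize(login)
    @login = login
    @errors = {}
  end

  def valid?
    @errors[:login] = ["can't have punctuation"] unless @login =~ /\A\w+\z/
    @errors.empty?
  end
end

error_count_on(PlainModel.new("my.login!name"), :login)  # => 1
error_count_on(PlainModel.new("my_login"), :login)       # => 0
```

Because the helper never names an ORM class, swapping AR for DM (or anything else with the same two methods) leaves it untouched.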
Agreed in general. Just keep in mind that behaviour exists at more
than one level. At the object level, behaviour == responsibility. If
I'm a controller and my responsibility is to take a message from you,
re-package it and hand it off to the appropriate model, then that is
my behaviour.
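That controller responsibility can be sketched in plain Ruby. UsersController and RecordingModel are invented for illustration (this is not Rails' controller API); the point is that the spec-worthy behaviour is just the repackaging and the hand-off, observable through a recording stub.

```ruby
# A stub standing in for the model; it records the hand-off so the
# controller's behaviour can be asserted without any real persistence.
class RecordingModel
  attr_reader :received

  def create(attrs)
    @received = attrs
  end
end

class UsersController
  def initialize(model)
    @model = model
  end

  # Re-package the raw params into the shape the model expects,
  # then delegate. That repackaging IS this object's behaviour.
  def create(params)
    @model.create(:login => params["user_login"].to_s.strip.downcase)
  end
end

model = RecordingModel.new
UsersController.new(model).create("user_login" => "  MyLogin ")
model.received  # => {:login => "mylogin"}
```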
Cheers,
David