"Why not MockEverthing" or why use fixtures for all tests?

Hello fellow RSpec users.

Before you all start warming up your flame throwers please let me
explain my Subject line.

I’ve been working for over four months on a large Rails project with a
few other developers. Test coverage was spotty at best, though what
tests existed were RSpec tests. One of the other developers and I had
started adding more tests, mostly controller tests, following the
methodology given at rspec.info for writing controller tests isolated
from the model and view layers using stubs and mocks.

Recently a new project manager was put in place and he brought in
another developer. This developer promptly started to re-write all
the existing controller (and later view) tests, removing all mocks
and stubs and replacing them with code to use fixtures. (He also
deletes many comments he finds in the code if he thinks they’re
obvious, but that’s another story…). His commit messages include
comments like “Stop mocking around” and “More fixes due to our test
mockery”.

When challenged on why he’s re-writing these tests instead of writing
new, missing tests (even tests using fixtures), he replied with the
email below, with the subject “Why not MockEverything”. (Note that I do
use fixtures for model tests but follow the RSpec documentation and use
mocks/stubs for controller and view tests for isolation.) In the email
this developer mentions tests broken by the addition of a conditional
to the view. This conditional used a model method not previously used
in the view, and the addition of one stub was sufficient to fix the
view test in question.
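
To make that concrete, here is a rough sketch of the kind of one-line
stub addition I mean (the names and the conditional are invented for
illustration, and the syntax assumes RSpec ~1.1; this is not our actual
project code):

  # The template gained a conditional along the lines of:
  #   <% if @user.admin? %> ... <% end %>
  # so the mocked user needs exactly one extra stub for the spec to
  # pass again.
  describe "users/show" do
    before(:each) do
      @user = mock_model(User, :name => 'user name', :userid => 'userid')
      @user.stub!(:admin?).and_return(false)  # the single stub added
      assigns[:user] = @user
    end

    it "renders the contact box" do
      render 'users/show'
      response.should have_tag('div#contact')
    end
  end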

Here is his email to me, less his signature as I don’t want to make
this personal. I’d like to see what the RSpec user community has to
say in response to his comments, below:

— Why not MockEverything —
David, I’ve removed the mocks on purpose. Not that I have sufficient
ills with them to meddle without a need. We committed simple template
fixes, adding a conditional here and there, yet the tests broke.

Now this was to be expected: the tests were constructed by
exhaustively mocking out all methods called on the object. Add a
simple conditional, harmless as it may be, and another method needs to
be mocked out.

The MockEverything approach is not healthy; judicious use is
preferable. It is one thing to write a short sample in a blog post and
another to have a working app with lots of tests. Of all the apps I
have worked on, this one has by far the lowest coverage, both in
profile and in test value. There is no argument here: we are all
committed to tests.

To better see what constitutes good practice I recommend you inspect
the source of RadiantCMS, a beautiful and well engineered app recently
rewritten to use rspec instead of Test::Unit:

http://dev.radiantcms.org/browser/trunk/radiant/spec

Observe how restrained the code is in its mocking; real objects are
preferred wherever possible. Incidentally, they don’t use fixtures but
rather factories to create real objects. The factory part is a
separate issue I won’t discuss here, as it has its own disadvantages,
especially in a project with many models…

With real objects your tests will not be brittle, and their value will
be kept even after adjusting the templates or doing other small
refactorings.

Contrary to common misconception, test speed will not be affected
either, especially for view tests, where you don’t even have to save
to the db when preparing the test rig.

Besides Radiant there were efforts to rspec Typo and Mephisto (both
noted Rails blog engines). Still, these were half-hearted conversions,
so arguments based on them would not carry the same weight. RadiantCMS
is enough: it is used on ruby-lang.org and has converted 100% to
rspec… plus they also have good coverage, showing that they actually
believe in tests. So please look into Radiant; you’ll find it most
helpful, I think.
— END OF EMAIL—

Thank you,

David S.

Hi David

On 18 Mar 2008, at 08:15, David S. wrote:

Here is his email to me, less his signature as I don’t want to make
this personal. I’d like to see what the RSpec user community has to
say in response to his comments, below:

[snip email]

For me, it basically boils down to: use whichever is appropriate for
what you’re testing, as long as it provides good coverage and makes
the tests easy to write. If the tests are hard to write, and you’re
mocking/stubbing all over the place, it’s a smell that your
implementation is probably not loosely coupled enough - perhaps you
need to refactor.

Mocks are a very useful testing construct, but using them religiously
everywhere is as bad as religiously avoiding them, IMHO.

Have you read Mocks aren’t Stubs[1]? It’s a very good treatise on the
advantages and disadvantages of using mocks in testing. Pat M.
also wrote an article on this subject recently that I found very
helpful[2].

Thanks
Chris

[1] http://martinfowler.com/articles/mocksArentStubs.html
[2] eli.st

On Tue, Mar 18, 2008 at 5:04 AM, Chris Parsons
[email protected] wrote:

Have you read Mocks aren’t Stubs[1]? It’s a very good treatise on the
advantages and disadvantages of using mocks in testing. Pat M.
also wrote an article on this subject recently that I found very
helpful[2].

I agree wholeheartedly! I’d also recommend Mock Roles, Not Objects[3]

When I’m spec’ing views, for example, I tend to follow this path:

  1. start with an expectation, watch it fail
  2. get it to pass with hard-coded values in the view
  3. replace the hard-coded view code with the code I wish I had[4]
  4. mock/stub whatever I need at the time in the example to get it to
    pass

This last step varies depending on the state of the system. If the
model class I want doesn’t exist, I use a mock object. If it does
exist, I’ll use stub_model[5] to provide isolation from the db, but
allow the real model code to be invoked. If I used a mock to begin
with, I may switch it later. The point is that while I’m in the view I
want to stay focused on the view.

FWIW.

Cheers,
David

[3] jmock.org/oopsla2004.pdf
[4] Writing “the code I wish I had” is something I learned from James
Grenning at Object Mentor.
[5] stub_model is a new method in git - will be released with 1.1.4.
It creates a real model instance and prevents database access a la
unit_record (but on an instance by instance basis). This allows you to
reap the benefit of isolation from the db while reducing the false
positives that concern so many with using mocks in Ruby.
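
For illustration, a minimal sketch of a view spec using stub_model as
described in [5] (the model and attribute names here are invented):

  describe "articles/show" do
    before(:each) do
      # A real Article instance, but with database access prevented:
      @article = stub_model(Article, :title => 'Hello', :body => 'World')
      assigns[:article] = @article
    end

    it "shows the article title" do
      render 'articles/show'
      response.should have_tag('h1', /Hello/)
    end
  end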

On Mar 18, 2008, at 6:36 AM, Alan Larkin wrote:

In spec::rails scaffold (and in many examples I see online) this
action is tested by asserting that the instance receives a destroy
message. I personally think that’s inadequate. It makes assumptions
about implementation and doesn’t guard against unwanted side effects.

I should be able to delete a record any way I please (delete, destroy,
connection().execute(…)) and the test should pass.

However, each of those has different behaviour, and that is what I am
spec’ing when I do model.should_receive(:destroy).
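
Roughly, the scaffold-style example I have in mind looks like this
(the model name is invented; rspec-rails ~1.1 controller spec syntax):

  # Interaction-based version: the spec pins down *which* message is
  # sent, so switching the implementation from destroy to delete (or
  # raw SQL) would make it fail.
  describe WidgetsController, "DELETE /widgets/1" do
    it "asks the record to destroy itself" do
      widget = mock_model(Widget)
      Widget.should_receive(:find).with("1").and_return(widget)
      widget.should_receive(:destroy)
      delete :destroy, :id => "1"
    end
  end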

James D.

David S. wrote:

[snip email]

I was going to start a thread about mocks and fixtures this morning
too, so I’ll use this one.

Let me first say that I am a very, very recent comer to RSpec, so my
opinions don’t carry much weight, but …

I have come to the tentative conclusion that mocking is fine in view
specs where you are really only interested that certain assigns have
been made and that they respond to certain messages. In fact mocks are
ideal. Possibly in models too. However in controller specs I think you
find examples where fixtures are just the best way to go. In these
cases, from what I have seen, mocking leads to brittle and frankly
worthless tests (a half-arsed test is worse than no test at all,
right?).

The case that crystalised that opinion for me was a spec for a destroy
action. In spec::rails scaffold (and in many examples I see online)
this action is tested by asserting that the instance receives a
destroy message. I personally think that’s inadequate. It makes
assumptions about implementation and doesn’t guard against unwanted
side effects.

I should be able to delete a record any way I please (delete, destroy,
connection().execute(…)) and the test should pass. This is BDD after
all. We should be testing the behaviour of the action, not the
implementation, and the desired behaviour is that the corresponding
record, and only that record, is deleted … no one cares how it’s
achieved. The only correct way to test this IMHO is to assert that
TheModel.find(:all) before the action is equal to TheModel.find(:all)
after the action, less the record in question. For this I see fixtures
as the way to go.
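
As a sketch (model and fixture names invented), the state-based
version I have in mind could express that before/after comparison with
a count check plus an existence check:

  # State-based version: only the observable outcome is asserted, so
  # any implementation (destroy, delete, raw SQL) that removes exactly
  # this record will pass.
  describe WidgetsController, "DELETE /widgets/1" do
    fixtures :widgets

    it "removes just the targeted record" do
      doomed = widgets(:one)
      lambda {
        delete :destroy, :id => doomed.id
      }.should change(Widget, :count).by(-1)
      Widget.exists?(doomed.id).should be_false
    end
  end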

Just my opinion. Commence flaming.

Hello,

On Tue, Mar 18, 2008 at 4:15 AM, David S. [email protected] wrote:

[snip email]

There are a few bad assumptions in your colleague’s response, so to
set the record straight:

  • test coverage and tests which use the interaction-based testing
    approach are not mutually exclusive
  • you can have crappy tests which take the state-based approach and
    crappy tests which use an interaction-based approach
  • interaction-based testing is not merely limited to contrived
    examples on people’s blogs; it is a real practice which adds value
    on lots of “real-world” projects
  • using factories to generate required objects in tests has several
    pros over the use of fixtures, and very very very few cons

State-based testing and interaction-based testing both have their
place. There are a number of reasons why they are both useful, but I’m
going to pick two: object decomposition (and coordinators) and
integration testing. Others have mentioned the value of writing tests
with the interface you want so I’m going to leave that out.

As an application grows in features and complexity (business logic of
course) good developers will decompose the problem into a number of
simple objects. Some of these objects are responsible for doing the
work and others are responsible for coordinating other objects to do
the work. Objects which are responsible for coordinating are great
candidates for using interaction-based testing, because you are
concerned with the interaction, not the “state”.

If you don’t have integration tests, then using an interaction-based
testing approach is not worth it, because you need something that is
going to test real objects working with real objects. In Rails you can
write integration tests as Rails’ ActionController::IntegrationTest
tests, Rails’ functional tests, RSpec stories, or RSpec controller
tests with view isolation turned off.
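
For example, the last option might look roughly like this (rspec-rails
~1.1 syntax; the controller and fixture names are invented):

  # Controller spec with view isolation turned off: the real templates
  # render, so a renamed model method used by the view would make this
  # example fail rather than slip through.
  describe WidgetsController, "GET /widgets" do
    integrate_views
    fixtures :widgets

    it "renders the index with real models and real templates" do
      get :index
      response.should be_success
      response.should have_tag('ul#widgets')
    end
  end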

IMO, one false benefit of only using a state-based approach when
writing a full-fledged application is that every test is essentially
an integration test at some level. You are always testing everything
with everything that it touches. This can lead to one failure in one
model making several other model tests fail, and it can make several
controller tests fail (as well as the tests of any other object which
touches the failing model). I see this as a big negative, because it
makes it more difficult to pinpoint the issue. People will end up
tracking it down, but it can be time consuming and frustrating.

Now on the flip side people will complain that they renamed a model
method and re-ran all of their tests and everything passed, but when
running the application a bug exists. Doh, we forgot to update the
controller that relied on calling that model method. It is normal to
say/think, “well that should have failed because the method doesn’t
exist on the model”. (It sounds like David C. may have
something in trunk to help with this.) The main problem here though is
that an integration test didn’t fail exposing that you weren’t done
with your change.

Thinking back to coordinating objects, my controllers don’t contain
business logic, because they are application layer classes; they
aren’t a part of the domain of my software. They are only used by the
application to allow the software to fulfill the requirements of my
customer. Controllers are coordinators, not DOERS. They ask other
objects to fulfill a business requirement for them, like moving stocks
from one portfolio to another. So I use interaction-based testing here
to ensure that my controller is finding a stock, finding a portfolio
and asking a portfolio manager to move the stock to the designated
portfolio. I don’t need to have those things written or even fully
implemented to ensure my controller works as I expect. I should be
able to see that my controller does what it should be doing, even if
the pieces it will use to do the work in the application aren’t
finished. Now if those aren’t implemented I should have an integration
test which fails, showing me that the feature for moving stocks from
one portfolio to another is not completed, but that isn’t what I’m
testing in my controller.
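
A rough sketch of that controller example (the class, action and
method names here are invented purely for illustration):

  # The spec only cares that the controller finds its collaborators
  # and delegates the move; none of them need a finished
  # implementation yet.
  describe PortfoliosController, "PUT /portfolios/2/move_stock" do
    it "asks the portfolio manager to move the stock" do
      stock     = mock_model(Stock)
      portfolio = mock_model(Portfolio)
      Stock.should_receive(:find).with("1").and_return(stock)
      Portfolio.should_receive(:find).with("2").and_return(portfolio)
      PortfolioManager.should_receive(:move).with(stock, portfolio)

      put :move_stock, :id => "2", :stock_id => "1"
    end
  end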

Also after my controller works as expected I can go make sure the
PortfolioManager works as expected, and then I can go down and make
sure the Stock model does what I expect. When these objects are
working correctly individually, I run my integration tests to ensure they
work well together.

Another drawback of only using state-based testing is that you always
have to develop bottom up. You have to start with the low level
components and work your way out. I used to write code this way. I
think I have progressed beyond that, and now I write things in an
Acceptance Test Driven Development style. I start by writing an
integration test from the user’s perspective proving that the feature
doesn’t work, and then I move to the view, and then to the controller,
then to any manager/factory/presenter/service objects that are
required, and then down to any domain level objects (models and
non-models alike). You can’t do this approach with state-based testing
only. There is a lot of value that can be gained by developing
software this way.

In short: Interaction-based testing allows you to ensure that an
object is doing what you expect, without the underlying implementation
having to exist yet at all or in full. It is great for application
layer objects which typically only coordinate domain layer objects
where the correct interaction is what is important. It also helps you
develop interfaces, and it can scream loudly when you have an object
doing way too much.

  • “Blaming “brittleness” of tests upon interaction-based testing is a
    red herring. Both interaction-based tests and state-based tests become
    brittle if they make assertions upon implementation details and overly
    constrain the interfaces between modules.” - Nat Pryce

  • http://nat.truemesh.com/archives/000342.html - a wonderful read on
    interaction-based vs state-based testing

Zach D.
http://www.continuousthinking.com


A lot of what you say makes me wish I was more experienced in this
department :) I am very new to this! A part of me wishes I had the
knowledge to write in the order of story → view spec → controller
spec → model spec. However most of the time (I emphasize MOST) I
don’t have the foresight to do that. The problem I’m trying to solve
is almost always too complicated for me to know right away where to
really start (my latest client has some crazy ideas). Maybe the
problem is that I make things too complicated for myself :) However I
have been a developer (just not using RSpec) for a very long time so I
know fairly well how to recognize when things need to be complicated
and when they don’t. This means .should_receive is often out of the
question because I have no idea what the model should receive!

My primary concern when writing my specs that are to cover complicated
features is that I do NOT want false confidence. If I write a spec,
and it passes, I want that to mean it works in my app. When the spec
goes green, my next step is to go hit Refresh in my browser. If it
doesn’t work in my browser, then in my opinion, my spec is crap. It’s
telling me things work when they don’t.

I hear the concern being voiced that if you break one thing and 15
specs pass then you’re not mocking enough. Well since this is BDD,
after all, then we should be working closely to the current spec we’re
trying to make pass. I change code, and 15 specs break, well I have a
good idea of what code got broken because it’s the top-most file in my
editor! I hit save, Autotest screamed at me, I’m going to go hit undo
now.

Sometimes I make noob decisions and give model A a certain
responsibility when it should have been done by model B. I get it to
work in the short term, my spec’s pass, but later I need to add
another feature and realize that old responsibility needs to be moved
from A to B. Now I have a red spec and X green specs. I move that
responsibility, X specs are green, with still the same 1 red spec. I
implement the new feature, X+1 green specs. I refresh in my browser,
sure enough, it all works. I didn’t have to go change all of my stubs
and should_receive’s everywhere that just got moved. There’s no need
to, because my specs cover the true business logic behavior, and not
the model-specific “behavior”.

While I do certainly believe the ability to spread everything out well
enough so that 1 error = 1 broken spec comes from great wisdom and
experience, I certainly don’t have it, and I don’t want to encourage
others like me to try to strive for that because I don’t know how to
teach them from my own example. What I do know, is that I use a lot
of real models, and I don’t spend any time fixing specs that are
broken by working code. I did that on my last project and it, in my
opinion, wasn’t worth it. I’d change a variable assignment with
a .save to an .update_attribute, and then I had a broken spec.

My fear is that I’ll write green specs, pat myself on the back, and
then my company loses money because the site has bugs that my specs
show green for because I don’t have enough integration tests or
whatever. But I don’t want to have to double my tests for the same
amount of coverage. I should have 1 spec for 1 feature and when that
feature is working, that spec is green, plain and simple. I admit I
may be ignorant to a lot of the power behind RSpec, but I like this
level of simplicity and straightforwardness.

Glenn F.

On Mar 19, 2008, at 8:42 AM, Glenn F. wrote:

My primary concern when writing my specs that are to cover complicated
features is that I do NOT want false confidence. If I write a spec,
and it passes, I want that to mean it works in my app. When the spec
goes green, my next step is to go hit Refresh in my browser. If it
doesn’t work in my browser, then in my opinion, my spec is crap. It’s
telling me things work when they don’t.

This paragraph caught my eye, especially given my specific issue of
mocks/stubs versus fixtures in our own tests.

In my view tests I use mock_model with a hash of valid attributes.
This allows a mock with automatic stubs for these valid attributes:

  valid_user_attributes = {
    'name'   => 'user name',
    'userid' => 'userid'
  }

  before(:each) do
    @mock_user = mock_model(User, valid_user_attributes)
    […]
  end

A sample test to verify that one of these attributes is properly
displayed looks like:

  it "should display the user name in the contact box" do
    do_render
    response.should have_tag('div#contact', /.#{@mock_user.name}./)
  end

The dev that wants all fixtures does this by loading a fixture record:

  before(:each) do
    user = User.find( users(:david).id )
    […]
  end

His tests written to check that particular attributes were properly
displayed in the view look like:

  it "should display the user name in Contact Pod" do
    do_render
    response.should have_tag('div#contact', /^#{users(:david).name}/)
  end

This developer added some new code to the view in question, displaying
a new table value, and complained when my mock/stub version of the
test failed because there was no stub for that new attribute.
However, though his test passed, he had failed to add the “name”
attribute to his fixtures. This meant that users(:david).name was
returning an empty string (’’) and his test was just checking that an
empty string was in the view output, which would pass EVERY time!

I demonstrated this by putting an ‘X’ after each of the names in the
regex (like /.#{@mock_user.name}X./) to force a failure. When I ran
both versions of the test I saw the following output for my test:

  </.User NameX./> expected but was <"User Name\n 123 Any St…

The fixtures version of the test with the ‘X’ returned this error:

  </^X/> expected but was <"User Name\n \n \n…

Note that nothing is showing prior to the X, because name wasn’t in
the fixture. My point is that it was much easier to forget to add the
new attributes to the fixtures in another file. When adding the new
attribute to the valid_user_attributes hash you’d have to create it as
an empty string (‘name’ => ‘’), which would be much less likely to
happen.
With the fixtures version pulling data from the fixtures files, it’s
much less obvious when you have bad data in a fixture. As far as
maintenance goes, the mock/stub version only required one line to be
added to the valid_user_attributes hash. The fixtures version requires
the new attribute to be added to every fixture that will be using that
attribute, and it’s easy to accidentally leave it out of some fixture
records.

The result, at least in this particular case, is that it was much
easier to end up with a number of tests returning false “passes” using
the fixtures method.
I’m not a 100% mock/stub proponent, but I’m very much against 100%
fixtures and like using mocks and stubs for view tests where what you
are testing is that the proper data is displayed in the correct
containers in the view.
David

On Wed, Mar 19, 2008 at 10:42 AM, Glenn F. [email protected] wrote:

My primary concern when writing my specs that are to cover complicated
features is that I do NOT want false confidence. If I write a spec,
and it passes, I want that to mean it works in my app. When the spec
goes green, my next step is to go hit Refresh in my browser. If it
doesn’t work in my browser, then in my opinion, my spec is crap. It’s
telling me things work when they don’t.

Sounds like you’re thinking of specs as application-level
specifications. They can be, but that is not the intent. They are
intended to be examples of how individual objects work in isolation.
So I disagree that if the spec passes and the application fails that
the spec is crap. It’s just isolated.

I hear the concern being voiced that if you break one thing and 15
specs pass then you’re not mocking enough. Well since this is BDD,
after all, then we should be working closely to the current spec we’re
trying to make pass. I change code, and 15 specs break, well I have a
good idea of what code got broken because it’s the top-most file in my
editor! I hit save, Autotest screamed at me, I’m going to go hit undo
now.

That’s great that you can back out, but you now have a small problem
to solve that has a bigger short-term impact than you want. Ideally,
you’d be able to solve the small problems, one at a time, until the
big problem is solved. The way you know you’re solving small problems
is that the object-level examples pass. The way you know you’re
solving big problems is that the application-level examples (stories,
or even manual-in-browser tests) pass.

There’s no need to, because my specs cover the true business logic
behavior, and not the model-specific “behavior”.

Again - this is a matter of granularity. The whole point of having
granular examples is to enable you to make changes to the system
easily via refactoring. Sometimes refactoring requires moving examples
around along with the implementation code. This is refactoring 101
stuff, and an accepted part of the refactoring process in all of my
experience prior to working in Ruby. It’s only people in the Ruby
community that I see expressing this concern. I think it’s because the
refactoring tools for Java and C# are superior, so they automate a lot
of the granular steps you need to take when refactoring manually.

The problem with decreasing the granularity is that it makes fault
isolation more difficult. It means less work right now for lots more
work down the road.

My fear is that I’ll write green specs, pat myself on the back, and
then my company loses money because the site has bugs that my specs
show green for because I don’t have enough integration tests or
whatever.

You throw that out like integration testing is an afterthought and
specs are king. In my view, they are equal partners.

But I don’t want to have to double my tests for the same
amount of coverage.

You just suggested that you fear that your green specs don’t provide
enough coverage, but that the addition of integration testing would
provide sufficient additional coverage. How is that “the same amount
of coverage”?

I should have 1 spec for 1 feature and when that
feature is working, that spec is green, plain and simple. I admit I
may be ignorant to a lot of the power behind RSpec, but I like this
level of simplicity and straightforwardness.

This is not really about RSpec. It is about an approach to using tests
to drive out the behaviour of your application, and then to help you
maintain it by simply living on as regression tests.

I appreciate the motivation to do less work now. Especially with
Rails, which makes it so damned easy to prototype something that the
extra effort of automated testing seems painful. All I can say is that
I’ve consistently had a much easier time of maintaining an application
over time when I’ve been disciplined about both high and low level
testing. If you find that you can live with only one level of testing,
then more power to you.

FWIW,
David

On Mar 19, 2008, at 1:03 PM, David C. wrote:

If I write a spec, and it passes, I want that to mean it works in my
app. When the spec goes green, my next step is to go hit Refresh in my
browser. If it doesn’t work in my browser, then in my opinion, my spec
is crap. It’s telling me things work when they don’t.

Sounds like you’re thinking of specs as application-level
specifications. They can be, but that is not the intent. They are
intended to be examples of how individual objects work in isolation.
So I disagree that if the spec passes and the application fails that
the spec is crap. It’s just isolated.

I see your point here, very true. If my usage of RSpec improves with
better integration testing, I’m sure I’ll be able to use them more
appropriately in this manner.

… I hit save, Autotest screamed at me, I’m going to go hit undo now.

That’s great that you can back out, but you now have a small problem
to solve that has a bigger short-term impact than you want. Ideally,
you’d be able to solve the small problems, one at a time, until the
big problem is solved. The way you know you’re solving small problems
is that the object-level examples pass. The way you know you’re
solving big problems is that the application-level examples (stories,
or even manual-in-browser tests) pass.

I can see your point here, but I don’t believe it disagrees with my
idea so much. My approach still involves small specs passing building
up to larger functionality in the end. The limitation is just that I
start from the bottom level and work my way up. I just don’t have the
option of working in the other direction, which from a design
perspective can certainly be limiting like you’re pointing out. I
think it’s my own fault more than anything else that I can’t ever
seem to plan in the other direction :) It’s typically not until I
find a solution on the model level that I understand what will be
coming back on the top level. Experience will probably change this so
that I can see it more like you do.

The problem with decreasing the granularity is that it makes fault
isolation more difficult. It means less work right now for lots more
work down the road.

Perhaps my example simplified my problem too much. The chore that I
was referring to unfortunately wasn’t about refactoring. I wish it
had been that easy! Instead it was really the behavior that got
transferred, but the logic and code were quite different. The goal was
to ensure the end result was still the same, that way I would know I
hadn’t broken anything but still had the new structure that I needed.
Without my specs covering the resulting state I would have had no
guidance to help me here.

My fear is that I’ll write green specs, pat myself on the back, and
then my company loses money because the site has bugs that my specs
show green for because I don’t have enough integration tests or
whatever.

You throw that out like integration testing is an afterthought and
specs are king. In my view, they are equal partners.

Chalk that one up to ignorance I suppose, I don’t have much good
integration testing and I don’t even know much about good ways to do
it. Even with integration testing, however, I’d still have my
original spec broken and in need of repair when I don’t believe it
should need fixing.

But I don’t want to have to double my tests for the same
amount of coverage.

You just suggested that you fear that your green specs don’t provide
enough coverage, but that the addition of integration testing would
provide sufficient additional coverage. How is that “the same amount
of coverage”?

What I meant was that if I wrote them with less real models and more
mocks, I would feel I had less coverage, and that the only way to
match coverage without real models would be to up the number of tests
(aka lots more integration testing). If I can just change my approach
to writing the spec in a way that’s easier anyway, why
should I feel required to write additional tests on top of that to re-
verify that things work?

I appreciate the motivation to do less work now. Especially with
Rails, which makes it so damned easy to prototype something that the
extra effort of automated testing seems painful. All I can say is that
I’ve consistently had a much easier time of maintaining an application
over time when I’ve been disciplined about both high and low level
testing. If you find that you can live with only one level of testing,
then more power to you.

I don’t mean to make it sound so much like a short-term laziness
thing. Maybe it comes out that way! I’m certainly more encouraged
now to better learn about the high-level testing options. Thank you
for the time you spent offering your feedback. I hope you can also
find value in the perspectives of those who come in struggling for
direction. Even as hard as I try to read and research everything
(while still managing to find development time in there) it’s often
difficult to see what all is available/possible/best.

Thank you!
Glenn

On Mar 19, 2008, at 3:21 PM, David C. wrote:

From Fowler: “Refactoring is the process of changing a software system
in such a way that it does not alter the external behaviour of the
code yet improves its internal structure.”

Sounds like what you are describing is exactly that, no?

Yes, it is, but since you spoke mostly of the simple kind of
refactoring that Eclipse might do for Java I figured that’s all you
thought I was talking about. I was speaking about redesigning the
stuff under the hood. There’s no Ctrl+something shortcut in Eclipse
that would do that one for me!

Glenn

On Wed, Mar 19, 2008 at 2:10 PM, Glenn F. [email protected] wrote:

The goal was to ensure the end result was still the same, that way I
would know I hadn’t broken anything but still had the new structure
that I needed.
Without my specs covering the resulting state I would have had no
guidance to help me here.

From Fowler: “Refactoring is the process of changing a software system
in such a way that it does not alter the external behaviour of the
code yet improves its internal structure.”

Sounds like what you are describing is exactly that, no?

David