Unit testing advice

Forgive me if this is a stupid question.

(Actually, I know it’s a stupid question – they’re the only ones
worth asking…)

  • How do you unit test a method whose job is to interface with the
    outside world? For example, a method that outputs to a file?

I’ve done a lot of coding, but in a dinosaur language – automated
unit tests are completely new to me. I understand how to use
test/unit; but it seems to me that that’s only half the story. I need
some suggestions on how to design my code in such a way that it can
be tested. Suggestions, anyone?

Shadowfirebird

On Fri, Aug 1, 2008 at 5:31 PM, Shadowfirebird
[email protected] wrote:

I understand how to use test/unit; but it seems to me that that’s
only half the story. I need some suggestions on how to design my code
in such a way that it can be tested. Suggestions, anyone?

Usually, mock objects are best for this. There exist some special
purpose ones, but it is relatively straightforward to put one together
using something like Mocha or Flexmock, both available on RubyForge.
If you are using RSpec, support for mock objects is built in.
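
To make that concrete, here is a minimal sketch of the idea, using
Test::Unit plus Mocha (the Report class and its interface are
invented for illustration). The method under test writes through an
injected object rather than opening a file itself, so a mock can
stand in for the file:

require 'rubygems'
require 'test/unit'
require 'mocha'

# The class under test writes through an injected object instead of
# opening a file itself.
class Report
  def initialize(out)
    @out = out
  end

  def write_summary(items)
    @out.puts("#{items.size} items")
  end
end

class ReportTest < Test::Unit::TestCase
  def test_should_write_summary
    out = mock() # stands in for a real File opened for writing
    out.expects(:puts).with("3 items") # auto-verified at end of test
    Report.new(out).write_summary(%w[a b c])
  end
end

In production you would pass the real thing, e.g.
Report.new(File.open("report.txt", "w")).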

-greg

Many thanks. Clearly I’m going to have to go back to research mode on
this one.

I’ve found this lovely bit of example code for mocha, though – see
below. If I understand you correctly, you seem to be saying that you
don’t test the output routine; you use a mock to fake it so that you
can test the rest of the code?

That would imply that you farm out the difficult bits of the output
routine into other methods in the same way that you would with
functional programming – ideally, leaving the output routine as
something that is so simple it doesn’t need testing?

class Enterprise
  def initialize(dilithium); @dilithium = dilithium; end

  def go(warp_factor); warp_factor.times { @dilithium.nuke(:anti_matter) }; end
end

require 'test/unit'
require 'rubygems'
require 'mocha'

class EnterpriseTest < Test::Unit::TestCase

  def test_should_boldly_go
    dilithium = mock()
    dilithium.expects(:nuke).with(:anti_matter).at_least_once # auto-verified at end of test
    enterprise = Enterprise.new(dilithium)
    enterprise.go(2)
  end

end


On Fri, Aug 1, 2008 at 6:03 PM, Shadowfirebird
[email protected] wrote:

Many thanks. Clearly I’m going to have to go back to research mode on
this one.

I’ve found this lovely bit of example code for mocha, though – see
below. If I understand you correctly, you seem to be saying that you
don’t test the output routine; you use a mock to fake it so that you
can test the rest of the code?

Mocks check that certain calls are actually made, with constraints on
the way they are called and how they respond.
So if you have a method like load_text_file(name) that appends a .txt
to the end of the filename and then reads the file, like so:

def load_text_file(name)
  File.read("#{name}.txt")
end

You’d want to do a mock like:

# Set up the expectation
File.expects(:read).with("foo.txt")
load_text_file "foo"

The point here is that this tests your code without unnecessarily
verifying that File.read() works.

Of course, be sure that your mocks reflect reality when you design
them, as it’s possible to build nonsense mocks that lead you astray.
However, as soon as you really try to use your code, you’d notice that
and be able to fix it in your tests…

That would imply that you farm out the difficult bits of the output
routine into other methods in the same way that you would with
functional programming – ideally, leaving the output routine as
something that is so simple it doesn’t need testing?

Essentially the idea behind mocking is that you replace an external
resource with an object that handles the sorts of messages that form
the interface between your external resource and your code. This
should behave in the same way you’d expect your real resource to
behave given the way that you are using it, so that you can test
things like exception handling and also test your code that wraps and
invokes these resources.

From this point of view, the assumption is that you can rely on
things like File handles, database connections, or other external
resources to work as expected, so you don’t need to actually test
them directly.
What you do need to test is the interaction between your code and
these resources, and for this purpose, a suitable mock object that
verifies these things works great.
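
For instance, the failure side of the earlier load_text_file can be
tested by having the mock raise. A minimal sketch (Mocha’s raises;
the rescue clause is added here for illustration):

def load_text_file(name)
  File.read("#{name}.txt")
rescue Errno::ENOENT
  "" # fall back to an empty document when the file is missing
end

def test_missing_file_yields_empty_string
  # The mock stands in for the file system and raises, so the rescue
  # path runs without a genuinely missing file.
  File.expects(:read).with("nope.txt").raises(Errno::ENOENT)
  assert_equal "", load_text_file("nope")
end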

And yes, testing this way does encourage you to make your wrappers of
external resources clean and easy to work with, which is a side
benefit.

-greg

On 1 Aug 2008, at 23:03, Shadowfirebird wrote:

Many thanks. Clearly I’m going to have to go back to research mode on
this one.

I’ve found this lovely bit of example code for mocha, though – see
below. If I understand you correctly, you seem to be saying that you
don’t test the output routine; you use a mock to fake it so that you
can test the rest of the code?

The following article is well worth reading:

Fred

On Sat, Aug 2, 2008 at 12:03 AM, Shadowfirebird
[email protected] wrote:

def go(warp_factor); warp_factor.times { @dilithium.nuke(:anti_matter) }; end

I thought that antimatter consumption was exponential to warp speed!

R.

Thanks everyone.

Now I need to go meditate to get my head around the idea of designing
methods to be testable. It’s quite a shift.

Shadowfirebird.


Shadowfirebird wrote:

I’ve found this lovely bit of example code for mocha, though – see
below. If I understand you correctly, you seem to be saying that you
don’t test the output routine; you use a mock to fake it so that you
can test the rest of the code?

Consider a test T that calls A, which calls B.

Sometimes, a B is cheap to assemble, so T can assemble B, activate A,
and test that A did its thing. T considers B’s behavior just a
side-effect.

Most of the time, B should be real. And its state should be static
(such as a Rails “fixture”, with static data.) B should not have
runaway dependencies. The test T should be easy to set up.

You should mock B if it’s too expensive to set up. For example, if B
reads the system clock, and if T would prefer the date is 2008 July
31, the test T should not wait an infinite amount of time, until the
cosmos oscillates and 2008 July 31 occurs again. Tests should run as
fast as possible, to avoid any hesitation in running them.

You should mock B, so it behaves as if the date is correct. Or you
should mock (in Ruby) Time.now, so all Ruby methods that use the date
will read the mocked date.
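
A sketch of that second option with Mocha (the LogEntry class and its
date_stamp method are hypothetical):

def test_entry_is_stamped_with_frozen_date
  # Every Time.now in the code under test now answers 2008 July 31.
  Time.expects(:now).at_least_once.returns(Time.local(2008, 7, 31))
  assert_equal "2008-07-31", LogEntry.new.date_stamp
end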

Other examples of things too expensive to directly test:

  • live users
  • random numbers
  • hardware - networks, robots, tape drives, the clock, etc
  • system errors

If your B object is not on the list, you should not mock it. Unit
tests work best when they cross-test everything. The only thing
better than a test that fails because A broke is many tests that all
accurately fail because B broke. If your B is too expensive to
assemble, you should refactor it, so it bypasses the behavior that T
and A did not care about.

class Enterprise
  def initialize(dilithium); @dilithium = dilithium; end

  def go(warp_factor); warp_factor.times { @dilithium.nuke(:anti_matter) }; end
end

Very nice. And note it obeys my list, by mocking both hardware and
space-time distortions.

Hi –

On Sat, 2 Aug 2008, Phlip wrote:

Sometimes, a B is cheap to assemble, so T can assemble B, activate A,
and test that A did its thing. T considers B’s behavior just a
side-effect.

Most of the time, B should be real. And its state should be static (such as a
Rails “fixture”, with static data.) B should not have runaway dependencies.
The test T should be easy to set up.

You should mock B if it’s too expensive to set up.

Mocking is also good for pinpointing exactly what you want to test and
what you don’t, even if not mocking wouldn’t be that expensive in
resources. For example, there’s the classic Rails create method, which
goes like this:

if new record is saved successfully
  go do something
else
  do something else
end

In this case, you can certainly do a real saving of the object. But if
you want to test only the conditional logic, and make sure that the
controller takes the right branch given a record whose answer to “Did
you save?” is always “Yes” or always “No”, then you can use a mock
object.
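
Sketched against a classic Rails functional test (Mocha again;
WidgetsController and Widget are hypothetical names), with a record
whose answer to “Did you save?” is always “No”:

class WidgetsControllerTest < ActionController::TestCase
  def test_create_rerenders_the_form_when_save_fails
    widget = Widget.new
    widget.stubs(:save).returns(false) # the record always answers "No"
    Widget.stubs(:new).returns(widget)

    post :create, :widget => { :name => "x" }

    assert_template "new" # the controller took the failure branch
  end
end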

Other examples of things too expensive to directly test:

  • live users
  • random numbers
  • hardware - networks, robots, tape drives, the clock, etc
  • system errors

If your B object is not on the list, you should not mock it.

I wouldn’t narrow it down that strictly. It can depend on the purpose
of the test, as well as the profile of the thing you’re mocking.

David

David A. Black wrote:

  • live users
  • random numbers
  • hardware - networks, robots, tape drives, the clock, etc
  • system errors

If your B object is not on the list, you should not mock it.

I wouldn’t narrow it down that strictly. It can depend on the purpose
of the test, as well as the profile of the thing you’re mocking.

You can also avoid mocking the clock by setting a time to
2.minutes.ago, for example.
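
For instance (2.minutes.ago is ActiveSupport; the Session model and
its recent? method are hypothetical):

session = Session.create!(:started_at => 2.minutes.ago)
assert session.recent? # true no matter what year the test runs in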

(And “hardware” covers “profile”. We don’t care if B takes a trillion
clock cycles, on a magic CPU that can run them all instantly.)

However, some teams go mock-crazy (even those subjected to high-end
consultants), and mock everything for no reason. Don’t do that!

Hi –

On Sat, 2 Aug 2008, Phlip wrote:

I wouldn’t narrow it down that strictly. It can depend on the purpose
of the test, as well as the profile of the thing you’re mocking.

You can also avoid mocking the clock by setting a time to 2.minutes.ago, for
example.

I’ve had the experience, as have others I imagine, of putting a
future date in a fixture and then, six months later, wondering why my
test wasn’t passing… so I’m all for “ago” and friends :)

(And “hardware” covers “profile”. We don’t care if B takes a trillion clock
cycles, on a magic CPU that can run them all instantly.)

However, some teams go mock-crazy (even those subjected to high-end
consultants), and mock everything for no reason. Don’t do that!

It’s all about doing it for a reason; I’m just adding to the list.

David

Shadowfirebird wrote:

Now I need to go meditate to get my head around the idea of designing
methods to be testable. It’s quite a shift.

It’s easy if you write the tests first. Get them to fail for the
right reason, then write code to pass them.

Hi –

On Sun, 3 Aug 2008, Shadowfirebird wrote:

I certainly “get” the idea that it’s better to write the tests first.
Unfortunately my dinosaur brain needed to write some code to prove
that my class model was workable…!

I often have to bootstrap myself into an application by getting
something running before I can get my brain into test mode. Certainly
with something like Rails, there’s a lot to do before any tests are
written, since you can’t write unit tests before you know what your
models are (and generating the model so conveniently writes the test
file for you :)

One thing to remember is that TDD is about development, and that not
all instances of entering code on a keyboard are development. And
we’ve all, I believe, learned an absolute ton from exploration and
experimentation that wasn’t part of the process of ongoing application
development. It’s important not to feel like you’re being
unprofessional or sloppy if you happen to want to try out some code in
a file, or in irb, and you don’t write a test for it.

In fact, one question that has come to intrigue me recently is the
question of whether there are any active programmers who have
literally written code test-first from the time they first learned how
to program onward. I suspect the answer is no – and if that’s the
case, it means that there is no evidence for the position that it’s
always, automatically bad to write code without a test. That’s not
meant to be a counterargument to the idea that testing is important
and a good practice. I do wonder whether things get a bit too
doctrinaire at times, though.

I suppose you are all going to say that the tests should show what
the code is supposed to do? I really like that idea…

I don’t think you have to decide it’s one thing or the other. If you
can write tests after you write code, and if you feel certain that
the code is as good as it would have been if you had written the tests
first (not all first, but iteratively), that’s fine. It’s that second
clause that’s the issue, though. A lot of people find that writing
tests first helps them get in the “zone” of thinking about exactly
what their code is supposed to do, in a very focused way. So it
becomes part of the process of writing the program, not just a way to
put protective armour and/or documentation around it later.

David

Shadowfirebird wrote:

I certainly “get” the idea that it’s better to write the tests first.
Unfortunately my dinosaur brain needed to write some code to prove
that my class model was workable…!

What I’m seeing quite clearly now, is that the next best thing is
writing the tests while pretending that you haven’t written the code
yet.

Right: It’s like a little conversation between the tests and code, line
by line.

I suppose you are all going to say that the tests should show what the
code is supposed to do? I really like that idea…

The absolute ideal situation:

  • write a little test (maybe add one line to an existing test case)
  • run the test, and its diagnostic tells you what to do next
  • do it
  • run the test and pass it
  • integrate!
  • write a little bit more test
  • it fails, and tells you what to do next
  • do it
  • the test fails again, telling you the next thing to do!
  • do it
  • the tests pass
  • integrate!

The best way to do TDD is make the tests tell you each thing to do next.
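
One micro-iteration of that loop, sketched with Test::Unit (the Stack
class is an invented example):

require 'test/unit'

class StackTest < Test::Unit::TestCase
  def test_new_stack_is_empty
    # First run: "uninitialized constant Stack" -- the diagnostic
    # tells you what to do next: define Stack.
    assert Stack.new.empty?
  end
end

class Stack
  def empty?
    true # the simplest thing that passes; grow it from the next test
  end
end

(Test::Unit runs at process exit, so defining Stack below the test
still works; normally it would live in its own file.)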

But if you can’t easily get back to a green bar, you should revert to
the last integration. You should always be able to predict the result
of the next test run.

David A. Black wrote:

In fact, one question that has come to intrigue me recently is the
question of whether there are any active programmers who have
literally written code test-first from the time they first learned how
to program onward. I suspect the answer is no – and if that’s the
case, it means that there is no evidence for the position that it’s
always, automatically bad to write code without a test.

Nobody holds such a position. But when your boss is breathing down
the back of your neck waiting for you to fix a bug (one that someone
wrote by avoiding tests), and you are “wasting time” trying to write
a test case instead of just fixing it, your boss might need a little
help understanding how tests could have avoided the situation…

I certainly “get” the idea that it’s better to write the tests first.
Unfortunately my dinosaur brain needed to write some code to prove
that my class model was workable…!

What I’m seeing quite clearly now, is that the next best thing is
writing the tests while pretending that you haven’t written the code
yet.

Because the tests should be based on what the code is supposed to
do, not what it does. Look at the code when you want to know what it
does; but when you want to know what it’s supposed to do, look inside
your head.

I suppose you are all going to say that the tests should show what the
code is supposed to do? I really like that idea…


David A. Black wrote:

with something like Rails, there’s a lot to do before any tests are
written, since you can’t write unit tests before you know what your
models are (and generating the model so conveniently writes the test
file for you :)

Isn’t the idea of TDD and BDD that you discover what classes and
methods are needed by writing the tests first? So there should be no
model in place prior to writing the test; the initial failure of the
test is what drives the creation of the model.

question of whether there are any active programmers who have
literally written code test-first from the time they first learned how
to program onward. I suspect the answer is no – and if that’s the
case, it means that there is no evidence for the position that it’s
always, automatically bad to write code without a test.

Has anyone ever seen a Learn To Program or Learn Language Blub book that
did TDD? I doubt such a thing exists. Instead, people are shown code,
encouraged to write code, then (in so many words) told that what they
were shown and told is not the right way to code. On the other hand,
having a unit test for the 1-liner helloworld.rb seems massively goofy.

A more useful view is that TDD and company are for code you plan to
keep and maintain, and if you find supposedly transitory code lingering
a bit too long, you need to consider re-writing via TDD or
retro-fitting tests. (And even that should be tempered by the size of
the code in question.)

That’s not
meant to be a counterargument to the idea that testing is important
and a good practice. I do wonder whether things get a bit too
doctrinaire at times, though.

What I’ve often heard is that folks (such as myself) will hack out a
running version of something as a means of exploratory coding; it’s like
doing sketches prior to starting The Big Fresco. Then (so some claim),
that code is chucked and actual TDD begins: tests, failure, code to make
the tests pass.

In reality I think most exploratory coders will salvage the nicer parts
of the code and tack on unit tests, then move forward with TDD. There
may be some bias at play, where the time invested in creating that
initial code colors how one determines its quality.

So it becomes part of the process of writing the program, not just a
way to put protective armour and/or documentation around it later.

That sounds about right. Testing is not the goal; robust, accurate,
maintainable code is the goal.


James B.


On Sun, 3 Aug 2008, Phlip wrote:

David A. Black wrote:

In fact, one question that has come to intrigue me recently is the
question of whether there are any active programmers who have
literally written code test-first from the time they first learned how
to program onward. I suspect the answer is no – and if that’s the
case, it means that there is no evidence for the position that it’s
always, automatically bad to write code without a test.

Nobody holds such a position.

I hope you’re right, but I wonder sometimes.

But when your boss is breathing down the back
of your neck waiting for you to fix a bug (one that someone wrote by avoiding
tests), and you are “wasting time” trying to write a test case, instead of
just fixing it, your boss might need a little help understanding how tests
could have avoided the situation…

I agree; in general, it’s important (and difficult, in some
situations) to see testing as part of the process and not something
grafted onto it or digressive from it.

David

James B. wrote:

Isn’t the idea of TDD and BDD that you discover what classes and method
are needed by writing the tests first? So there should be no model in
place prior to writing the test; the initial failure of the test is what
drive the creation of the model.

When “greenfield” coding, one way to design is to force an
unpredictable design to emerge via TDD. That’s the high-end rationale
for TDD. You can also sketch a design, then see if you can write the
right tests to force it to emerge.

When writing new code that addresses some existing library or module,
you often do this:

def test_learn_foo
  foo = assemble_foo    # build the unfamiliar object
  result = foo.activate # exercise it
  p result              # inspect what actually comes back
end

Now you noodle around inside foo.activate - essentially learning what
it can do. Then you pin down your research with assertions. It’s all
good.
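
Pinning it down might look like this (the expected value is whatever
the noodling revealed; the names are illustrative):

def test_learn_foo
  foo = assemble_foo
  result = foo.activate
  assert_equal :engaged, result.status # replaces the p once known
end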

Shadowfirebird said…

Thanks everyone.

Now I need to go meditate to get my head around the idea of designing
methods to be testable. It’s quite a shift.

It is, but by writing the tests first - and/or as you go - you’ll find
that a lot of the design simply falls out in the wash.

If you ever have the unpleasant experience of retrofitting tests to
code, then you’ll really get a handle on why TDD promotes usable and
maintainable code. That is, if the experience doesn’t drive you
insane.