Quickcheck testing framework

Hey all,

I’m wondering if anyone has any experience with an automated test-case
generation tool like Quickcheck (for Erlang/Haskell). I’d be interested
in hearing any impressions, war stories, or dev workflows regarding a
tool like this. Talking off list to David C, he suggested that it might
be a complementary tool to a TDD/BDD framework like rspec.

It appears as though there is a similar project out there for ruby named
rushcheck (http://rushcheck.rubyforge.org/). Doesn’t seem like it’s
been maintained in a few years, though, and I’m guessing no one is using
it. I wonder if these automated test generation tools are more
appropriate in functional languages like Haskell and Erlang.

Scott

On May 16, 2010, at 12:54 PM, Scott T. wrote:

Hey all,

I’m wondering if anyone has any experience with an automated test-case generation tool like Quickcheck (for Erlang/Haskell). I’d be interested in hearing any impressions, war stories, or dev workflows regarding a tool like this. Talking off list to David C, he suggested that it might be a complementary tool to a TDD/BDD framework like rspec.

My thinking here is that it could be useful to drive out an initial
implementation using TDD, and at the point we think we’ve got the
solution we want, add something quickcheck-like to try to poke holes in
it. I’d probably then add new examples if any cases I hadn’t considered
were revealed through this process.

It appears as though there is a similar project out there for ruby named rushcheck (http://rushcheck.rubyforge.org/).

It’s up on github too: http://github.com/hayeah/rushcheck. Same guy has
this too: http://github.com/hayeah/rantly (a random data generator) -
looks like you could do stuff like:

Rantly.new.each(100) do
  thing.method_that_accepts_a_string(string).should have_some_quality
end

This would cause 100 random strings to be generated and passed to
thing.method_that_accepts_a_string. Assuming the matcher verifies some
set of rules about the outcomes, you’ve basically got Quickcheck.

The documentation is behind (the lib used to be called something else
and some method names have changed), but I think it wouldn’t take much
work to adapt this for use in RSpec.
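
For instance - just a sketch, assuming the block really is evaluated
against the generator as shown above (so that a bare string call yields
a fresh random string each time), and using rspec 1.x syntax - a
property-style example might look like:

require 'rubygems'
require 'rantly'
require 'spec'

describe "string concatenation" do
  it "never loses characters, whatever the string" do
    Rantly.new.each(100) do
      s = string  # random string from the generator
      (s + s).length.should == 2 * s.length
    end
  end
end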

On May 16, 2010, at 8:13 PM, David C. wrote:

On May 16, 2010, at 12:54 PM, Scott T. wrote:

Hey all,

I’m wondering if anyone has any experience with an automated test-case generation tool like Quickcheck (for Erlang/Haskell). I’d be interested in hearing any impressions, war stories, or dev workflows regarding a tool like this. Talking off list to David C, he suggested that it might be a complementary tool to a TDD/BDD framework like rspec.

My thinking here is that it could be useful to drive out an initial implementation using TDD, and at the point we think we’ve got the solution we want, add something quickcheck-like to try to poke holes in it. I’d probably then add new examples if any cases I hadn’t considered were revealed through this process.

Have you watched John Hughes’ presentation on the matter?

http://video.google.com/videoplay?docid=4655369445141008672#

It’s sort of interesting that he won’t do any TDD - he’ll let the
reduction process generate the “minimum” test case, and go from there
(that’s not explicitly stated in that video, although I’m pretty sure
I’ve heard him say it before).

If I had a tool like this, I’m guessing I’d probably have a workflow
like the following:

  1. Use the random test case generator, and fix any issues that were
    obvious.
  2. If something wasn’t obvious, I’d go and write a test case for it in
    a more traditional testing tool (rspec). I often use the debugger in
    conjunction with the spec runner, running the one test case with a
    debugger statement at the start of the test case (a sketch follows
    this list).
  3. Any regressions would (obviously) happen in the traditional tool.
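
For example, for step 2, with the ruby-debug gem and rspec 1.x (the
class and example here are invented):

require 'ruby-debug'

describe ParenMatcher do
  # The "minimum" failing case the generator found, pinned down
  # as a traditional example.
  it "rejects a lone closing paren" do
    debugger  # drops into the debugger when this example runs
    ParenMatcher.new.valid?(")").should be_false
  end
end

Then run just that one example with something like:
spec paren_matcher_spec.rb -e "rejects a lone closing paren"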

The big win with a tool like this is not testing boundary cases, it’s in
having the tool “write” the test cases for you. OTOH, I wonder if the
simplicity of the implementation would be sacrificed when taking this
approach.

Another drawback - I have no idea how such a tool would integrate with a
build server.

It appears as though there is a similar project out there for ruby named rushcheck (http://rushcheck.rubyforge.org/).

It’s up on github too: http://github.com/hayeah/rushcheck. Same guy has this too: http://github.com/hayeah/rantly (a random data generator) - looks like you could do stuff like:

Rantly.new.each(100) do
  thing.method_that_accepts_a_string(string).should have_some_quality
end

There’s a blog post about the library here, if anyone is interested:

http://www.metacircus.com/hacking/2009/04/10/look-ma-no-monads.html

I’ve been thinking about integrating a port of the ruby library faker
into scriptcheck, the javascript testing tool I’ve been working on:

http://github.com/Marak/Faker.js
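
(For anyone who hasn’t seen faker: rather than arbitrary random data it
generates realistic data. With the ruby gem, for example:

require 'rubygems'
require 'faker'

Faker::Name.name       # => e.g. "Kirsten Greenholt"
Faker::Internet.email  # => e.g. "kirsten.greenholt@corkery.info"

Each call returns a new random-but-plausible value, which would
complement a quickcheck-style generator’s purely arbitrary strings.)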

This would cause 100 random strings to be generated and passed to thing.method_that_accepts_a_string. Assuming the matcher verifies some set of rules about the outcomes, you’ve basically got Quickcheck.

Yeah, pretty much. One issue, though, is that you don’t want to hard
code the number of random generations. You’ll also want a convenient
way to run just one given test case easily (which rspec already has).

You’ll probably also want to separate these random generation tests from
the rest of your tests. Hitting a database 1000 times for one test is
going to be costly. Now that I’m thinking about it, this might make a
ton of sense in languages like Erlang or Haskell: since everything is
functional and there are no shared resources, those languages lend
themselves to parallelization.
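
One cheap way to avoid hard coding the count is to read it from the
environment - a sketch only (the variable name is made up, and the
requires are the same as in the earlier Rantly example) - with a small
default for everyday runs and a big one on demand:

# Small default while developing; crank it up on demand:
#   RANTLY_ITERATIONS=1000 spec spec/
ITERATIONS = (ENV['RANTLY_ITERATIONS'] || 10).to_i

describe "reverse" do
  it "preserves length for any random string" do
    Rantly.new.each(ITERATIONS) do
      s = string
      s.reverse.length.should == s.length
    end
  end
end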

Regards,

Scott

On May 16, 2010, at 11:10 PM, Scott T. wrote:

Have you watched John Hughes’ presentation on the matter?

http://video.google.com/videoplay?docid=4655369445141008672#

I haven’t yet. I’ll give it a look-see later today.

It’s sort of interesting that he won’t do any TDD - he’ll let the reduction process generate the “minimum” test case, and go from there (that’s not explicitly stated in that video, although I’m pretty sure I’ve heard him say it before).

If I had a tool like this, I’m guessing I’d probably have a workflow like the following:

  1. Use the random test case generator, and fix any issues that were obvious.
  2. If something wasn’t obvious, I’d go and write a test case for it in a more traditional testing tool (rspec). I often use the debugger in conjunction with the spec runner, running the one test case with a debugger statement at the start of the test case.
  3. Any regressions would (obviously) happen in the traditional tool.

The big win with a tool like this is not testing boundary cases, it’s in having the tool “write” the test cases for you. OTOH, I wonder if the simplicity of the implementation would be sacrificed when taking this approach.

My guess is that it would.

Another drawback - I have no idea how such a tool would integrate with a build server.

What integration point would there need to be? It’s just Ruby.

http://www.metacircus.com/hacking/2009/04/10/look-ma-no-monads.html

I’ve been thinking about integrating a port of the ruby library faker into scriptcheck, the javascript testing tool I’ve been working on:

http://github.com/Marak/Faker.js
http://github.com/smtlaissezfaire/scriptcheck (a quickcheck in javascript)

This would cause 100 random strings to be generated and passed to thing.method_that_accepts_a_string. Assuming the matcher verifies some set of rules about the outcomes, you’ve basically got Quickcheck.

Yeah, pretty much. One issue, though, is that you don’t want to hard code the number of random generations.

Why not? Wouldn’t it make sense to have smaller numbers in some cases
and larger ones in others?

You’ll also want a convenient way to run just one given test case easily (which rspec already has). You’ll probably also want to separate these random generation tests from the rest of your tests.

Exactly! This is what I had in mind when I said “at the point we think
we’ve got the solution we want, add something quickcheck-like to try to
poke holes in it.” The steps would be:

  1. Drive out minimal implementation with specs
  2. Write some quickcheck-ish tests in a separate location
  3. Run them
  4. If there are any failures, use them to evaluate and enhance the specs
    that I’d already written

This would really amplify the distinction between specs and tests. Plus,
the tests would be indirectly testing the specs as much as they are
testing the implementation. Of course, this is all theoretical. If we
could just use quickcheck and still get all the documentation and
implementation-driving benefits of TDD, I’d probably move in that
direction myself :)
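
To make the separation mechanical, the quickcheck-ish tests from step 2
could live in their own directory with their own rake task. A sketch
using rspec 1.x’s rake task in the Rakefile (the gen/ directory name is
just an invented convention):

require 'spec/rake/spectask'

# Fast, example-based specs: run these continuously (autotest).
Spec::Rake::SpecTask.new(:spec) do |t|
  t.spec_files = FileList['spec/**/*_spec.rb']
end

# Slow, generative tests: run these on demand and on the CI server.
Spec::Rake::SpecTask.new(:quickcheck) do |t|
  t.spec_files = FileList['gen/**/*_spec.rb']
end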

Hitting a database 1000 times for one test is going to be costly.

If we used the process I just outlined, we could run the specs using
autotest (ironic), and only run the tests on demand and on the CI
server.

On 16 May 2010, at 18:54, Scott T. wrote:

Hey all,

I’m wondering if anyone has any experience with an automated test-case generation tool like Quickcheck (for Erlang/Haskell). I’d be interested in hearing any impressions, war stories, or dev workflows regarding a tool like this. Talking off list to David C, he suggested that it might be a complementary tool to a TDD/BDD framework like rspec.

It appears as though there is a similar project out there for ruby named rushcheck (http://rushcheck.rubyforge.org/). Doesn’t seem like it’s been maintained in a few years, though, and I’m guessing no one is using it. I wonder if these automated test generation tools are more appropriate in functional languages like Haskell and Erlang.

Scott

A year ago I was at the SPA conference (which I’m at right now) in
London. The whole week ended up being very Haskell-themed, with Simon
Peyton-Jones doing a keynote, a couple of different tutorials and some
impromptu dojos at lunchtime.

Then on the last day a couple of French guys ran a fascinating session.
They had a simple CLI Java app which simulated an ATM. The first part of
the exercise was to build a model of the behaviour of the ATM program.
The model was based on naming the different states that the ATM could be
in (waiting for PIN, waiting for amount, etc.) and the possible
transitions between those states (enter incorrect PIN, enter amount,
etc.). We then used Haskell to build a representation of that model in
code. The idea (which we didn’t have time to get to in the session) was
then to connect the Haskell model to the real application and use
QuickCheck to spray test-cases at it, through the Haskell model.
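
Not the Haskell we wrote, but a rough Ruby sketch of the shape of that
model (all state and action names invented), just to show the idea:
states and transitions as data, plus the random walk a QuickCheck-style
driver would take through them:

# The ATM model: state => { action => next state }
TRANSITIONS = {
  :waiting_for_card   => { :insert_card       => :waiting_for_pin },
  :waiting_for_pin    => { :enter_correct_pin => :waiting_for_amount,
                           :enter_wrong_pin   => :waiting_for_pin },
  :waiting_for_amount => { :enter_amount      => :dispensing },
  :dispensing         => { :take_cash         => :waiting_for_card }
}

# Random walk through the model. At each step a real harness would
# fire the same action at the application and check that the app's
# observable state agrees with the model's next_state.
state = :waiting_for_card
100.times do
  action, next_state = TRANSITIONS[state].to_a.sort_by { rand }.first
  state = next_state
end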

One of the interesting things for me was how similar that model felt to
a set of Cucumber features. In Cucumber we express behaviour in terms of
scenarios where we put the application into a particular state,
interfere with it in some way, and then see what state it has
transitioned into. So when we write Gherkin features for our programs
we’re really building up a model of how we expect the application to
behave.

I wondered at the time whether there was any way to either actually
generate a Haskell wrapper model from Gherkin code (in order to use
QuickCheck) or somehow write a QuickCheck-like program in Ruby that
understood Gherkin.

Then I went back to work and forgot all about it.

On 17 May 2010, at 11:38, David C. [email protected] wrote:

I’m wondering if anyone has any experience with an automated test-case
generation tool like Quickcheck (for Erlang/Haskell). I’d be interested
in hearing any impressions, war stories, or dev workflows regarding a
tool like this. Talking off list to David C, he suggested that it might
be a complementary tool to a TDD/BDD framework like rspec.

This is something I’ve been playing around with in Cucumber for quite
a while. My main thought is that I want to make more use of dead CPU
time: when I’m sleeping I want my tests and system being exercised.
What I’m thinking of is a tool in Cucumber a little like heckle,
mutating the inputs (which in cucumber are regexp matches) and
examining the output.

A cucumber test can be seen as a black box with inputs we can prod and
outputs we can observe.

What I want out of this is a report which shows me failures and what
inputs were used.

In order to prevent a sprawl of failures it would be useful to derive
from the failures rules which describe a group of failing tests,
e.g. “the test failed with any int between 1 and 100.”
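
A very rough sketch of that loop in Ruby - run_scenario here is
hypothetical, standing in for however Cucumber would re-run a scenario
with a substituted value:

failures = []

1000.times do
  input = rand(10_000) - 5_000  # mutated stand-in for the matched int
  # run_scenario is hypothetical: re-run the feature with `input`
  # substituted for the original regexp match
  passed = run_scenario('transfer money', input)
  failures << input unless passed
end

# Collapse the raw failures into a rule for the report.
unless failures.empty?
  puts "failed for ints between #{failures.min} and #{failures.max} " +
       "(#{failures.size} cases)"
end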

This is a different use case to RSpec’s, but I thought some of my
thoughts might be useful.