Tim Pease
2/15/2007 8:01:00 PM
On 2/15/07, James Edward Gray II <james@grayproductions.net> wrote:
> On Feb 15, 2007, at 1:45 PM, Daniel Berger wrote:
>
> > On Feb 15, 12:32 pm, Alex Young <a...@blackkettle.org> wrote:
> >> Daniel Berger wrote:
> >>> Hi all,
> >>
> >>> What's the general approach folks use for skipping tests?
> >>> Sometimes I
> >>> have some tests that I want to skip based on platform (usually MS
> >>> Windows). I saw the 'flunk' method, but that's considered a failed
> >>> test. I'm looking for something that doesn't treat it as success or
> >>> failure.
> >>
> >> If you were to factor the platform-dependent tests out into their own
> >> module which you can conditionally include into the test case, I
> >> think
> >> you'd get what you were after.
> >
> > It's not a bad idea, but that still wouldn't explicitly indicate to a
> > user that tests had been skipped - they would merely see fewer tests
> > run. Plus, it's more work and I'm lazy. :)
>
> I think it's a much better design though.
>
> For an example of my concerns, what happens if your proposed skip()
> is called after a few assertions are run in a test?
>
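(For reference, the conditional-include approach Alex describes might look something like the sketch below; the module and method names are made up, and the Test::Unit boilerplate is omitted for brevity.)

```ruby
# Sketch of conditionally mixing platform-specific tests into a test
# case. Module and test names here are invented for illustration.
module WindowsTests
  def test_backslash_paths
    # Windows-only assertions would go here
  end
end

class MyTests
  def test_portable
    # assertions that run on every platform
  end

  # Only Windows runners ever see the extra tests; everyone else
  # simply runs fewer tests, with no indication anything was skipped.
  include WindowsTests if RUBY_PLATFORM =~ /mswin|mingw/
end
```

On other platforms the module is never mixed in, so those tests simply don't exist -- which is exactly Dan's complaint: the user sees fewer tests, not skipped ones.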
Calling skip() after assertions have run would be a design error, and an exception should be raised.

I like the idea of having a skip method that prints to the screen. It is a way to let the user know that something is not being tested, for one reason or another.

We have a unit test framework for our embedded software that has a skip method. We use it all the time when a test can't be run, either because the hardware is not available or because we're on the wrong platform. The caveat is that skip has to be called before any assertion in the test -- otherwise an exception is raised.
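A rough sketch of what such a skip helper might look like, outside any real framework (SkippedTest, Skippable, and the assertion counter are all hypothetical, not actual Test::Unit API):

```ruby
# Hypothetical skip helper in the spirit described above: it may only
# be called before any assertions have executed in the current test.
class SkippedTest < StandardError; end

module Skippable
  # Mark the current test as skipped, printing a notice to the screen.
  # Raises RuntimeError if an assertion has already run in this test.
  def skip(message = "test skipped")
    if @_assertion_count.to_i > 0
      raise "skip must be called before any assertions"
    end
    $stderr.puts "SKIP: #{message}"
    raise SkippedTest, message
  end

  # Assume the framework bumps this counter on every assertion.
  def add_assertion
    @_assertion_count = @_assertion_count.to_i + 1
  end
end
```

A runner could then rescue SkippedTest and report the test as skipped rather than as a pass or a failure.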
Future addition to Test::Unit ??
TwP