Phlip
7/15/2008 2:26:00 PM
David Mitchell wrote:
> The reason I need to do this is that we've got a small Watir-based DSL
> written to allow us to drive an app through code that looks sort of
> like:
> login("fred", "password")
> click_tab("Reports")
> click_drilldown("Asia")
> open_report("Some report title")
> ..
>
> It essentially lets us construct test cases in something like plain English.
Do your programmers run that after every edit?
> We built a few test cases using the DSL with a "normal"
> Test::Unit::TestCase approach, then showed them to our testers.
> Everyone was pretty excited about it; we can generate our own test
> data using the DSL, the testers can comprehend the DSL without having
> to dig into the nuts and bolts of the application itself, we can
> finally build a full regression test suite for an application that's
> basically a pig to drive using normal automation testing tools like
> QTP, the scripts we write using the DSL are easy to maintain over
> time, and everyone's happy.
Asking the question another way - do your developers write any tests?
> Once our testers got a look at that, they pointed out what should've
> been obvious all along: we can now get actual business users to write
> a lot of the test cases using the DSL, rather than using specialist
> testers. Rather than writing huge business requirements documents
> that have a habit of getting misinterpreted, we can get the business
> users to create what are essentially test cases using the DSL, and
> that gives the developers a reasonably unambiguous description of how
> things are supposed to work - we'll save a whole lot of time and money
> we're currently wasting translating between business-speak and
> developer-speak.
Awesome! Now, can your business side actually run the tests - such as thru a web
site with a "what if" interface?
> The only problem was the "scaffolding" code; apparently business users
> are incapable of writing/extending code that looks like
>
> class Testcases < Test::Unit::TestCase
> def test_1
> <<DSL stuff here>>
> end
> def test_2
> <<more DSL stuff here>>
> end
> ...
> end
>
> but they are capable of creating a bunch of test cases in individual
> files that contain nothing but the DSL commands. They'll use e.g.
> Notepad to create test cases in individual files that look like:
> login('fred', 'password')
> click_tab('Reports')
That sounds like FitNesse's territory. It does the Notepad thing, but with a
real GUI around it. Your customer team writes the test criteria in a DSL, and
FIT acts as a test runner.
> Fine with me - I just work here...
>
> So now I've got a situation where we're going to have business users
> generating loads of test cases using our DSL (without any of that
> nasty complicated Test::Unit::TestCase stuff), saving them in flat
> files, and we need to be able to run some unspecified number of
> test cases that will change over time. What I need to be able to do
> is something like:
>
> Dir.glob("app_test_cases/**/*.app").each do |test_script_file|
> <<grab the content of each file, build a new Test::Unit::TestCase
> wrapper around it and eval it>>
> end
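That loop can work; a minimal sketch of it, where AppTestCases stands in for the real Test::Unit::TestCase subclass and login() for the Watir-backed DSL helper (and a fake flat-file script is created so the example is self-contained):

```ruby
require 'tmpdir'

# Hypothetical stand-in for the real TestCase subclass with the DSL mixed in.
class AppTestCases
  def login(user, password)        # stand-in for the Watir-backed DSL verb
    @user = user
  end
end

# Fake one flat-file test script so the example runs on its own.
dir = Dir.mktmpdir
File.open(File.join(dir, "smoke.app"), "w") do |f|
  f.puts "login('fred', 'password')"
end

Dir.glob("#{dir}/**/*.app").each do |test_script_file|
  # Derive a legal test method name from the file name.
  name = "test_" + File.basename(test_script_file, ".app").gsub(/\W/, "_")
  AppTestCases.send(:define_method, name) do
    # Run the DSL commands in the context of the test case, so login(),
    # click_tab() etc. resolve to the helpers defined there.
    instance_eval(File.read(test_script_file), test_script_file)
  end
end

puts AppTestCases.instance_methods(false).map(&:to_s).grep(/^test_/).inspect
# => ["test_smoke"]
```

Because each file becomes its own test_* method, the runner reports failures per script instead of one giant eval blowing up.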
Now the slight problem is you are using TestCase as your runner, when it's full
of features you don't need, and thin on features you actually do need. More below.
> That's no problem - I've got most of this working already; all I
> needed was the way to dynamically add new test cases, and you've now
> given me a way to do that. I'll have a play with it tomorrow.
I use define_method, in Rails, like this:
[all this controller's actions].each do |action|
define_method "test_one_action_#{action}" do
# test one common thing
end
end
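Outside Rails, a concrete version of that pattern looks like this; the action list and the per-action check are made up for illustration:

```ruby
ACTIONS = %w[index show create]   # stand-in for the controller's action list

class ActionChecks                # would be < Test::Unit::TestCase for real
  ACTIONS.each do |action|
    define_method("test_one_action_#{action}") do
      # test one common thing per action; the block closes over `action`
      raise "blank action name" if action.empty?
    end
  end
end

puts ActionChecks.instance_methods(false).map(&:to_s).sort.inspect
# => ["test_one_action_create", "test_one_action_index", "test_one_action_show"]
```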
The first thing you need to look for is if your failures are humane, or if they
are a huge mass of developer-friendly diagnostics and stack traces. The great
thing about a DSL (per RSpec) is (reputedly!) that faults can lead with clear
English too: "The frob should have returned 42 but it returned 43".
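In code, a humane failure is mostly just an assertion message written for the customer team; a minimal sketch, with "frob" borrowed from the example above and the helper name made up:

```ruby
# Hedged sketch: a DSL-level check that fails in plain English rather than
# with a developer-facing diff and stack trace.
def assert_frob_returns(expected, actual)
  return if expected == actual
  raise "The frob should have returned #{expected} but it returned #{actual}"
end

begin
  assert_frob_returns(42, 43)
rescue RuntimeError => e
  puts e.message
  # The frob should have returned 42 but it returned 43
end
```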
--
Phlip