In my younger days, before I knew any better, many projects I worked on compiled and published their software manually.
cc and then copy these bits over there and then zip that directory and post it there.
Eventually, we figured out that we could write little scripts to automate all the tedious bits and make the whole process less fragile and more repeatable.
One day, I discovered the discipline of daily builds and tools like make, and my life got a whole lot better. Make gave us, in Elizabeth's handy phrase, "a place to put things".
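Make's vocabulary is small: a target, the targets it depends on, and the commands that build it. A minimal sketch (file names are illustrative):

```make
# target : dependencies
#     commands (each command line is run by the shell)
app: main.o util.o
	cc -o app main.o util.o

main.o: main.c util.h
	cc -c main.c

util.o: util.c util.h
	cc -c util.c
```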
Make was kind of fiddly, though, and it would stop working for strange reasons, so someone invented Ant to eliminate some of make's eccentricities. Ant is much easier to work with than make, but it does the same kind of thing: you define targets in terms of tasks, and targets depend on other targets. The abstraction is a little higher than shell scripts, but most people still find a way to make Ant scripts as brittle as Perl scripts and as entangled and hard to read as shell scripts.
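Ant has the same shape as make: targets depending on targets, with tasks inside. A minimal build file (project and file names are illustrative):

```xml
<project name="app" default="dist">
  <target name="compile">
    <javac srcdir="src" destdir="build"/>
  </target>
  <!-- "dist" depends on "compile", just as a make target
       depends on other targets -->
  <target name="dist" depends="compile">
    <jar destfile="dist/app.jar" basedir="build"/>
  </target>
</project>
```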
In the jargon of the interaction-designers, you can think about how a user interacts with a system at three levels: goals, activities and tasks.
The activities are essential but the tasks can vary dramatically depending on the implementation (I can walk into a real bank; I can use my cellphone to order traveler's checks; I can cash a check at Bob's Bail Bonds 'n' Bait Shop). Most strikingly, you can understand more of the essence of the requirement from the goal plus three activities than you can from the seven tasks.
Without the discipline of identifying the users' goals and the essential activities, systems get designed at the task level and the activities are just kind of assumed. In my waterfall days, I worked with one developer who wrote a functional spec thusly:
The customer wants a pie chart.
The technical design was:
There will be a pie chart.
and the detailed design said:
See technical design.
An interaction-designer might have spent a little time understanding what the customer really wanted to achieve and discovered that a pie chart was not required because the customer just wanted to know what proportion of diaper purchasers also buy beer.
It takes discipline to design at an appropriate level of abstraction and the right tools and the right vocabulary can help. Interaction-designers are trained to think about the goals and essential activities and they get quite irate when you naively ask them to design your dialogs for you.
I just started using Maven and, although it does the same kind of thing as Ant, it does it at a much higher level of abstraction. Instead of telling Maven to do this and then do that, you tell it where your project is and what kind of project it is, and it says, "Oh, I know how to build projects like that," and happily builds, tests and deploys your software for you.
Furthermore, if you follow Maven's conventions for where to put things, you don't have to configure it at all. It turns out that most Java projects have the same kinds of things (source folders, unit tests, resources, libraries, jars, dependencies, repositories) and, if you put them where Maven expects them to be, it will perform all the boring tasks for you as well as lots of exciting tasks that you didn't previously know were possible.
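If your sources are in `src/main/java` and your tests in `src/test/java` (Maven's standard layout), a Maven 2 POM can be little more than the project's coordinates (the group and artifact IDs here are illustrative), and `mvn package` will compile, test and jar the project:

```xml
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>my-app</artifactId>
  <version>1.0-SNAPSHOT</version>
</project>
```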
So, to recap, the history of build systems is a trend towards higher abstraction. Each step along the way gave us new names for common elements and ideas. With shell scripts, you could do anything you needed to but you had to re-invent everything from scratch each time. Make gave us targets, dependencies and commands. Ant is a better make but didn't really move us up the ladder of abstraction very much.
Finally, Maven gave us a veritable cornucopia of abstractions. If your project is structured like every other project (and why wouldn't it be?) you can skip all the tedium of configuring classpaths and versioning libraries and focus your creative energy on building great software.
Once upon a time, most projects tested their software manually. One day, a few pioneers figured out that writing code to test software is just like writing any other code and automated software testing was born.
For a long time, the state of the art was to write a little main() method until, eventually, Kent and Erich named a few elements (fixtures, settings, assertions, setup, teardown) that seemed to be common from test to test and gave us, in JUnit, a place to put them.
It was a simple idea and, to this day, people still ask why JUnit is so special since it does so little. The answer is that, once you have some common elements and a place to put them, all kinds of beautiful things can happen. Tools appear and ways of reporting and tracking results multiply - just as they did with make and ant - and, when I look at your tests, I can immediately figure out what's what because your tests look a lot like my tests.
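The elements that Kent and Erich named have a recognizable shape. Here is a minimal sketch of that shape, hand-rolled in plain Java (without the JUnit jar, so the lifecycle is visible; the class and assertion helpers are illustrative, not JUnit's own):

```java
import java.util.ArrayList;
import java.util.List;

public class StackTest {
    private List<String> stack;                    // the fixture

    void setUp()    { stack = new ArrayList<>(); } // runs before each test
    void tearDown() { stack = null; }              // runs after each test

    void testPushThenPop() {
        stack.add("first");
        assertEquals(1, stack.size());
        assertEquals("first", stack.remove(stack.size() - 1));
    }

    static void assertEquals(Object expected, Object actual) {
        if (!expected.equals(actual))
            throw new AssertionError("expected " + expected + " but was " + actual);
    }
    static void assertEquals(int expected, int actual) {
        assertEquals(Integer.valueOf(expected), Integer.valueOf(actual));
    }

    public static void main(String[] args) {
        StackTest t = new StackTest();
        t.setUp();              // the runner drives the fixture lifecycle
        t.testPushThenPop();
        t.tearDown();
        System.out.println("1 test passed");
    }
}
```

Because every test has this same shape, a runner can discover the test methods, drive the setup/teardown cycle, and report results uniformly, which is exactly why the tooling multiplied.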
Fast-forward a few years and things haven't changed much since JUnit appeared. There are a lot more JUnit-alikes and a lot of libraries to make JUnit more powerful but the level of abstraction has not budged. We still have fixtures and setup and teardown and assertions and, even though Fit makes testing accessible to a larger community and the syntax is a little different, we still think about tests the same way as we did 10 years ago. Most tests are still at the implementation/task level.
And, while I applaud the efforts of the BDD people and frameworks like RSpec and JBehave, I can't help thinking that they are just using different words for the same things when what we need is new things.
So what will the next generation testing tool look like?
JBehave with its Given, When, Then shows us a tiny hint, but I want more.
The concept of a user is useful and our users' existence in our tests should consist of so much more than some disembodied trivia in text boxes. The users' goals, desires, abilities and preferences determine how they interact with our applications. Interaction-designers use personas as a vehicle to understand which activities are necessary and how to implement them. Can't we find some way to make use of these personas in our tests?
Interactions are kind of important too but, more often than not, you have to reverse-engineer the user/system interactions from the forest of clicks, selects and pushes that make up a typical test.
I want to write little examples of how the user should interact with the application like this...
- Alex is a developer
- Karl is a developer
- MyFoundation is a portal application
- Karl is registered with MyFoundation
- Alex is registered with MyFoundation
- 'technology.foo' is a project
- Alex is a project administrator
- Karl is a developer
- Karl wants to be a committer
- Alex wants him to become a committer
Alex nominates Karl as a committer
- Alex must be logged in
- He enters Karl's details (name, email) and the project's mailing list
- He provides the reason for nomination: 'Karl is a nice man and dresses well'
- He confirms the nomination
- Karl should receive an email confirming his nomination
...rather than write tests at the task level...
(Stolen and adapted shamelessly from Ward's quite wonderful Process Explorer)
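To make the idea concrete, the nomination example above might look something like this as an activity-level test. This is a hypothetical sketch, not an existing framework: every class and method name here is invented, and the toy model behind the DSL stands in for a real system under test.

```java
import java.util.HashSet;
import java.util.Set;

public class NominationExample {
    // A toy stand-in for the system under test.
    static class Project {
        final String name;
        final Set<String> committers = new HashSet<>();
        final Set<String> notifications = new HashSet<>();
        Project(String name) { this.name = name; }
    }

    // A persona: interactions are expressed at the level of goals,
    // not checkboxes and buttons.
    static class Persona {
        final String name;
        Persona(String name) { this.name = name; }

        void nominatesAsCommitter(Persona nominee, Project project, String reason) {
            project.committers.add(nominee.name);
            project.notifications.add(nominee.name + ": " + reason);
        }
    }

    public static void main(String[] args) {
        Persona alex = new Persona("Alex");
        Persona karl = new Persona("Karl");
        Project myFoundation = new Project("MyFoundation");

        alex.nominatesAsCommitter(karl, myFoundation,
                "Karl is a nice man and dresses well");

        // Karl should receive an email confirming his nomination.
        if (!myFoundation.notifications.contains(
                "Karl: Karl is a nice man and dresses well"))
            throw new AssertionError("no confirmation sent to Karl");
        System.out.println("Karl was nominated and notified");
    }
}
```

The point is not this particular API but the level it works at: logging in, entering details and confirming are all implementation tasks hidden behind one activity.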
Now, the very best of us can already write tests at the activity level - just as a few people can write decent build scripts in make or Ant - but there is something about the current crop of test frameworks that makes people want to write tests that check boxes and click buttons.
If we can find the right abstractions to communicate the intent of an example, we might be able to finally break free of the perception of functional tests as brittle, hard-to-understand, write-only artifacts. Even better, we might find a way to layer new tools on top of these abstractions so that, if I want to write my examples in plain text and you want to drag boxes around on a screen and she wants to use the UML, we can each use the form that speaks most clearly to us.
If we peer a little further into the mists of the future we might see a tool that can take one little example of nominating a committer and suggest lots of tests that vary small details automatically (What if Alex nominates Susan who doesn't have access to email? What if John does the nominating and John refuses to use a mouse?). Perhaps the examples might even survive the port from Struts to Ruby-on-Rails?
I want more names for my common things. I want to deal in goals and activities, not checkboxes and buttons. I want to give the system a few simple bits of information and have it tell me something I didn't know. I want to show my examples to everyone in the project community and have them lift up their understanding rather than drown it in permutations and edge cases and "what happens if the user types in Kanji?".
In short, I dream of the Maven of functional testing.
Posted by Kevin Lawrence at October 13, 2007 04:37 PM
I think you need to take the example two steps further. Let's say that the system is smart enough to ask "What if Alex nominates Susan who doesn't have access to email?" Someone's going to have to answer that question, probably. Two people, actually: some person (call him a "product director") is going to have to say what the system ought to do in that case. Some other person (call her a "programmer") is going to have to make the system do that thing.
How does maven-for-testing help them?
Posted by: Brian Marick on October 15, 2007 02:58 PM
I tried to explore how the example could be converted to actual code. It looks pretty easy to make many test cases out of the same script, varying the persona.
Sample Script (if we define an actor as a Nominator or a Nominee with attributes and actions):
Nominator.Name = Alex
Nominator.MustBeLoggedIn (can be implemented through the GUI or directly to the underlying code)
Nominator.EnterNominationDetails(Name, Email…) can easily connect to a data source for systematic or random test generation
Nominator.Reason = 'Karl is a nice man and dresses well'
Posted by: Pierre on October 15, 2007 08:30 PM
Amazing post. I remember asking Grady Booch a question, at one of the Rational User Conferences, about the importance of abstraction in software development and when we will say that we are done with abstractions. One of the highlights of his answer was: if you find your existing process inefficient, and there is value in abstracting it, someone or other will certainly abstract it.
At first glance it might seem difficult to think of abstraction in automated testing. But it is not impossible; maybe in the future we will have tools to which we can say "test checkout" and they will understand that there is a shopping cart, that there are some items, and that the amount shown after checkout should equal the sum of the individual items. Until we get there, we repeat the same operations for every shopping cart.
Posted by: TestingGeek on October 16, 2007 06:55 AM
Great ideas, and a great follow up to the Agile Alliance Functional Testing Tool Vision Workshop.
I think you are sketching some next generation of system design here, which is radically different from current systems.
I think the biggest challenge is in how the system under test (SUT) can be implemented to allow this kind of abstraction in goals and activities, to enable better tests and your wish of letting the system tell you something you didn't know.
The SUT could maybe have a more "intelligent" interface that is more stateful and that can accept your Actors, Context and Motivation, and be set in a specific state which then could be acted upon (in line with what was a common "aha!" from the workshop).
What should the interface of such a system look like?
Maybe some given-when-then interface. Also, AI comes to mind here... :)
Posted by: Christian Schwarz on October 17, 2007 07:30 AM
I got feedback from Jeff that
a) most people don't understand Maven well enough to appreciate the analogy and
b) I took the analogy too far by wishing for a 'Maven for Testing'.
Both are fine points, which I will address now. Most people (and I was one of them until 2 weeks ago) assume that Maven is just a 'Better Ant'. But it actually represents a dramatic shift in how to think about builds. The whole 'Do this, then that' paradigm has just gone away. They made it vanish by choosing a very different level of abstraction that resulted in a different paradigm for thinking about build systems.
My wish is for a similar shift in paradigm in testing (not strictly a Maven-for-testing).
There are certain common ideas in build systems that Ant does not model (source path, deployment, dependencies between projects, library versions) and it is up to the user to create those ideas from low-level primitives (variables, paths and filesets).
Similarly, in today's functional testing tools there is no concept of 'the user of the system' or even 'the system under test' except where we, the testers, create it from low-level primitives like variables and classes (as Pierre demonstrated).
A lot of the follow up discussion at http://tech.groups.yahoo.com/group/aa-ftt/ has been around, 'how can we make FIT or Selenium a bit better'. I think there is an opportunity to make a greater leap than that.
My ideas are necessarily fuzzy right now (I don't know what the new primitives are yet), but I hope to clarify them as I work through a real example which - oddly enough - pertains to support for Maven in JUnit Factory.
Watch this space!
Posted by: Kevin Lawrence on October 19, 2007 09:27 AM
It isn't there yet, but from what I saw at a CITCON session, you should take a look at Concordion when it is released later this month.
Posted by: Jeffrey Fredrick on October 20, 2007 11:00 PM