Avoiding the Temptation of Record and Playback Acceptance Tests
I’m sold hook, line and sinker on the value of automated testing. Unit tests, integration tests, acceptance tests – I love ’em all. I joined a new company a few months back (more on that in a later post) and I’ve had the opportunity to get back into some acceptance testing using Selenium/WebDriver.
The allure of record and playback
Selenium IDE, the open source recorder, makes it trivial to create web tests. Hit record, click around, then play the recording back to verify that your web application works as expected. It’s so easy, in fact, that you barely have to think about what you’re actually testing. To be fair, if you’re doing this then you’re already doing more automated testing than a lot of teams manage, and it may be enough for simple smoke tests. There are a number of reasons why I avoid it, though.
The problems with record and playback
OK, I admit that it’s incredibly fast to get a test up and running with record and playback. Unfortunately, writing tests is only half the battle; your tests have to evolve along with your codebase. Take a look at the selectors the IDE uses to locate the elements it clicks and interacts with. If you came back to this test in a month, would you understand its intent? If you add a new element – or worse, a whole new feature – to your application, can you easily find a seam in your tests to inject the new verification? Tests recorded through a UI turn out to be surprisingly difficult to maintain.
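For a sense of what I mean, here’s the kind of step a recorder typically spits out. This is a made-up export, not from any real project, but it’s representative of the locators you end up with:

```python
# Hypothetical recorded steps: brittle positional XPath and an opaque id.
# Which feature is this exercising? Good luck answering that in a month.
driver.find_element_by_xpath("//div[3]/div[2]/table/tbody/tr[5]/td[2]/a").click()
driver.find_element_by_id("ctl00_Main_ContentPlaceHolder1_btnSubmit").click()
```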
An alternative approach
I don’t always test through the UI, but when I do, I prefer Selenium WebDriver. WebDriver is an open source API for writing browser-based tests. In my current role I’m using the Python driver (along with some other awesome tools that I’ll get into soon), but there is support for a handful of other languages. Using the driver API, we can write tests with all the power of our programming language of choice. We can perform intelligent element extractions and assertions. Tests become readable and maintainable.
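Here’s a minimal sketch of what that looks like in Python. The URL and selectors are hypothetical; the point is how deliberate and readable the test becomes once you’re writing real code:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By


def test_search_returns_results():
    # Assumed app URL and selectors -- adjust for your application.
    driver = webdriver.Firefox()
    try:
        driver.get("http://localhost:8000/search")
        search_box = driver.find_element(By.NAME, "q")
        search_box.send_keys("blue widgets")
        search_box.submit()

        results = driver.find_elements(By.CSS_SELECTOR, ".search-result")
        assert len(results) > 0, "expected at least one search result"
    finally:
        driver.quit()
```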
We now have to think about what we’re testing, which is just as valuable as all of the technical benefits. If we have well-written stories and specifications, we can turn those specifications into acceptance test scenarios.
Imagine a story in the e-commerce space where the customer receives an automatic discount if they put at least $200 of products into their cart. This would be nearly impossible to understand as a record and playback test. But we can easily make this scenario crystal clear when writing the tests. Not only that, but we can clearly illustrate the edge case expectations, such as when the cart contains exactly $200 of products. Heck, we can use a variable and have a single place to make a change when the product owner decides that they want to bump the amount to $250.
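A sketch of that scenario might look like the following. Here cart_page and checkout_page are hypothetical page objects (more on page objects below), and the exact discount rules are invented for illustration:

```python
# One place to change when the product owner bumps the threshold to $250.
DISCOUNT_THRESHOLD = 200.00


def test_discount_applied_at_exactly_the_threshold(cart_page, checkout_page):
    # "At least $200" means exactly $200 should qualify -- the edge case
    # is spelled out rather than buried in a recording.
    cart_page.add_product("widget", price=DISCOUNT_THRESHOLD)
    checkout_page.open()
    assert checkout_page.discount_applied()


def test_no_discount_just_below_the_threshold(cart_page, checkout_page):
    cart_page.add_product("widget", price=DISCOUNT_THRESHOLD - 0.01)
    checkout_page.open()
    assert not checkout_page.discount_applied()
```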
Well-layered tests will set you free
Using the WebDriver API is a huge leap forward over record and playback. But if all we have is a long list of element selections, invocations and assertions, we’re not much better off than we were. Your tests should be treated with the same respect as your production code: that means putting proper abstractions in place, naming things clearly, and staying DRY and SOLID.
One of the most useful abstractions when testing your web application is the Page Object. The idea is to keep a clean separation of tests and HTML/page structure. This ends up making tests even more readable and maintainable by providing an API for your pages. No more hunting through test files when you make changes to your page’s DOM – everything is in a single place.
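Here’s a small sketch of a page object for the checkout page used in the discount tests above. Every selector and URL in it is an assumption about the app under test; what matters is that they live in exactly one place:

```python
from selenium.webdriver.common.by import By


class CheckoutPage(object):
    """Hypothetical page object wrapping the checkout page's DOM details."""

    def __init__(self, driver, base_url="http://localhost:8000"):
        self.driver = driver
        self.base_url = base_url

    def open(self):
        self.driver.get(self.base_url + "/checkout")

    def cart_total(self):
        # The CSS selector lives here, not scattered across every test.
        text = self.driver.find_element(By.CSS_SELECTOR, ".cart-total").text
        return float(text.lstrip("$"))

    def discount_applied(self):
        return len(self.driver.find_elements(By.CSS_SELECTOR, ".discount-line")) > 0
```

When the page’s DOM changes, only this class changes; the tests keep reading in terms of cart totals and discounts.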
A well stocked tool belt
You can use WebDriver from your normal unit testing framework and it will work brilliantly. But there are many excellent supporting tools for acceptance testing and behavior-driven development in all the major languages. In Python, I’m using Behave and loving it – .NET teams can use SpecFlow or StoryQ, and in Java there’s JBehave.
Behave uses the Gherkin natural language format, which lets us describe the desired behavior of the application in a form that is easy for both business and technical folks to read and write. We then provide an implementation for each step in each scenario, using the API provided by our page objects to drive the tests and gather the values used in our assertions. Running the tests produces output that reads just like our stories, which we can use to show our stakeholders exactly how the application behaves.
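Continuing the discount example, a minimal sketch might look like this. The feature text, file paths, and page objects are all hypothetical:

```gherkin
# features/cart_discount.feature
Feature: Automatic cart discount
  Scenario: Discount applies when the cart reaches the threshold
    Given my cart contains $200 of products
    When I view the checkout page
    Then I see the automatic discount
```

```python
# features/steps/cart_discount_steps.py
from behave import given, when, then


@given('my cart contains ${amount:d} of products')
def step_fill_cart(context, amount):
    # context.cart_page and context.checkout_page would be created in
    # environment.py; both are hypothetical page objects like the one above.
    context.cart_page.add_product("widget", price=amount)


@when('I view the checkout page')
def step_open_checkout(context):
    context.checkout_page.open()


@then('I see the automatic discount')
def step_verify_discount(context):
    assert context.checkout_page.discount_applied()
```

The scenario text is what the product owner reads; the steps are where WebDriver and the page objects do the work.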
Try doing that with record and playback!