Monday, November 19, 2007

Automating tests vs. test-automation

In the last couple of years the practice of testing has undergone more than superficial changes. We have turned our art into engineering, introduced process models, come up with best practices, and developed tools to support our daily work and make each test engineer more productive. Some tools target test execution: they aim to automate the repetitive steps a tester would take to exercise functions through the user interface of a system in order to verify its functionality. I am sure you have all seen tools like Selenium, WebDriver, Eggplant, or one of the many proprietary solutions, and that you have learned to love them.


On the downside, we observe problems when we employ these tools:

  • Scripting your manual tests this way takes far longer than just executing them manually.
  • The UI is one of the least stable interfaces of any system, so we can only start automating quite late in the development phase.
  • Maintenance of the tests takes a significant amount of time.
  • Execution is slow, and sometimes cumbersome.
  • Tests become flaky.
  • Tests break for the wrong reasons.
Of course, we can argue that none of these problems is particularly bad, and the advantages of automation still outweigh the cost. This might well be true. We learned to accept some of these problems as 'the price of automation', whereas others are met by some common-sense workarounds:
  • It takes a long time to automate a test—well, let's automate only the tests that are important and will be executed again and again in regression testing.
  • Execution might be slow, but it is still faster than manual testing.
  • Tests cannot break for the wrong reason—when they break, we have found a bug.
In the rest of this post I'd like to summarize some experiences I had when I tried to overcome these problems, not by working around them, but by eliminating their causes.

Most of these problems are rooted in the fact that we are just automating manual tests. By doing so we fail to ask whether the added computational power, access to different interfaces, and faster execution speed should change the way we test systems.

Given that a system exposes different interfaces to its environment—e.g., the user interface, an interface between front-end and back-end, an interface to a data store, and interfaces to other systems—it is obvious that we need to look at each and every interface and test it. More than that, we should not only take each interface into account but also avoid testing the same functionality in too many different places.

Let me introduce the example of a store-administration system which allows you to add items to the store, see the current inventory, and remove items. One straightforward manual test case for adding an item would be to go to the 'Add' dialogue, enter a new item with quantity 1, and then go to the 'Display' dialogue to check that it is there. To automate this test case you would script exactly these steps through the user interface, as in the sketch below.
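A minimal sketch of such a UI-level script, using Selenium WebDriver; the URL and the element ids ('itemName', 'quantity', 'submit', 'inventory') are invented for illustration, since they depend entirely on how the front-end is built:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class AddItemUiTest {
        public static void main(String[] args) {
            WebDriver driver = new FirefoxDriver();
            try {
                // Walk through the 'Add' dialogue exactly as a manual tester would.
                driver.get("http://localhost:8080/store/add");
                driver.findElement(By.id("itemName")).sendKeys("Widget");
                driver.findElement(By.id("quantity")).sendKeys("1");
                driver.findElement(By.id("submit")).click();

                // Verify through the 'Display' dialogue, again via the UI.
                driver.get("http://localhost:8080/store/display");
                String inventory = driver.findElement(By.id("inventory")).getText();
                if (!inventory.contains("Widget")) {
                    throw new AssertionError("Item was not added to the inventory");
                }
            } finally {
                driver.quit();
            }
        }
    }

Note how every single step (navigation, data entry, verification) runs through the UI; this is exactly what makes such scripts slow, brittle, and expensive to maintain.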

Probably most of the problems I listed above will apply. One way to avoid them in the first place is to figure out what the system looks like inside.
  • Is there a database? If so, the verification should probably not be performed against the UI but against the database.
  • Do we need to interface with a supplier? If so, how should this interaction look?
  • Is the same functionality available via an API? If so, it should be tested through the API, and the UI should just be checked to interact with the API correctly (see the sketch right after this list).
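To make this concrete, here is a rough sketch of the same 'add item' test written against the back-end instead of the UI. StoreApi, its addItem method, the JDBC URL, and the inventory table are all hypothetical names, standing in for whatever API and schema your system actually exposes:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class AddItemApiTest {
        public static void main(String[] args) throws Exception {
            // Exercise the functionality through the API instead of the UI.
            StoreApi api = new StoreApi();   // hypothetical back-end API
            api.addItem("Widget", 1);

            // Verify against the data store, not against the 'Display' dialogue.
            try (Connection conn =
                     DriverManager.getConnection("jdbc:hsqldb:mem:store")) {
                PreparedStatement stmt = conn.prepareStatement(
                        "SELECT quantity FROM inventory WHERE name = ?");
                stmt.setString(1, "Widget");
                ResultSet rs = stmt.executeQuery();
                if (!rs.next() || rs.getInt("quantity") != 1) {
                    throw new AssertionError("Widget not stored with quantity 1");
                }
            }
        }
    }

No browser, no rendering, no screen-scraping: the test talks directly to the interfaces that actually implement and persist the functionality.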
Answering these questions will probably yield a higher number of tests, some of them much 'smaller' in their resource requirements and far faster to execute than the full end-to-end tests. This approach will allow us to:
  • write many more tests through the API, e.g., to cover many boundary conditions,
  • execute multiple threads of tests on the same machine, giving us a chance to spot race conditions (see the sketch after this list),
  • start earlier with testing the system, as we can test each interface when it becomes 'quasi-stable',
  • make maintenance of tests and debugging easier, as the tests break closer to the source of the problem,
  • require fewer machine resources, and still execute in reasonable time.
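As an illustration of the race-condition point, a sketch that hammers the hypothetical StoreApi from above with concurrent adds; getQuantity is again an invented name:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class ConcurrentAddTest {
        public static void main(String[] args) throws Exception {
            StoreApi api = new StoreApi();   // hypothetical back-end API
            ExecutorService pool = Executors.newFixedThreadPool(10);

            // Fire the same API call from ten threads at once.
            for (int i = 0; i < 10; i++) {
                pool.execute(() -> api.addItem("Widget", 1));
            }
            pool.shutdown();
            pool.awaitTermination(30, TimeUnit.SECONDS);

            // If adds are not atomic, lost updates show up here.
            if (api.getQuantity("Widget") != 10) {
                throw new AssertionError("Race condition: lost updates on 'Widget'");
            }
        }
    }

A test like this is practically impossible to express through the UI, but takes only minutes to write against an API.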
I am not advocating the total absence of UI tests here. The user interface is just another interface, and so it deserves attention too. However, I do think that we currently focus most of our testing efforts on the UI. The common attitude that the UI deserves the most attention because it is what the user sees is flawed: even a perfect UI will not satisfy a user if the underlying functionality is broken.

Neither should we abandon our end-to-end tests. They are valuable, and no system can be considered tested without them. Again, the question we need to ask ourselves is what the right ratio between full end-to-end tests and smaller integration tests is.

Unfortunately, there is no free lunch. In order to change the style of test-automation we will also need to change our approach to testing. Successful test-automation needs to:
  • start early in the development cycle,
  • take the internal structure of the system into account,
  • have a feedback loop to developers to influence the system-design.
Some of these points require quite a change in the way we approach testing. They are only achievable if we work as a single team with our developers. It is crucial that there is a completely free flow of information between the different roles in this team.

In previous projects we were able to achieve this by
  • removing any spatial separation between the test engineers and the development engineers. Sitting at the next desk is probably the best way to promote information exchange,
  • using the same tools and methods as the developers,
  • getting involved in daily stand-ups and design discussions.
This helps not only in getting involved really early (there are projects where test development starts at the same time as development), but it is also a great way to give continuous feedback. Some of the items in the list call for very development-oriented test engineers, as it is easier for them to be recognized as peers by the development teams.

To summarize, I have found that a successful automation project needs:
  • to take the internal details and exposed interfaces of the system under test into account,
  • to have many fast tests for each interface (including the UI),
  • to verify the functionality at the lowest possible level,
  • to have a set of end-to-end tests,
  • to start at the same time as development,
  • to overcome traditional boundaries between development and testing (spatial, organizational and process boundaries), and
  • to use the same tools as the development team.
