Definition of an Agile Testing Process plus Test Automation

Unit testing with NUnit, acceptance testing with FIT & FitNesse, continuous integration, and exploratory manual testing.

Starting with Simplicity

As the team was new to XP (and to an extent still skeptical about it), we did not attempt to take on all aspects of the method at once. Certain things made sense from a business perspective, and we concentrated on those first:

  • We knew that, although we had a reasonably good idea of the fundamentals of what the dashboard application needed to do, the detailed requirements would evolve over time.
  • We needed to demonstrate working software to our customer as quickly as possible.
  • We needed to do the simplest thing that could possibly work. Two-week iterations, with a focus on running tested software at the end of every iteration, ensured we followed the principle of simple design.

The first lesson we learned as a team was that every requirement can be cut down to something simpler, until you have distilled the essential minimal “core” accompanied by “layers” that improve one or more aspects of that core functionality. This realization gives you a lot of confidence to meet deadlines, which is a critical success factor for iterative development.

Test-Driven Development: Unit Testing

Perhaps the most important practice that we embraced from the beginning was TDD. Unit testing is something that developers will avoid if they perceive it to slow them down, or if it forces them to switch context from their programming routine. Fortunately, the XP community has created xUnit, a remarkable family of open-source unit testing tools, each implemented in the same language developers use for their production code. As we were coding in C#.NET, we used NUnit. It is a very elegant tool that makes unit test creation, management, and execution simple and highly effective.

Armed with the right unit testing tool, we adopted the test-driven approach from the outset. This proved to be an important success factor for us. Writing a unit test in order to “shape” the next small element of code can seem counterintuitive at first, but for us it quickly became a very natural and satisfying way to work. This was reinforced by the XP practices of collective code ownership and pair programming, so that, although the practice was new to us, all members of the team worked consistently and were able to support each other. The very high code coverage this gave us, combined with the very fast feedback regarding unexpected side effects and regression errors, became an addictive element of our process.
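
To make the test-first rhythm concrete, here is a minimal sketch of one development episode with NUnit. The class and test names are hypothetical, invented for illustration rather than taken from our dashboard code: the tests are written first and fail, then the simplest production code is added to turn the bar green.

    // Hypothetical NUnit tests, written before the production code exists.
    using NUnit.Framework;

    [TestFixture]
    public class WidgetTotalsTests
    {
        [Test]
        public void TotalIsZeroWhenNothingHasBeenAdded()
        {
            WidgetTotals totals = new WidgetTotals();
            Assert.AreEqual(0, totals.Total);
        }

        [Test]
        public void TotalIsTheSumOfAddedValues()
        {
            WidgetTotals totals = new WidgetTotals();
            totals.Add(3);
            totals.Add(4);
            Assert.AreEqual(7, totals.Total);
        }
    }

    // The simplest production code that makes both tests pass.
    public class WidgetTotals
    {
        private int total;

        public int Total
        {
            get { return total; }
        }

        public void Add(int value)
        {
            total += value;
        }
    }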

Enter the Build Captain

Once we had unit-level TDD ingrained in our development process, and a very large suite of tests, we began to observe and reflect upon our tests and the maturity of our technique. We read articles about good unit testing practices and patterns. We considered how we could get our slower tests to run faster. We learned about mock objects and specialist frameworks for testing ASP pages.
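
A mock object (or, strictly speaking, a hand-rolled stub) is what lets a slow test run entirely in memory instead of against a real data source. The sketch below shows the general shape, using hypothetical names rather than our real interfaces: the production class depends on an interface, and the test supplies a fake implementation that returns canned data instantly.

    using NUnit.Framework;

    // Hypothetical abstraction over a slow dependency (e.g. a database query).
    public interface ISalesFigures
    {
        decimal TotalForRegion(string region);
    }

    // Hand-rolled test double: returns canned data, so the test never touches
    // a database or the network and runs in milliseconds.
    public class FakeSalesFigures : ISalesFigures
    {
        public decimal TotalForRegion(string region)
        {
            return 1250m;
        }
    }

    // Hypothetical production class under test; it depends only on the interface.
    public class RegionSummary
    {
        private readonly ISalesFigures figures;

        public RegionSummary(ISalesFigures figures)
        {
            this.figures = figures;
        }

        public string Headline(string region)
        {
            return region + ": " + figures.TotalForRegion(region);
        }
    }

    [TestFixture]
    public class RegionSummaryTests
    {
        [Test]
        public void HeadlineCombinesRegionNameAndTotal()
        {
            RegionSummary summary = new RegionSummary(new FakeSalesFigures());
            Assert.AreEqual("North: 1250", summary.Headline("North"));
        }
    }

Dedicated mock-object frameworks take this further by generating such doubles and verifying the calls made on them, but the underlying idea is the same.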

This period coincided with a new member joining our team. Owen was a test consultant who was looking for a change from what he was doing. We knew he was bright and capable, and certainly didn’t want to risk losing him from the company, so we were glad to welcome him to the team. He was appointed originally as our first dedicated tester, although it was clear he had the technical ability to do development work. I was interested in having Owen help us lift our testing process to a new level: not by being a manual tester to supplement our automated tests, but by implementing processes that helped us as a team to do more and better testing. He became our Build Captain.

Continuous Integration

Owen started by implementing continuous integration (CI). Our “development episode” process to date had been to get the latest code (including unit test code) from the source repository, build the application locally, run all unit tests, then do TDD for a task or feature, and run all unit tests again locally before checking in the code. This was fine up to a point, but it didn’t protect us against the occasional integration errors caused when simultaneous developer changes were incompatible with each other.

Owen set up a build server and installed the continuous integration tool [56]. He configured it to monitor our source code repository so that, after a ten-minute “quiet period” following any check-in, an automatic task would get all the code, increment the build number (this was something we baked into the app), perform a clean build of the whole application, and then run all the unit tests. If the software failed to build or any unit tests failed, the build was marked as broken.

This was visible on the CI tool dashboard, and an automatic email would be sent to the developer responsible for that check-in. A little icon in every developer’s PC system tray turned from green to red. For added effect, the build server would announce its dissatisfaction to the whole open-plan office with a loud Homer Simpson “D’oh!!” As if that wasn’t enough, Owen also rigged up a pair of green and red lava lamps to the server, which duly switched on the appropriate lamp according to the good or bad state of the latest build. (For the benefit of the remote workers in the team, and anyone else in the company, a regular grab from a web-cam pointing at the lamps was put on our team collaboration web page.) All this created a highly visible sense of quality and a team culture of making quality our top priority.

Green Bar Addiction

In agile development and testing, you are theoretically working through a list of tasks in priority order. Therefore, the new feature you are working on now is logically less important than fixing the defects in what has already been built. For our team, a “broken build” became a stop-what-you-are-doing-and-fix-it situation. We were happy to explain the lava lamps to any curious passerby. But we didn’t want to do that when the big red one was bubbling away. Whatever else needed to be done, fixing the build came first, and even though the build server pointed the finger at the last check-in, it was in the whole team’s interest to pitch in to get it fixed. Once the green lamp was glowing again, we could proceed with confidence in the knowledge that order had been restored.

This aspect of TDD is something of a revelation, even to a seasoned tester and developer: the only acceptable pass rate for unit tests is 100%, because that is the only way to get the “green bar” in the unit testing framework. This criterion is absolutely necessary for getting a developer to accept responsibility for a broken build. Since all the tests were passing before you checked in your code, your code must be the cause of any failures: no matter how much you believe your change had “nothing to do with it,” the tests are now failing. If a test suite already has failures in it, a developer would most likely dismiss any new failures as being unrelated to their code or caused by a flaky and unreliable test suite.

Getting FIT: Acceptance-Test-Driven Development

At this stage we were addicted to testing, proud of our healthily growing tree of tests and feeling satisfied to see the “green bar” at the end of every small development episode. However, we now had so many unit tests that we wondered if we were suffering from the syndrome of treating everything as a nail because we only had a hammer. With a smooth CI process in place, we knew we could afford to ramp up our testing further and introduce new tools for specific situations without it slowing us down.

I had been interested for some time in FIT, the open-source Framework for Integrated Test, having heard it mentioned on various occasions and liking what I saw when I investigated it on the web. Owen, our dedicated tester, soon had us kitted out with a FIT server and FitNesse, the wiki wrapper for FIT.

The “arrival” of FIT allowed us to reassess the way we viewed TDD. FIT is essentially a collaborative test tool, designed to allow a customer to express acceptance tests in tabular format (alongside HTML documentation of the features to be tested), which the development team subsequently automates with simple code “fixtures” that connect the test data to the system under test.
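
As a sketch of how this works in practice, the table below (built around a hypothetical business rule, not one from our dashboard) is the sort of thing the customer writes on the wiki, and the C# class beneath it is the thin fixture the team writes to wire the table to the system under test. It assumes the .NET port of FIT (fit.dll), whose column fixtures map plain column headers onto public fields and headers ending in “?” onto public methods; the exact naming conventions vary a little between FIT ports.

    // The customer's table on the wiki page might look like this:
    //
    //   |DiscountFixture          |
    //   |orderTotal  |discount?   |
    //   |100         |0           |
    //   |1000        |50          |
    //
    // Hypothetical fixture: FIT runs it once per data row, setting the input
    // field, calling the output method, and colouring the cell green or red.
    public class DiscountFixture : fit.ColumnFixture
    {
        // Input column: populated from the "orderTotal" cell.
        public decimal orderTotal;

        // Output column: the returned value is compared with the "discount?" cell.
        public decimal discount()
        {
            return DiscountCalculator.DiscountFor(orderTotal);
        }
    }

    // Stand-in for the real system under test.
    public static class DiscountCalculator
    {
        public static decimal DiscountFor(decimal orderTotal)
        {
            return orderTotal >= 1000m ? 50m : 0m;
        }
    }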

We changed our process for writing stories so that, instead of documenting them as Word files stored on a server, we created new stories as wiki pages on FitNesse. The beauty of this is that the story specifications can include the functional acceptance tests embedded in the descriptive documentation. It was easy for Edgar, our customer, to change details of stories and to select and prioritize our solution ideas. With the ability not only to write tests in the wiki but also to execute them and see the results, we were doing TDD at the acceptance test level. This is an important difference from unit-level TDD. For the first time, Edgar had true visibility into, and access to, the details of the tests that prove the behavior of each feature. He did not have to just trust our unit testing, nor did he have to wait to try out features running through the GUI: he could open the relevant wiki page in his web browser, check the tests for completeness and accuracy, create new cases if necessary, and with a click of a button see the test data light up in green if everything was passing correctly. “I’m loving FIT” was the way he summed it up.

Exploratory Testing

As the project matured, we rotated a number of testers from the consultancy through the project. One such tester, John, stood out as someone who seemed to fit very naturally into an agile project. He was keen, he was confident, and he had no problem telling developers what he felt they needed to know about the quality of the application. He also tuned in very quickly to where he could add value. He was surrounded by automated tests, but he saw developers being “too busy” to do the kind of exploratory testing that a professional tester is so good at. He put the application through its paces as a skeptical user would, creating interesting scenarios and unusual (but not unrealistic) usage sequences. He considered usability, noting how many mouse clicks it took to perform common operations. He measured performance. He changed browsers and browser settings, and tried the application through a mobile phone. He checked how good our graphics looked at different screen resolutions.

Importantly, John brought lists of his observations, bugs, and anomalies to our planning meetings for triage. That way, the developers could comment on likely solutions and estimate how long it would take to address each issue. Edgar was then able to balance the cost and benefit of these against other features he wanted to add to the product functionality.

