Project A: One Cannot Live by Scrum Alone

This project was to develop a replacement trading system, a situation that is now common because many organizations are moving to second- and third-generation systems. Scrum was chosen to manage the development work.

A fortnightly Scrum cycle was chosen with daily Scrum meetings. Analysts, developers, and testers attended the Scrums and gave their reports based on the following three questions: What did I do yesterday? What am I doing today? Do I face any problems where I need help?

Analysts decomposed specifications into stories and all could monitor progress on the storyboard in the project office. Burndown charts were displayed so everyone could see the velocity and projected completion dates of the different development parts.

The first problem we found was that the charts were unreliable, specifically the projected completion dates. They would remain static for some time and then slip suddenly. Surely one purpose of Scrum is to identify slippage quickly? The problem appeared to lie in calculating velocity accurately. As an analogy, the onboard computer in my car has to solve a similar problem to predict the range of the car. It knows how much petrol is left in the tank and past fuel consumption. The question is, how do you use past consumption in a predictor algorithm? Do you take the last 25 miles' driving as representative, or 50, or 75? Likewise, do you take the last Scrum cycle or use more? Furthermore, how do you normalize story counts so they are comparable across cycles, given that stories ranged in size by a factor of 10?
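To make the calibration question concrete, here is a minimal sketch of a rolling-window velocity forecast. It is illustrative only: the sprint figures, the window sizes, and the projected_completion helper are all hypothetical, not taken from the project. The point is simply that the projected date swings with the choice of history window, and that story points must first be normalized to a comparable scale.

```python
from datetime import date, timedelta

# Hypothetical sprint history: normalized story points completed per
# fortnightly cycle. Normalizing by estimated size is what makes cycles
# comparable when individual stories differ in size by a factor of 10.
completed_points = [34, 31, 8, 42, 30]   # illustrative figures only
remaining_points = 120
cycle_length_days = 14

def projected_completion(history, remaining, window, today=None):
    """Project a completion date from the average velocity of the last
    `window` cycles. Choosing the window is the calibration problem:
    too short and one odd sprint skews the forecast, too long and
    recent slippage is hidden."""
    today = today or date.today()
    recent = history[-window:]
    velocity = sum(recent) / len(recent)      # points per cycle
    cycles_left = remaining / velocity        # fractional cycles remaining
    return today + timedelta(days=cycles_left * cycle_length_days)

# The forecast differs markedly depending on how much history is used,
# which is one reason projected dates can look static and then slip.
for window in (1, 3, 5):
    print(window, projected_completion(completed_points, remaining_points, window))
```

Whatever window and normalization scheme are chosen, the team needs to agree on them and apply them consistently; otherwise the burndown chart gives a false sense of precision.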

Another effect we observed was that of sucking the pipeline dry at the end of a cycle. To maximize velocity, staff would be assigned to clearing as many stories as possible. For example, analysts would suspend their own work to focus on QA sign-off. Although the sprint velocity looked good, this created backlogs that were not visible in any plan and could delay later cycles.

However, the larger problems were as follows. Trading systems by their nature interface with many other systems. Considerable work was being done by these systems' owners to develop and test their interfaces with the trading system. They had to plan and execute their work and ensure that it was ready in time for final integration and user acceptance testing. Although analysts worked with the external organizations to define and agree the interface specifications, there was no coherent development and delivery plan. It was left to testers to poll the interfacing systems' owners to determine availability. This was problematic because there was no project plan or reporting mechanism that showed all these activities and their dependencies. In short, slippage in external work often came as a surprise.

One conclusion I draw from this experience is that Scrum, although easy to describe, is hard to implement without proper calibration to measure velocity accurately. More importantly, any project with external dependencies requires a plan that shows them and an associated reporting mechanism driven by the project office. In this respect Scrum appears limited, and the danger is that people do not understand its limitations.

