Definition of an Agile SOA Process

Now that we’ve spent some time dwelling on the challenges faced by teams producing SOA solutions, I’d like to tell you a little bit about my AgileSOA process, which we’ve been using for many years to deliver SOA solutions for our customers. I’ll do so by describing each of the key aspects that I think have made it work for us.

Specialized Subteams in the Iteration Workflow
Using the RUP as a framework, AgileSOA has a specialization of roles in the project team. These are organized into subteams, each having well-defined work products that they are responsible for, and each having their place in the overall iteration workflow. In this way, the “workers” in the team are divided up quite sensibly (if not a bit predictably) as follows:

  • The requirements team is responsible for creating specifications that describe the requirements for the software solution.
  • The design team is responsible for interpreting the requirements specifications and in turn creating corresponding design specifications. This includes describing the overall architecture of the software.
  • The implementation team, as any developer will tell you, is where the “real work” is done. Not only is the team responsible for creating the solution code, but also for producing developer tests that will live with the code to be used for regression testing.

    AgileSOA iterations across time boxes

  • The test team is responsible for rooting out any defects that exist in the code produced by the developers. These defects are passed back to the implementation team for fixing.

The SOA solution is delivered incrementally using iterations, with each iteration planned as four end-to-end time boxes, with each time box being used to focus the efforts of the four subteams just described. In practice this looks a bit like what is shown in the figure. The net result is that, in any given time box, each of your subteams will be working on a different iteration–each being one ahead of the other in the sequence of requirements, design, implementation, and test.
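This pipelining can be sketched as a tiny scheduling helper. This is an illustrative Python sketch only; the 1-based numbering and phase names are my assumptions, not part of the process definition:

```python
# Sketch of the AgileSOA time-box pipeline: in any given time box,
# each subteam works on a different iteration, one step apart.
PHASES = ["requirements", "design", "implementation", "test"]

def time_box_for(iteration: int, phase: str) -> int:
    """Time box in which the given phase of an iteration is worked
    (iterations and time boxes are assumed 1-based)."""
    return iteration + PHASES.index(phase)

def team_schedule(time_box: int) -> dict:
    """Which iteration each subteam works on in a given time box."""
    schedule = {}
    for offset, phase in enumerate(PHASES):
        iteration = time_box - offset
        if iteration >= 1:
            schedule[phase] = iteration
    return schedule
```

So in time box 4 the requirements team is on iteration 4 while the test team is still finishing iteration 1.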

This organizes the work done as part of each of the project iterations. Note that not all the work is organized this way. As an example, you may have business modellers producing models that provide a useful business context for the solution in terms of process models and information models. This would be done in advance of your iterations. Also, you will have people responsible for setting up and executing the performance and acceptance tests of the solution. Most of this happens after your iterations have executed because they need to test the entire solution (all increments) that will be released to the end users.

Structuring Requirements for Iterative Delivery

Requirements are crucial to our development process as they are used not only to scope out the work that the overall project life cycle needs to deliver but also to divide this work into iterations.

We use two mechanisms for scoping requirements for our SOA projects:

  • First, lightweight lists of needs/features are used to quickly produce a view of what is in scope for the project life cycle and, just as importantly, what is out of scope.
  • Second, these features are traced to use cases, which become the primary means for structuring the project requirement specifications and scoping the contents of each iteration.

Use cases contain flows

So why do I think use cases make such ideal units for our requirements specifications?

Let’s look at two useful properties:

  • They are carefully factored nonoverlapping partitions of system functionality. Each scenario that the system should support will end up in a single use case. This means that by choosing the use case as the primary unit for grouping our system behavioural specifications, we avoid overlap in coverage between specifications.
  • Use cases can be further divided into use case flows. Each use case has a basic flow, which describes the simplest scenario. Scenarios that deviate from this basic flow are covered by describing their deviations as alternative flows (alternatives to the basic flow).

These two properties make use cases the ideal format for our requirements specifications. First, the fact that use cases don’t overlap in their coverage of the system scenarios means that they make good units for planning solution increments. You know that by allocating the full set of in-scope use cases to iterations you will have covered all the required functionality without any duplicated effort across iterations. Second, the fact that we can break up use cases into smaller parts means that we reduce the problem of having “fat” specifications that don’t fit into a single iteration. It is possible to assign a use case’s basic flow to an iteration, and then assign groups of its alternative flows to later iterations.

Use case plays planning example

I define a use case play as follows: each iteration that a use case is assigned to contains a play of that use case. That is, a use case play is one or more flows of a use case assigned to an iteration for delivery. For planning purposes, it is normally a good idea to split a use case up into multiple use case plays to manage the risk associated with delivering the use case. For an idea as to what this looks like in practice, the figure shows a set of use case plays from an example iteration planning work product. This example shows seven use case plays across two iterations. To understand how this works, we note the following:

  • Iteration 1 contains two use case plays–the first contains both the basic flow and an alternative flow (order information is incomplete) of the Capture client order campaign use case, while the second contains just the basic flow of the Request carrier certification test use case.
  • Request carrier certification test has two use case plays–the first is assigned to iteration 1 and contains just the basic flow; the second is assigned to iteration 2 and contains an alternative flow (issues with certification).
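The play breakdown above can be sketched as a small data structure. This is a Python sketch for illustration; the class and helper names are hypothetical, while the use case and flow names mirror the example in the text:

```python
# Sketch: a use case play is one or more flows of a use case
# assigned to a single iteration for delivery.
from dataclasses import dataclass

@dataclass
class UseCasePlay:
    use_case: str
    flows: list       # e.g. ["basic"] or ["basic", "order information is incomplete"]
    iteration: int

plays = [
    UseCasePlay("Capture client order campaign",
                ["basic", "order information is incomplete"], 1),
    UseCasePlay("Request carrier certification test", ["basic"], 1),
    UseCasePlay("Request carrier certification test",
                ["issues with certification"], 2),
]

def plays_in_iteration(plays, iteration):
    """All plays assigned to a given iteration."""
    return [p for p in plays if p.iteration == iteration]
```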

Specifications That Are Rich but Get to the Point

One of the most popular aspects of agile methods is the disdain with which they treat lengthy, verbose specification documents. And quite rightly so! Vague, ambiguous specifications defeat the purpose of putting together specifications. Worse still is when the specifications attempt to make up for quality with quantity!

Specifications need to be concise, clear, and unambiguous for them to have value. But they must also be easy to change–and this would suggest that they are no more “wordy” than they need to be. In the AgileSOA process, the focus is on a key set of work products that attempt to achieve clarity and lack of ambiguity while being as concise as possible. This is achieved by making the semantics of the specifications as rich as possible so that much can be said with little effort.

For SOA solutions, the key thing that we need to specify is the individual service contracts that are consumed by the various service consumers, and provided by the service providers. The figure shows a set of work products that contribute to our goal of creating quality service contracts:

  • The domain model is probably the simplest and yet most widely used of these work products. It provides a structured view of the business information in the business domain. This is important as our service contracts will need to pass information around, and it is crucial to understand how this information is structured for you to have a good-quality set of service contracts.
  • The domain model is coupled with the business process model, which describes the flow of the business processes. This is important as the services that you produce are internal aspects of a solution that needs to support a business. Any flaws in the understanding of the business and its processes can have a knock-on impact on the solution requirements, which in turn will have a knock-on impact on the service contracts.
  • Having taken a look at the in-scope features for the SOA solution, a set of use cases is created in the use case model (i.e., System Use Cases), which provides a factored view of the requirements expressed by these features. The use cases should be cross-referenced against both the business domain types in the domain model, so it is clear what business information they will act upon, and the tasks in the business process model, so it is clear what business tasks they will automate. Use case specifications provide us with descriptions of what the solution needs to do, and the service contracts will be the focus points of the collaborations that provide this behavior.
  • The external systems model contains specifications of any systems that are external to your SOA that your SOA solution needs to interface with. The use cases in the use case model should identify which use cases these systems are involved in; that is, the system actors (as opposed to human actors) in your use case model should have corresponding external system specifications in the external systems model. Certain service contracts in our solution will have their behavior provided by service providers that integrate with these external systems. Therefore, an understanding of these external systems is crucial to ensure that the service contracts for these service providers are suitable for their integration.
  • The service model is where the design for your service-oriented solution is captured and is ultimately where the service contracts live. The service contracts are specified using service specifications, which define the interface points between service consumers and service providers. These are organized into service-oriented systems. The behavior of these systems is described using service interaction specifications, which are organized into service collaborations that match the use cases one-to-one.
    Key AgileSOA specification work products

    Requirements and design specifications

    Service Interactions

Let’s briefly consider these work products in the context of the subteams working in our iterations:

  • The requirements team writes a use case specification describing the steps in each of the flows that are a part of that use case play.
  • The design team writes a service interaction spec that provides the behavior described by the matching use case specification.
  • The implementation team creates and tests an implementation of the requirements and design specifications(i.e., of the use case specification and the matching service interaction specification).
  • The test team tests the solution implementation to ensure that it matches the specifications.

Specifications Structured for Change

Not only must the specifications be concise in order to make them easy to change, but they must also be structured to make changes easier. In order to do this, for any given change we have to achieve the following:

  • make it easy to quickly identify the various specification sections that need to change,
  • make it easy to change those and only those sections, and
  • avoid having to make the same change in multiple places.

Some examples of this structuring follow:

  • Each use case specification is clearly divided into its basic flow and alternative flows. Once it is known which flows are affected by a change, it is clear exactly where in the document these changes should be made. Use case flows can be inserted by reference instead of by copy-and-paste.
  • The interactions in the service interaction specifications match these flows exactly–there is a separate service interaction diagram for each use case flow. This means that, for any given use case specification change, it is easy to see exactly which matching parts of the service interaction specification need to change. As with use case flows, service interactions are inserted by reference instead of copying and pasting.
  • Each service consumer and service provider lives in its own package, and these are separate from the service-oriented solutions that use these. This means that the consumers, providers, and systems can all be modified and pop in and out of existence individually. Also, service specifications are used by reference and therefore changes are automatically reflected in the service interactions they appear in.
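The insert-by-reference idea behind the last two points can be illustrated with a minimal sketch. This is a Python analogy with hypothetical identifiers, not the actual modeling tooling: flow text lives in one place, and both the use case specification and the matching service interaction hold references to it, so a change never has to be made twice:

```python
# Sketch: flows stored once, referenced by id from both the use case
# specification and the service interaction specification.
flows = {
    "UC17.basic": "Operator captures the client order",
}

use_case_spec = {"UC17": ["UC17.basic"]}        # references, not copies
service_interaction = {"UC17": ["UC17.basic"]}  # one interaction per flow

# A single edit to the shared flow text...
flows["UC17.basic"] = "Operator captures and validates the client order"

# ...is automatically reflected wherever the flow is referenced.
updated = [flows[ref] for ref in use_case_spec["UC17"]]
```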

Workflow Tooling to Help Track What’s Going On

Most of our projects consist of hundreds of individual use case flows. According to the process, there are four high-level pieces of work to do for each of these use case flows–requirements, design, implementation, and test.

That means that, for a 200–use case flow project, there are at least 800 individual pieces of work that need to be done. As there are sequencing dependencies between these 800 individual work items (requirements needs to be done before design, design before implementation, and implementation before test), it’s crucial for a project manager to know what the status is of each of these work items so that any bottlenecks or other planning problems can be spotted. We use workflow tooling to help out here. This has two main benefits:

  • It’s easier to track progress for the individual, the subteam, and the project team as a whole.
  • It’s easier for each person to know what work is currently on their plate and what is coming down the line.
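The work-item arithmetic above can be sketched as follows. This is an illustrative Python snippet with invented identifiers, not a description of any particular workflow tool:

```python
# Sketch: generate the per-flow work items and their sequencing
# dependencies (requirements -> design -> implementation -> test).
PHASES = ["requirements", "design", "implementation", "test"]

def work_items(flow_ids):
    items, deps = [], []
    for flow in flow_ids:
        for i, phase in enumerate(PHASES):
            items.append((flow, phase))
            if i > 0:
                # each phase depends on the previous phase of the same flow
                deps.append(((flow, PHASES[i - 1]), (flow, phase)))
    return items, deps

# 200 use case flows -> 200 x 4 = 800 work items to track
items, deps = work_items([f"UC{n}" for n in range(1, 201)])
```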

Handle Change While Avoiding Unnecessary Change

Being agile is about being able to handle change well when required. However, we still want to avoid unnecessary change as much as is possible as it causes unnecessary expense. Two types of change exist:

  • extension–it already does X and Y; now we need to also make it do Z.
  • modification–the way it does X is incorrect; we need to change it.

Now the first kind of change is unavoidable in our iterative development world. And this isn’t a bad thing. Each iteration will be adding new functionality to what was produced by the previous iteration. However, we want to avoid as much of the second type of change as possible. I’ll subdivide this modification type of change into

  • modifications to the requirements –changing the way the solution behaves, and
  • modifications to the design –not necessarily changing the way the solution behaves, but rather changing the shape of the implementation.

We avoid unnecessary modifications to the requirements as follows:

  • Create a shallow but complete view of the requirements early on, and refine that later during the iterations. This means that all requirements have at least been considered before worrying about detailing any of the requirements.
  • Cross-reference requirements against the business domain model and business process models. This is a useful exercise to pick up issues.
  • Tackle difficult use cases in early iterations. These are the use cases that are most likely to have a knock-on effect on other use cases once you get into the detail.
  • Showcase increments to end users as soon as you have working increments. Anything learned from these showcases can save time for those use case plays that haven’t been assigned to the requirements team yet.

We avoid unnecessary modifications to the design as follows:

  • Create a shallow but complete view of the design early on and refine during the iterations. This means the overall architecture can be assessed before any of it gets refined.
  • Build prototypes early on to validate the design.
  • Design focus should follow requirements focus. Focus your design work on those parts of the solution where the requirements are most solid.

Risk Mitigation on Integration Projects

With any software development project there are the risks that the solution might not do what the end users want, that it might not do it in a way that suits their pattern of working, or that the technology that you’re using to build the solution doesn’t really suit the solution. But integration projects bring along a whole new bag of risks.

To provide a few examples:

  • What if the mainframe application experts we’ve been provided don’t understand their own applications as well as they tell us they do?
  • What if the route planning software that we’ve bought doesn’t actually work the way that the Application Programming Interface (API) says it does?
  • What if the interface specifications that we have for our accounting package aren’t up to date or complete?

These are the kinds of risks that, if they materialize, can bring an integration project to its knees.

For our projects we try to achieve two things:

  • Let’s make sure that we have a clear description of what the system does.
  • Let’s make sure that what it actually does matches this description.

For the first of these we use the previously mentioned external systems model. In this we capture a concrete view of the interfaces that we need to interact with and the information that the system holds. This will ensure that the way that we plan on integrating with the solution will work and also that it can be described to the developers.

Second, before we try to design against these interfaces, we write tests that verify that the system works the way we think it works. It doesn’t help to put in design effort against an interface specification that is incorrect. A large amount of the risk is taken out of integration projects by creating interface verification tests early on in the project.
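A minimal sketch of such an interface verification test, using Python’s unittest: the route-planner client and its documented contract here are hypothetical stand-ins for a real vendor API, so the point is the shape of the test, not the specific calls:

```python
# Sketch of an interface verification test: before designing against an
# external system's documented interface, verify it actually behaves
# as its specification claims.
import unittest

class FakeRoutePlanner:
    """Hypothetical stand-in for the real external system client;
    in a real project this would invoke the vendor's API."""
    def plan_route(self, origin, destination):
        return {"status": "OK", "stops": [origin, destination]}

class RoutePlannerInterfaceTest(unittest.TestCase):
    def test_plan_route_matches_documented_contract(self):
        result = FakeRoutePlanner().plan_route("DEPOT", "CLIENT-42")
        # The (assumed) spec: a successful plan returns status OK and a
        # stop list starting at the origin and ending at the destination.
        self.assertEqual(result["status"], "OK")
        self.assertEqual(result["stops"][0], "DEPOT")
        self.assertEqual(result["stops"][-1], "CLIENT-42")
```

Running such tests against the real system early flushes out stale or incomplete interface specifications before any design effort is spent on them.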

Usage of SOA Design Patterns to Ensure Extensibility and Flexibility

The success of your SOA solutions over time will be measured by how well these solutions handle change. How easy is it to add new functionality to the solution? What impact does this have on other solutions? Does change cause a reduction in the quality of service offered by your solutions? Does it compromise their behavior?

The key to ensuring this success is good design. And one of the keys to good design is to have a good set of design patterns that are consistently used across your solutions.

An example list of these patterns is provided in the figure.

Estimation and Planning

To make our iteration planning effective, we ideally need an estimate for each of the work items on the project. By our previous calculation, a project of 200 use case flows will have an estimated 800 work items. We need a way to quickly get useful estimates for each of these 800 work items! The procedure we follow is a simple one:

  • For each use case flow, the team places it into one of the categories of complexity shown in the complexity ratings table. The factor to the right is used to adjust estimates. So a Complex is seen to be twice as much work as a Medium Complexity, a Simple is half as much work, and so on.
  • We then have some basic estimates (in ideal days) for the work of each of the subteams in order to deliver a Medium Complexity use case flow.
  • The result is a set of estimates for each of our work items.
    Example of SOA design patterns


  • The use case flows are each assigned to an iteration. In doing this we can assign each of the work items to a time box. So if UC17.1 is assigned to iteration 1, then the requirements work for it is assigned to time box 1, the design work to time box 2, the implementation to time box 3, and the test to time box 4. Similarly, if UC18.1 is assigned to iteration 3, then the requirements work will be in time box 3, the design in time box 4, the implementation in time box 5, and the test in time box 6.
  • This allows us to quickly sum up how much effort is required in each time box for each team and, therefore, how many resources are required in each team. Using this simple mechanism, the planning factors of number of resources, number of time boxes, and amount of in-scope work can be adjusted until the best plan is found.
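The estimation arithmetic above can be sketched as follows. The 0.5/1.0/2.0 factors come from the text’s Simple/Medium/Complex ratios, but the per-subteam baseline figures here are invented for illustration, not values from the actual complexity table:

```python
# Sketch of the AgileSOA estimation arithmetic: per-subteam baseline
# estimates (ideal days) for a Medium Complexity flow, scaled by a
# complexity factor per flow.
FACTORS = {"Simple": 0.5, "Medium": 1.0, "Complex": 2.0}

BASELINE = {  # hypothetical ideal days per subteam for a Medium flow
    "requirements": 2.0,
    "design": 3.0,
    "implementation": 5.0,
    "test": 3.0,
}

def estimate(complexity):
    """Per-phase estimates (ideal days) for one use case flow."""
    factor = FACTORS[complexity]
    return {phase: days * factor for phase, days in BASELINE.items()}
```

Summing these per-phase estimates over the flows assigned to each time box gives the effort figure the planning step needs.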

Complexity ratings

Estimates for iteration work items

