Agile vs Waterfall: Development and Quality Assurance

This is the third in a short series of posts comparing agile and waterfall development: how and why iterative agile development can deliver better results than a waterfall project, and how it gives customers back control of the project and, more importantly, of the budget.

As covered in the previous installment, waterfall projects capture requirements up front in a requirements phase and then hand those fully documented requirements to the development team, who build the software to meet the specification. At the end of the build phase the resulting software is tested, and once the quality assurance team sign it off it can be delivered to the customer.

Unfortunately, the customer doesn't get any value from the project until it's delivered and implemented.

Problems with the Waterfall Approach

The classic problem with the waterfall approach to development and testing is that you don't know how long it will take until the project is complete.

This is highlighted by Tom Cargill's Ninety-ninety rule:

"The first 90% of the code accounts for the first 90% of the development time. The remaining 10% of the code accounts for the other 90% of the development time"

The team start with an estimate and a plan that gives their anticipated delivery date, but once the project is under way, how do you know how much more work there is still to do?

You can get some measure by seeing how quickly the team complete each component against its original estimate. However, at this stage the software typically hasn't been tested, so we do not know how many defects are present or how long it will take to rectify them.
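
As a rough illustration only – the component names and figures below are invented, not taken from any real project – a minimal sketch of that kind of progress tracking might look like this:

```python
# Hypothetical sketch: tracking waterfall build progress against estimates.
# All component names and figures are invented for illustration.

components = [
    # (name, estimated days, actual days spent, complete?)
    ("login",     5,  8, True),
    ("reporting", 10, 14, True),
    ("billing",   8,  3, False),
    ("export",    6,  0, False),
]

estimated_total = sum(est for _, est, _, _ in components)
done_estimated  = sum(est for _, est, _, done in components if done)
done_actual     = sum(act for _, _, act, done in components if done)

# How long completed work really took, relative to its estimate.
slippage = done_actual / done_estimated

remaining = estimated_total - done_estimated
print(f"Remaining work (original estimate): {remaining} days")
print(f"Remaining work (adjusted for slippage): {remaining * slippage:.1f} days")

# Crucially, none of this says anything about quality: the completed
# components have not been tested, so the defect count, and the time
# needed to fix those defects, is still unknown.
```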

Another typical problem is that if the development estimates turn out to be wrong and the build phase begins to slip, the quality assurance period is often squeezed rather than the delivery date being pushed back.

When testing actually starts, the testers may find that they don't have enough time to test everything and that the defect count is high – especially if the development team were under pressure to finish as close to their original completion date as possible. Eventually, as the delivery date approaches (or, worse, arrives), the team have to admit that the software is not ready.

Waterfall Project Plan Overrun

This is shown in the diagram as the first red delivery milestone.

The software is returned to the development team to fix the defects, and another testing phase is kicked off, leading to another delivery milestone.

This continues with failed delivery after failed delivery, with stress and pressure building up, until eventually, hopefully, the product is good enough to release – or the customer just has to accept the level of bugs in the system or write off the whole project: the final green milestone.

The issue here is that even though you can track progress through the build phase by seeing how quickly the team say they have completed each component, until the software has been tested you do not know what the level of quality is, or how long it will take to produce an acceptable, completed system.

Even if the development team hit their completion deadline, if it turns out that the quality is not good enough, you will have to spend time rectifying the problems. Only if the quality is acceptable do you have a chance to deliver on time.

Of course not every waterfall project is like this, but with this 'big bang' approach of developing everything first and testing it afterwards, you are loading all of your risk onto the end of the project and leaving yourself little room for mitigation.

Iterative development aims to solve these problems.

Benefits of Agile Iterative Development

Each iteration delivers working software that has been fully tested. The development team deliver their set of stories within the iteration and each story is tested as it is completed. A story is not complete until it passes all of its tests and is accepted by the customer.
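
As a minimal sketch of that definition of "done" – the field names below are my own illustration, not taken from the post or any particular tool:

```python
# Hypothetical sketch of a story's definition of "done" in an iteration.
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    tests_passed: bool = False
    customer_accepted: bool = False

    def is_done(self) -> bool:
        # A story only counts as delivered once it has passed all of its
        # tests and the customer has accepted it.
        return self.tests_passed and self.customer_accepted

iteration = [
    Story("User can reset password", tests_passed=True, customer_accepted=True),
    Story("Export report to CSV", tests_passed=True, customer_accepted=False),
]

print([s.title for s in iteration if s.is_done()])
# Only the fully tested and accepted story counts towards the delivery.
```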

Iterative Project Plan

The diagram shows that we're building, testing and delivering in each iteration. While the early deliveries may not provide enough functionality for the customer to have a viable product, they do produce working software that the customer can look at to verify that they are happy with it and that there haven't been any misunderstandings.

This approach means that you get a good handle on both estimates and quality early in the project. You get actual metrics for how many defects are found in each iteration, and actual metrics for how well the estimates for both development and testing stack up.
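
A minimal sketch of those per-iteration metrics – the numbers are invented purely for illustration:

```python
# Hypothetical per-iteration metrics; all figures are invented.
iterations = [
    # planned vs delivered story points, plus defects found that iteration
    {"planned": 20, "delivered": 16, "defects": 4},
    {"planned": 18, "delivered": 17, "defects": 2},
    {"planned": 18, "delivered": 18, "defects": 1},
]

for number, it in enumerate(iterations, start=1):
    accuracy = it["delivered"] / it["planned"]
    print(f"Iteration {number}: estimate accuracy {accuracy:.0%}, "
          f"defects found {it['defects']}")

# Because these figures arrive every iteration, a rising defect count or a
# consistently poor estimate accuracy can be acted on straight away, rather
# than being discovered at the end of a single long test phase.
```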

If it turns out that the number of defects is too high then you can take remedial action straight away.

We aim to fix any defects in existing functionality before we start development on new functionality. This is known as a "zero defect" policy: fixing defects is more important than new functionality, so those fixes are always prioritised first. It's not that we don't expect any defects; it's that we always want to fix them promptly so that we have zero outstanding defects.

The consequence of this is that we always know the level of quality and we concentrate on getting the highest-priority functionality right. If our estimates are wrong and the project is taking longer to deliver than originally anticipated, we have the option of dropping lower-value requirements from later iterations so that we can concentrate on delivering the requirements with the highest value.
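
A minimal sketch of that prioritisation, assuming a simple relative "value" score per story – the scoring and the backlog items are my own illustration, not part of the original approach:

```python
# Hypothetical sketch: planning the next iteration under a zero-defect
# policy, then by business value. All backlog items are invented.
defects = ["Fix rounding error in invoices", "Fix broken password reset"]
stories = [
    ("Export report to CSV", 8),      # (title, relative business value)
    ("Customisable dashboard", 3),
    ("Bulk user import", 5),
]

capacity = 4  # items the team can take on this iteration (illustrative)

# Defects in existing functionality come first, then stories by value.
backlog = defects + [title for title, _ in
                     sorted(stories, key=lambda s: s[1], reverse=True)]

planned = backlog[:capacity]
deferred = backlog[capacity:]

print("Planned: ", planned)   # defects, then the highest-value stories
print("Deferred:", deferred)  # the lowest-value work slips, not the quality
```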

This means that the customer is always in control of the scope and budget, and gets a high-quality result.

Summary: Waterfall vs Agile Iterative Approach

In waterfall, because it's an all-or-nothing approach, there are few metrics to show how much longer a project will need in order to deliver an acceptable system.

Waterfall Project Plan Missed Deadline

At this point in a waterfall project, when you've failed to hit the original delivery milestone and have expended all of the original budget, you still don't know how much more the project will cost in order to achieve a working system. The customer has to make a judgement call. Will another few weeks of development result in a successful project, or will they have to spend more after that? At some point they may have to decide that a substandard system is better than nothing, or, worse, that the whole project is a failure with no return on the investment at all.

In agile, because there is working software delivered at each iteration, the customer can see the progress that is being made and at what rate.

Iterative Project Plan Iteration 6

At this point in an iterative project, if the team have reached the originally anticipated completion date but the estimates were wrong and they've only delivered a subset of the overall scope, then the customer must also make a judgement call. However, it's not all or nothing in this case.

They've received fully tested, high-quality working software at each iteration, so they know that what has been delivered up to that point can be used and has value. If there is outstanding functionality that the customer still needs, the iterative project's delivery track record will give them confidence should they decide to invest further to implement it.

In the final installment, I'll look at estimation, planning and tracking.