
Agile vs Waterfall: Estimating, Planning and Tracking

This is the final part in a short series of posts comparing agile and waterfall development: how and why iterative agile development can deliver better results than a waterfall project, and how it gives control of the project, and more importantly of the budget, back to customers.


In any project, whether it is a waterfall or an agile iterative one, estimates are very important. They provide the basis for the project budget and scope and are what progress is tracked against.

In a waterfall project I would argue that estimates are significantly more important, since they drive decisions about scope at the start of the project, and problems with the estimates can lead to overruns across the whole project.

It's less critical to get the estimates right in an iterative project, because if they are wrong then it means that the customer gets less than they wanted for the budget they have agreed – not ideal, but not disastrous.

Research suggests that we're not very good at estimating.

Magne Jørgensen and Stein Grimstad of the Simula Research Laboratory in Norway published a study in 2006 on how poorly we estimate depending on the information provided in the estimation session (presented in Munich in 2008).

The findings are interesting:

Estimation Failure: Specification Size

Two estimating groups were given the same specification: for the first group it was presented on a single page, and for the second it was spread over seven pages. The second group came up with an estimate almost half again as large as the first group's.

Estimation Failure: Irrelevant Information

Two estimating groups were again given the same specification, but the second group were also given additional irrelevant information – such as what other software packages were installed on the end users' desktops. The second group came up with an estimate almost twice as large as the first group's.

Estimation Failure: Anchoring

In this case, three groups estimated the same specification. The first was the control, estimating solely on the provided specification; the second group were told that the customer, while having no understanding of software development or the effort involved, thought that the project was about 50 hours in size; and the final group were told the same as the second except that the customer's estimate was 1000 hours.

The control group estimated 456 hours. The second group, "anchored" to 50 hours, estimated 99 hours: a reduction of almost 80%. The final group, anchored to 1000 hours, estimated 555 hours: an increase of around 20%.

These experiments show that we can't rely on absolute estimates, and that we need to be careful about what information we provide to the people doing the estimating.

What can we do instead?

Relative Estimates

So we're bad at absolute estimates. It turns out, however, that we're pretty reasonable at relative estimates. Looking at the two glasses of beer below, we might have difficulty saying exactly how many millilitres are in each glass, but we can confidently say that the second glass holds about half as much beer as the first.

Beer Glass Comparison

It seems that this also applies to estimating the size of a requirement. We might be bad at estimating how many hours a particular requirement will take, but we are pretty good at saying that, for example, requirement A will take about twice as long as requirement B.
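The usefulness of relative estimates can be sketched in a few lines: once one requirement has a known actual, the relative sizes scale into absolute figures for all the others. The requirement names and numbers below are purely illustrative.

```python
# Relative estimates expressed as multiples of a baseline requirement (B).
relative_size = {
    "A": 2.0,  # A felt about twice as big as B
    "B": 1.0,  # baseline
    "C": 0.5,  # C felt about half the size of B
}

def absolute_estimates(measured_req, measured_hours, sizes):
    """Scale every relative size by one measured actual."""
    unit = measured_hours / sizes[measured_req]
    return {req: size * unit for req, size in sizes.items()}

# If B actually took 12 hours, the relative sizes imply A ~24 and C ~6.
print(absolute_estimates("B", 12, relative_size))
```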

Planning and Tracking

Let's now consider how estimates are used and how we plan the project and track progress.

In a traditional waterfall project, the development team will come up with a set of estimates for all the tasks that they anticipate doing and the project manager will create a project plan for these with all the dependencies mapped out and team members assigned.

After a lot of work the project manager will get an anticipated delivery date.

Unfortunately, this is where the problems start.

When a team comes up with their initial estimates, they know that they are probably not accurate, and they will typically give a range. For example: "This task will probably take about 10 hours to do, with a best case of 8 hours but a worst case of 16".

When the task is put into a project planning tool, however, the project manager usually has to pick a single number and will enter 10 hours. The resulting plan gives a false sense of accuracy, but that is typically the only way the tool works, so project managers put up with it.

As the project progresses, the project manager keeps track of each task, how long it takes, and plugs the actual effort into the plan. A good project manager might also update the estimates for all outstanding tasks based on how accurate the previous ones have been. Typically, though, this isn't done, because continually updating the estimates for outstanding tasks is a laborious exercise.

We've probably all seen how much time a project manager spends working on the project plan. Often, it seems like a full time job just to keep the plan in the project planning tool up to date.

In an agile project, we recognise that this false level of accuracy and detail provides little value and carries a big overhead. A project manager's time is better spent actually managing the project than wrestling with a planning tool.

Instead we plan and track at a much coarser level: the story level. We don't estimate in hours or days; we estimate using relative sizes, for example T-shirt sizes: stories are estimated as small, medium or large (and maybe extra small and extra large).

We then do detailed estimation of some typical small, medium and large stories to get an initial calibration for how long we believe a story of each size will take – including a best-to-worst-case range.

To get the expected delivery date we do some simple maths based on the team size, the set of stories required, their sizes, and the initial calibration of the estimates.

Instead of a detailed project plan we use a simple 'burn-up' chart. It plots the amount of functionality required to deliver the project's scope against the time to deliver it. On the chart we show a cone that indicates when we will deliver the functionality at the best, expected and worst-case velocities.

Planning Burn Up Chart

The velocity is a measure of how fast the team expect to deliver the stories.

Now, as the project progresses, we can track each story as it is signed off by the customer, and use actual metrics of our progress to update the projected velocity for the rest of the project.

Planning Burn Up Chart Tracking Progress

Using this direct feedback mechanism we get a good estimate of how long the remaining functionality in the project will take.

Essentially, we track how long all the small stories we have done so far have actually taken and assume that all remaining small stories will take the same amount of time, similarly with medium and large stories. Some stories will take less time, some more, but the key point is that we're using actual data to extrapolate when we'll deliver.
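That extrapolation can be sketched in a few lines: average the real time taken per completed story of each size, and assume the remaining stories of that size take the same. The story data here is made up for illustration.

```python
# Actual hours for signed-off stories, grouped by T-shirt size.
completed = {
    "S": [5, 7, 6],
    "M": [18, 14],
    "L": [44],
}

# Stories still to be delivered, by size.
remaining = {"S": 7, "M": 6, "L": 3}

def projected_hours_remaining(done, todo):
    """Extrapolate remaining effort from per-size averages of actuals."""
    return sum(
        todo[size] * (sum(hours) / len(hours))
        for size, hours in done.items()
    )

print(projected_hours_remaining(completed, remaining))
# -> 270.0
```

Dividing this figure by the team's weekly capacity gives the updated delivery date plotted on the burn-up chart.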

Instead of spending their time updating a project management tool, the project manager can use these simple metrics to keep track of the project's progress. Additionally, the customer can easily see how the project is progressing and make decisions about budget or scope more easily.

If the achieved velocity decreases, then the anticipated delivery date will move further out and the customer can decide whether the required scope still fits within their budget and whether to increase it or reduce scope.

In addition to the target line (representing the complete scope of the project), a line for the minimum viable product can also be drawn, allowing everyone to see when the team will be able to deliver that. If the customer has a stretch target of 'nice to have' functionality, then that too can be shown on the chart.

Finally, scope changes are shown by adjusting the target lines up or down, giving a clear visualisation both of how the project is progressing and of how its goals have changed over time.

The burn-up chart is a simple yet powerful visual tool, and most importantly it doesn't take a full-time job to keep up to date.


In this series of posts I have explained some of the differences between waterfall and iterative agile projects.

Obviously not all waterfall projects fail and not all agile projects succeed. However, my experience strongly suggests that using an incremental, iterative and agile approach will give your project a better chance at success.

An agile project aims to:

  1. give control back to the customer.
  2. move project risks from the end of the project to the beginning, allowing customers to make the right decisions at the right time and make the most efficient use of their budgets.

Customers get:

  1. to decide what gets delivered and when
  2. immediate feedback on quality
  3. working software early and often
  4. to see the progress that is being made
  5. the information that lets them understand their budget, scope and quality at their fingertips