Practical ways to measure and track the progress of Agile projects
Ensuring that key data is visible in an Agile environment
Dave Browett – March 2013
• These slides are a summary of experience collected over many years – more recently at Micro Focus
• I’ve been managing Agile projects in various capacities since approximately 2003
• This is not an Agile “primer” – I’m assuming that you all have a basic knowledge of Agile
• Some of this is not for the Agile purist – I have included an appropriate warning…
I am a Project Manager!
Scrum is good for team communications
But what about inter-team communications?
What about communications with key stakeholders?
• Should we be attempting to standardize these reports/communications?
   – Self-organized teams vs consistent metrics across teams
• Providing senior management/stakeholders with high-level reports that allow
   – Progress against a release to be easily understood
   – Any “at risk” items to be flagged as early as possible
   – Any key dependencies/issues to be raised as early as possible
Challenges
• Should we standardise?
   – Communication of issues, dependencies etc
   – Metrics
      • Iteration length
      • Velocity
• Self-organized teams vs consistent metrics across teams
• Typically, where there is more than one Scrum team, any issues between them can be raised and resolved at a “Scrum of scrums”.
• How frequently these need to be held will differ from project to project – the key thing is to provide this information in a timely fashion so that it minimises the impact on other teams’ iterations.
Challenge - Providing senior management/stakeholders with high-level reports
• Progress against a release to be easily understood
   – Clearly defined payload
   – Ability to calculate how much of the payload can be expected to be achieved based on historical/average velocity
• Any key “at risk” items flagged as early as possible
• Any key dependencies/issues raised as early as possible
• “Our velocity is 40”
• “We’ve done 280 story points”
• “We’ve done 7 out of 9 iterations”
• “We’ve spent 4000 man-hours”
• Only when we know the TOTAL MUST-HAVE payload can we use the above information to report how we’re doing and predict what to expect...
Where are we?
[Burn-up chart “Where are we?”: Scope in Story Points vs Time in iterations; 280 of 987 story points completed at Velocity = 40]
[Burn-up chart: with 360 story points projected as achievable, a Must Have Payload of 300 – Looking Good!]
[Burn-up chart: a Must Have Payload of 200 – Easy!]
[Burn-up chart: a Must Have Payload of 400 – Challenged!]
Payload Calculation - predictable delivery
[Chart: best case (Velocity = v2) and worst case (Velocity = v1) projections bound a zone within which delivery of the payload, from MMF - y to MMF + y, can be predicted]
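The burn-up arithmetic behind these charts can be sketched in Python. This is a minimal illustration using the figures from the slides (velocity 40 over 9 iterations, so 360 points achievable); the 0.75 “comfortable margin” threshold and the function names are illustrative assumptions, not from the deck:

```python
import math

def payload_status(must_have_payload, velocity, total_iterations):
    """Classify a release the way the burn-up charts do.

    The 0.75 'comfortable margin' used for 'Easy!' is an illustrative
    assumption, not a figure from the slides.
    """
    achievable = velocity * total_iterations  # e.g. 40 sp x 9 iterations = 360
    if must_have_payload <= 0.75 * achievable:
        return "Easy!"
    if must_have_payload <= achievable:
        return "Looking Good!"
    return "Challenged!"

def delivery_range(payload, worst_velocity, best_velocity):
    """Best/worst case iterations needed - the predictable delivery zone."""
    return (math.ceil(payload / best_velocity),
            math.ceil(payload / worst_velocity))

print(payload_status(300, 40, 9))   # Looking Good!
print(payload_status(200, 40, 9))   # Easy!
print(payload_status(400, 40, 9))   # Challenged!
print(delivery_range(300, 35, 45))  # (7, 9) iterations best/worst case
```

The key point carries over directly from the slides: neither calculation is possible until the TOTAL MUST-HAVE payload is known.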
Predictable Velocity
• Teams need to be able to deliver a predictable number of story points for each iteration
• Obviously this number may vary from iteration to iteration depending on sickness/holiday etc, but the key principle is that the team commit to, and deliver, a number of story points that is related to their performance in previous iterations.
• “See-sawing” velocity is a warning sign – it could mean that
   – The team are over-committing
   – The team are not estimating or looking ahead sufficiently
   – The team are not producing releasable software within the iteration – beware of “iceberg agile”...
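One rough way to spot see-sawing velocity is the coefficient of variation over recent iterations. A minimal sketch, assuming a 0.25 threshold (an illustrative figure, not from the slides):

```python
from statistics import mean, stdev

def velocity_is_seesawing(velocities, threshold=0.25):
    """Flag a team whose per-iteration velocity swings widely around its
    average (coefficient of variation above `threshold`) - a possible
    symptom of over-committing or "iceberg agile"."""
    if len(velocities) < 3:
        return False  # not enough history to judge
    return stdev(velocities) / mean(velocities) > threshold

print(velocity_is_seesawing([38, 41, 40, 39]))  # False - predictable
print(velocity_is_seesawing([20, 60, 15, 65]))  # True - see-sawing
```

A flag here is only a prompt to investigate; the underlying cause could be any of the three bullets above.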
• A key aspect of Agile is transparency.
• Every iteration is displayed, with each story and the tasks within it updated to show a picture that represents the state of the iteration as accurately and “up to the minute” as possible.
• This transparency will build trust at all levels –
   – The Scrum team can show progress both in terms of achieved velocity and demonstrable features
   – Stakeholders/managers can take confidence from a regular cadence that provides demonstrable features
• But if your team is doing “Iceberg Agile” this breaks down – not all the planned stories will be completed, and the reported velocity will be lower than expected or swing from low to high…
Beware of “Iceberg Agile”!
• The team may still believe they are on-track: the carried-over stories couldn’t be done in a single iteration, the work is still being done, and it will all come together one or two more iterations down the line...
• But the transparency has been lost –
   – the work done on these carried-over stories is hard to estimate
   – these stories can’t be demonstrated in the review before they have been finished!
• If your team is doing “Iceberg Agile” you will typically see only a part of what was planned being demonstrated in the review; a significant amount of work will be under the surface – difficult to assess in terms of progress and not able to be demoed.
• Carried-over stories should be the exception rather than the rule, and demoing stories should become a key consideration for assessing and accepting stories (in fact, if you are wondering whether you have one story or two, it’s good practice to think about how you’re going to demo the feature)
Beware of “Iceberg Agile”!
• Teams that do “Iceberg Agile” will typically
   – Carry over several stories as common practice
   – Be unable to demonstrate all planned stories
   – Have a velocity that see-saws as the credit for carried-over stories gets re-allocated one or two iterations down the line
• These teams will suffer from a lack of transparency, and consequently it is difficult to predict what they are capable of consistently achieving. Be aware, and try to avoid your team doing “Iceberg Agile”!
Beware of “Iceberg Agile”!
Velocity calculations across multiple teams – Agile Purist Warning!
• Strictly, you can’t simply “add up” story points across teams (because each team is likely to have different measures)
• Then again – surely it doesn’t make sense to have wildly different story-point measures across teams… (find an Agile purist near you and discuss!)
• So – perhaps the pragmatic solution is to ensure that story-point measures across teams are of the same order
• Assuming the principle above holds, these payload calculations can be used for an entire project across several teams – as a “high-level indication”.
Bear in mind also…
• Estimation is always an inexact science!
• Beware of false precision: “37.5sp remaining”
Possible actions for a challenged project
• Increase resource – although adding resource to a team is likely to *reduce* its velocity in the short term, and bringing in a new team is also likely to require ramp-up/familiarisation.
• We can increase velocity on new features by reducing velocity on other things…
   – Undertake a business review of “critical defects”
   – Temporary relaxation of Service Levels/SLAs
• Reduce payload – business review of Must Have features
Team workload categories
[Chart: total velocity split across four categories – New Features (V_new features), Maintenance (V_maintenance), Enhancements (V_enhancements), Technical Debt (V_tech debt)]
⇒ To maximise velocity on new features we need to reduce velocity on other areas
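This trade-off is simple arithmetic, sketched below. The category percentages are illustrative assumptions, not figures from the slides:

```python
def new_feature_velocity(total_velocity, maintenance, enhancements, tech_debt):
    """Velocity left for new features once the other workload
    categories have taken their (fractional) share of capacity."""
    return total_velocity * (1 - (maintenance + enhancements + tech_debt))

# With an average total velocity of 40 and half the capacity going to
# the other categories, only 20 sp per iteration build new features:
print(new_feature_velocity(40, 0.25, 0.125, 0.125))  # 20.0

# Halving maintenance diverts that capacity to new features:
print(new_feature_velocity(40, 0.125, 0.125, 0.125))  # 25.0
```

This is why a business review of critical defects or a temporary SLA relaxation (previous slide) translates directly into new-feature velocity.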
Managing Payload – a balanced release
[Chart: two limits are drawn on the scope axis – the maximum story points achievable based on average total velocity, and the lower maximum achievable taking maintenance into account. Payload below the lower limit is not threatened; payload between the limits is in the CAUTION region; payload above the upper limit is in the AT RISK region.]
Example 1 – Well balanced Release
[Chart: the estimated MUST HAVE payload sits within the achievable limits, with the estimated non MUST HAVE payload above it and only the remainder AT RISK]
Example 2 – Under committed Release (slack)
[Chart: the estimated MUST HAVE and non MUST HAVE payload together fall short of the achievable limits, leaving slack]
Example 3 – Over committed Release
[Chart: the estimated payload exceeds the achievable limits, leaving even MUST HAVE items AT RISK]
Example 4 – Release with non MUST HAVE at Risk, with additional Caution indicator
[Chart: the estimated MUST HAVE payload is achievable, but part of the non MUST HAVE payload falls into the CAUTION and AT RISK regions]
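The release-balancing bands can be sketched as a small classifier. A minimal sketch, assuming maintenance is expressed as a fraction of total capacity; the figures and function name are illustrative, not from the slides:

```python
def classify_payload(payload_points, avg_velocity, iterations, maintenance_fraction):
    """Band a payload the way the release-balancing charts do.

    upper_limit: maximum story points achievable from average total velocity.
    lower_limit: maximum achievable once maintenance is taken into account.
    """
    upper_limit = avg_velocity * iterations
    lower_limit = upper_limit * (1 - maintenance_fraction)
    if payload_points <= lower_limit:
        return "not threatened"
    if payload_points <= upper_limit:
        return "CAUTION"
    return "AT RISK"

# Velocity 40 over 9 iterations with 25% maintenance: limits 360 and 270.
print(classify_payload(250, 40, 9, 0.25))  # not threatened
print(classify_payload(300, 40, 9, 0.25))  # CAUTION
print(classify_payload(400, 40, 9, 0.25))  # AT RISK
```

Classifying the MUST HAVE and non MUST HAVE payloads separately reproduces the four example charts above.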
• Think about the data that your teams provide to other Scrum teams and stakeholders
   – Timely and accurate
   – Impact of impediments
   – Think about % maintenance as capacity which could be diverted to developing new features
• Understand the importance of teams committing to an iteration and having a measurable team velocity that allows forecasts to be made
• Clearly identifying MUST HAVE items makes a payload more realistic and achievable, with the stretch goal more clearly defined.
• Beware of the signs of “iceberg agile”
• Beware of false precision when providing estimates
• Try to classify your release using the release-balancing concept as early as you can, once the MUST HAVE payload is sufficiently defined
• Classic Project Management activities – such as calculating the critical path and understanding dependencies – are still needed in the Agile world!
Takeaways
Thank you – any questions?
My blog on WordPress - http://davebrowettagile.wordpress.com/