Have you ever estimated something exactly right? I have, and it's the greatest boast I as a developer can ever make.
If there is one thing developers are good at, it's massively over-estimating how quickly we can implement something. Ask any Product Owner, and they will cheerlessly list the myriad times a developer has let them down. Ask a Scrum Master when one of their scrum teams last completed the work they forecast, and they will laugh in your face (or at least they laughed in mine).
Under-estimation is so normalised that people in customer-facing roles have a standard formula for transforming your estimates into something they can tell the customer without losing trust. Ask them, and they'll tell you that they take your number and multiply it by three. That's how unrealistic your estimates are. That's how wildly detached from reality developers like me are. And even that number is often inaccurate.
Estimating is hard
I get it. Estimating is hard. Developers are often being asked to build things we've never built before, and that may never have been built before. We're asked to predict, some time in advance, how long it will take to make something we don't really know how to make yet, and haven't really thought about.
The trouble with programming (especially for the web) is the great uncertainty attached. How many times have you developed something using Chrome on Windows, only to find out afterwards that it doesn't work in Safari on iOS (or Chrome on iOS!)? How many times have you developed something locally, only to find that it doesn't work as expected in the Test Environment (or Production Environment)? How many times has a tester found a bug in your perfect, elegant solution which, when you fix it, turns it into a complex Frankenstein's monster?
And these are just the things that can go wrong when you know exactly what you're going to do! How many times has the scope increased while you're in the middle of developing a feature? How many times have you realised something you never thought about only after you've started to build the feature? How many times have you thought you knew what to do, only to find out that actually you can't do it, and it's a lot more difficult than you thought?
We're to blame as well
But developers can't just blame the complexities of the job. We also give a lot of frankly ridiculous estimates. I constantly hear that something will only take "two minutes" - nothing takes two minutes.
Two minutes isn't enough time for you to put the task into the correct column in JIRA, update your develop branch, and make and push the new branch. Your two minutes are up before you've even written a single line of code. You haven't even found the place you need to update, or observed the current behaviour. When you estimate "two minutes", you know that is not realistic. I get that you're trying to say it won't take long, but it's not an actual estimate and it tells me that you don't actually know how long it's really going to take. Add in all the stuff we've just discussed about the complexity of the job and do you still really think it's only going to take two minutes?
Even the slightly more realistic "half an hour" is a bad estimate.
Sure, things like changing the colour of something, or updating some text, can take half an hour. Half an hour is the minimum estimate you should make, and you should make it for the absolute minimum-effort tasks. The problem with estimating "half an hour" is that it's the same as estimating "two minutes" - you haven't actually thought about the task and what's involved. You've just shoved your finger in the air and felt the breeze. This is not an estimation method that will give you accurate estimates.
We need to accept that we're bad at estimating. We need to accept that the data on JIRA that shows us failing to complete a sprint time after time is a reflection of our own estimation abilities. The excuse you come up with in your retrospective each time for why you didn't deliver may be correct on the surface, but the fundamental problem is with how you estimate. Unless you realise that you're bad at estimating, you won't actually change anything.
For some people, accepting that they are bad at estimating isn't actually enough. For some people, being bad at estimating is just a sign that they should give up on estimating altogether. Unwillingness to improve doesn't solve the problem. Your product owner needs good estimates, your sales team needs good estimates, your support team needs good estimates, and ultimately your customers need good estimates. I cannot imagine any other aspect of work where, having discovered you're not good at it, you can just refuse to try and expect it to be OK.
Related to refusing to estimate is the belief that you will get better at estimating with time. On the surface, this appears to make sense, and certainly I believed it for a while - you get better through practice, so the more you practice estimating, the better you will get, right? Disappointingly, no. I have seen developers who not only have decades of experience programming, but decades working on the exact same program, who still give estimates that are way off. This is a great example of the difference between practice and deliberate practice. If you make your estimates by sniffing the air, then it doesn't matter how many times you've done it - you're still just sniffing the air; you won't improve.
Finally (I'll give us a break after this point, I swear), we have a tendency to forget or not include everything that goes into the task as part of our estimates. There is so much more to any feature than just implementing it. Personally, I am super guilty of this. I've forgotten about testing, accessibility, communication with stakeholders, refactoring code before the code review, refactoring code after the code review, necessary functional creep, unexpected problems, changes due to misalignment, documentation, and so much more!
Fixing your estimates
Let me tell you how my team got better at estimating. First, we accepted that we were bad at estimating. It wasn't actually difficult - the numbers on JIRA speak for themselves. If someone on your team isn't facing up to reality, show them the data from your own sprint management software. The numbers don't lie.
Having established how terrible we were, we made a long-term commitment to experimenting with estimation. In your retrospective, make getting better at estimating something that you want to improve. Know that this isn't a one-sprint commitment, and know that because it's an experiment, it's totally OK to fail, as long as the trajectory is mostly upwards. In fact, you probably will fail the first few times. Then you might do really well, and the next sprint really badly. But it's an experiment. You're finding out what works and what doesn't, and improving as a result. This is deliberate practice.
There are different ways of improving your estimates. The discussion above may have inspired some methods you'd like to try. One thing we did was to keep track of our estimates, keep track of actual time used, and make a spreadsheet where we could see exactly how much we'd mis-estimated by. This way, we had real, actual numbers which we could use to inform and refine future estimates. If we had a future task that was similar to a past one, then we would know to estimate at least the same amount as we used on the past task.
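The tracking spreadsheet doesn't need to be anything fancy. Here is a minimal sketch of the idea in Python; the task names and hours are invented for illustration, and the average multiplier is just a starting correction factor to sanity-check future estimates against.

```python
# Track estimated vs. actual time per task, then derive an average
# multiplier showing how far off our estimates tend to be.
# All task data below is made up for illustration.

tasks = [
    # (task, estimated_hours, actual_hours)
    ("Add login validation", 2.0, 5.0),
    ("Update footer text", 0.5, 0.5),
    ("Integrate payment API", 8.0, 19.0),
]

for name, estimated, actual in tasks:
    ratio = actual / estimated
    print(f"{name}: estimated {estimated}h, took {actual}h (x{ratio:.1f})")

# Average multiplier across past tasks: a rough correction factor
# to apply when a new task resembles these.
avg = sum(actual / estimated for _, estimated, actual in tasks) / len(tasks)
print(f"Average multiplier: x{avg:.2f}")
```

With data like this in hand, "this looks like the payment integration, which ran over by 2x" becomes an argument you can actually make in planning.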
Something that may be controversial is that we were not afraid to adjust estimates as we learnt more, even if we were in the middle of a sprint. If any team member thought at any time that a task was wildly underestimated, or that we would have trouble finishing the sprint work, then we would have a touch-down meeting, re-estimate the task, and decide what to do: stop the sprint, extend the sprint to compensate, or remove tasks from the sprint to compensate. Sometimes we would call a touch-down and work out that everything was fine, and that's fine too. It's better to have the touch-down and find out you're on track than not to have it and fail again to deliver.
So why have I described this as controversial? Some people think this is "cheating" or "gaming" the system. Some companies evaluate teams according to how much of their sprint they deliver, and bonuses and promotions are given out accordingly. Your team has made a commitment to be judged on delivering certain tasks in a certain time frame, so if you change the parameters, then you're "cheating" in order to "win" the bonus/raise/new title. The point of estimating is actually to give stakeholders a degree of certainty and clarity about when things will be ready, so that they can plan the things they need to plan, and manage customer expectations. Adjusting plans as early as possible, and improving the quality of the estimate is just good communication and builds trust. Your customers would much rather find out that something will be late as soon as possible, rather than find out the day before launch.
The final thing we did was add in extra time to account for the fact that we know our estimates will probably be wrong. This may be termed "uncertainty" or "contingency" or "slack", but whatever you call it, the point is to anticipate unexpected events, and the things you couldn't think of. Every sprint I've worked on, something unexpected has happened, and we've blamed failing to deliver on that unexpected thing. Even if it's true that losing two developers to a crisis is what made us fail, we can mitigate failing to deliver by having this extra time built in. The amount of extra time you estimate should be in proportion to the uncertainty involved in the task. The more certain you are, the less extra time you'll probably need; the less certain you are, the more time you'll need to add on. As mentioned above, this is about providing the best possible estimates to stakeholders, and including an estimate for unexpected events will result in a more accurate picture of how long something is actually going to take. If you don't need the extra time, then just deliver early and enjoy seeing the smile on your Product Owner's face for once.
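To make "extra time in proportion to uncertainty" concrete, here is a hypothetical sketch. The buffer percentages are invented for illustration; in practice you would calibrate them against your own tracked estimates.

```python
# A contingency buffer that grows with uncertainty.
# The percentages here are assumptions, not a recommendation:
# tune them from your own estimate-vs-actual history.

BUFFERS = {"low": 0.15, "medium": 0.40, "high": 1.00}

def padded_estimate(base_hours: float, uncertainty: str) -> float:
    """Return the estimate to communicate, contingency included."""
    return round(base_hours * (1 + BUFFERS[uncertainty]), 1)

print(padded_estimate(10, "low"))    # a familiar change: 11.5
print(padded_estimate(10, "high"))   # never built before: 20.0
```

The exact numbers matter less than the habit: a task you've done before gets a small pad, a task nobody has built before gets a large one.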
You could boast, too
Next time you have a retrospective and the Scrum Master asks what you can improve, tell them you want to get better at estimating. Tell them you want to experiment; you want to try a few things to give more accurate estimates. You might get some pushback, but assure your teammates that they won't be required to do anything different at first, and that no-one will be personally singled out. You'll just be building up a catalogue of estimates, and reporting on how accurate they were as part of the next retrospective.
I can't guarantee that doing these things will result in your team giving perfect estimates every time. But your estimates will get better. And as your estimates get better, your estimates will be trusted more. As your communication gets better, the times that you fail will be seen as times of honesty instead of times of deceit. And one day, by the law of averages alone, you will estimate some task exactly right, and you too can then make the greatest boast a developer can make.