I've had situations where I estimated how long something would take, usually fixing a bug or adding a new feature, but it took way longer. Do you have any tips?
If you think it would take X amount of time, then:

- Double the time if you have done it before and know all the details.
- Triple the time if you have done it before but are not sure about every detail.
- Quadruple the time if you have never done it before.

It works for me. Hope it helps.
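The rule of thumb above can be sketched as a small helper. This is just an illustration of the multiplier logic; the function name and signature are my own, not anything from the thread:

```python
def padded_estimate(base_hours: float, done_before: bool, knows_details: bool) -> float:
    """Pad a gut-feel estimate using the rule of thumb above:
    2x if done before with all details known, 3x if done before
    but some details are fuzzy, 4x if never done before."""
    if done_before and knows_details:
        multiplier = 2
    elif done_before:
        multiplier = 3
    else:
        multiplier = 4
    return base_hours * multiplier
```

So a task you'd guess at 2 hours becomes 4, 6, or 8 hours depending on how familiar the work is.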
Nice, will keep in mind!
This is so accurate.
I tend to only give hard estimates when I have all the information I need to make an informed guess. Remember, time estimates are educated guesses, but we usually don't know all the variables when making them, so as DHH says, "it's just a guess". If you make it clear to whoever you're reporting to that it's just a guess because you don't have all the information, then they have no right to give you grief.
Makes sense. Thanks for the idea!
Are you estimating just the coding aspect, but forgetting the full SDLC?
Oh yeah, 15 minutes to change the code in an editor, but what about the time to branch, handle version control, test, deploy, etc.? You're likely short-changing yourself because you're only thinking about the code change and not the full scope of what you need to do to actually get that task complete.
I see this a lot with developers new to estimating. "Sure! It'll only take me 15 minutes to make that CSS change." Oops, forgot I have to update some build settings. Run it through linting. Write a pull request. Code review. Deploy time. Etc.
I have a habit of testing immediately after writing what I'd define as a module. So maybe breaking my flow of thought before linking the modules together is what leads to the unexpected delay.
I look at task estimates three ways: what I call "inside time", "outside time", and "wall clock time".
My idiosyncratic "inside time" is: how much time will it take for me to code the task. This is my butt in the chair, typing on the keyboard. Including upfront learning time (if necessary), and running the code under the scrutiny of the debugger, verifying that it works to my satisfaction. For that place that I worked where we did unit testing (TDD-style, C#, NUnit, NCrunch... oh my, that was awesome, made unit testing -- dare I say -- FUN), it also included the coding for the unit tests.
My idiosyncratic "outside time" is: all the inside time PLUS all the time that I have to sit in meetings, stand up in the Scrum stand up meeting, fill out HR surveys and watch HR videos on <insert HR hot topic of the month>, go to my one-on-one meetings, go to interdepartmental meetings (usually because our team consumes some technology from another team). This time includes all the overhead time of being an employee in an organization. This is the time that goes in the task on the Scrum board. (NOTE: I find this to work far better than trying to adjust each team member's capacity to factor out that overhead.)
My wall clock time, which isn't idiosyncratic, is start-to-finish time. I might have a 1-hour (inside time) task that comes to 2 hours after factoring in the overhead (outside time), but takes 5 days start-to-finish to actually complete, because it gets blocked periodically and has lower priority than swarms of crasher bugs and firefighting drills. This is the time that the product owner and stakeholders actually care about.
I've been at places that expected estimates to be small, so tasks had to be at a granularity of 4 hours, and management expected those estimates to be within 10% of actual. I found those expectations infeasible and unreasonable, but c'est la vie. It also creates a lot of busy-work overhead, spec'ing out all those micro-sized tasks. But that's what micro-management demanded.
I've also been at places that expected estimates to be just a rough ballpark, and no one got cranky if the "8 hour task" turned out to be 1 hour, or 16 hours. That helped not get bogged down in stressing over the tracking, and focus instead on the work.
Joel Spolsky's Painless Software Schedules is a good read, but he himself now says DO NOT READ IT; instead he advocates Evidence Based Scheduling. Goes to show that even Joel Spolsky can change his mind. ;-)
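The core idea of Evidence Based Scheduling is to divide each past estimate by its actual time to build a velocity history, then Monte Carlo the remaining work against that history and report a percentile rather than a single date. A minimal sketch of that idea (all the velocity and estimate numbers here are made up for illustration):

```python
import random

# Hypothetical history: velocity = estimate / actual for past tasks.
# A velocity of 0.5 means the task took twice as long as estimated.
velocities = [1.0, 0.8, 0.5, 1.2, 0.6, 0.9]

# Hypothetical estimates (in hours) for the remaining tasks.
remaining_estimates = [4, 8, 2, 16]

def simulate_total_hours(rng: random.Random) -> float:
    """One Monte Carlo round: divide each remaining estimate by a
    velocity drawn at random from the history."""
    return sum(est / rng.choice(velocities) for est in remaining_estimates)

rng = random.Random(42)
outcomes = sorted(simulate_total_hours(rng) for _ in range(1000))

# Report a pessimistic (90th percentile) figure instead of a single number.
print(f"90% of simulations finish within {outcomes[899]:.1f} hours")
```

The point is that the output is a confidence distribution ("90% chance we're done by X"), not one guess that's guaranteed to be wrong.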
Will have a read, thanks!
Experience plays a big role in estimates. I'll typically break things down into manageable "work packages". Then I'll leverage my knowledge and experience by asking myself "can I finish this in one day?" If not, I break it down further until I can comfortably say yes.
Larger features or projects are very difficult to accurately estimate because of scope change/creep and not always being able to account for the unknown.
Well I'm at the start of my career (I'm an intern), so I'll probably know better as it goes. Thanks for the advice!
Oof... Depending on your customer, the technical aspects of solving a given problem may account for the smallest share of the actual time to identify the problem, come up with a fix, test it, and deploy it.
As a multi-discipline IT consultant, "how long" has always been nebulous. At a prior employer, when our partner-liaison/PM would ask the dreaded "how long" question, my first reply was always another question: "is the customer in information/technology services, financial services, medical, non-profit, state government, federal-civilian, or DoD/IC?" I had a rule-of-thumb time-multiplier scale that each segment was slotted onto. Depending on her answer to my "who" question, I'd use that scale to provide the appropriate total delivery time for any given solution/task. The scale was actually pretty ridiculous in how aggressively the time-dilation curve sloped as you moved away from the IT-services end.
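That kind of segment-based scale amounts to a simple lookup. The multiplier values below are invented for illustration; the consultant's actual scale isn't given in the thread:

```python
# Hypothetical time-dilation multipliers by customer segment.
# These numbers are purely illustrative, not the real scale.
SEGMENT_MULTIPLIER = {
    "it_services": 1.0,
    "financial_services": 1.5,
    "medical": 2.0,
    "nonprofit": 2.5,
    "state_government": 3.0,
    "federal_civilian": 4.0,
    "dod_ic": 6.0,
}

def delivery_estimate(base_days: float, segment: str) -> float:
    """Scale a raw technical estimate by the customer-segment multiplier."""
    return base_days * SEGMENT_MULTIPLIER[segment]
```

With these made-up numbers, a 10-day technical task quoted for a DoD/IC customer becomes a 60-day delivery estimate.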
Wow