At Agile '07, Mike Cohn was asked how to estimate user stories with more accuracy. Specifically, the questioner wanted Mike to address the technical debt at iteration start-up, such as setting up development and build environments. Mike's answers were pretty interesting, but I cannot say that I was completely satisfied. I will not pretend that what I quote here is accurate, because my recollection is fuzzy at best (now that it is a few months later), but I am going to give it a shot.
One approach was to define "technical" user stories, which, I should clarify, he did not advocate and I am vehemently opposed to. One of the fundamental differences between traditional requirements gathering and user stories is how technical tasks and system architecture are "designed": in traditional environments, conceptual and physical architecture tasks are separated from the actual feature development. Decomposing a user story into smaller "technical" stories increases the probability that features are delivered late because of the muda (waste) and muri (overburden) that result. This kind of thinking also runs contrary to XP, because the user receives no direct benefit when a technical story is complete.
Here is an example of what I mean. Let's say you are getting started on a new project and you have to do things like install PCs and set up SCM and a CI environment, and you choose to go in the direction of writing up some "technical" user stories. One of the stories you might come up with would be, "As a developer, I want the build to run automatically when I check in code to the source repository, so that I know that the build is healthy." Sounds like a story, doesn't it? You have a user, an action, and a goal, right? Well, unless your customer is in the business of building continuous integration tools, how much value does it deliver to them? Not much! One can easily find enough of these stories to fill 2-3 iterations before the customer sees any of their own stories actualized.
To mitigate this, "technical"-story-based teams give customers the "power" to prioritize these technical stories. A customer might say, "We can implement the 'As a tester, I want to have a QA environment...' story when we are actually ready to QA the app." Developers in this scenario allow the customers to build risk into the project rather than reduce it, because deployment issues are dealt with too late in the project and developers are forced to "crunch" until they hack together some unmanageable scripts.
Another solution that Mike explored was to adjust story points to reflect the "hidden" technical debt. This means that the first story pulled out of the backlog receives an explicit point adjustment to reflect the debt associated with starting up a project. I did not like this answer because it runs contrary to the whole idea behind using story points. They are not meant to be accurate at first; there are too many things we do not know or understand, and the team does not yet have the historical information to know what its velocity is going to be. Just like on "Whose Line Is It Anyway?", the points don't matter!
In the end, what I recommend is to do something that will help everyone, which at our organization we refer to as "kicking the tires." During the first iteration of your project, encourage your customer to select what the team feels are the high-risk, high-value stories. Dedicate some pairs to start spiking these stories, since most of the time that is throwaway code anyway. Get other pairs working on infrastructure such as the SCM, test environments, and IDE configurations.
At the end of this iteration, you will find that the velocity is really low because of the technical debt those stories had to pay down. Do not let this number discourage the team! Accept the fact that we did not know what 8 points meant before we started, but now we have a better idea. That is one of the reasons XP uses velocity and story points in the first place. Customers are generally pleased if you can do for them in two weeks what once took them years. Focus on what the team has accomplished by delivering value almost immediately, and encourage your customer to select another set of stories to work on. Before you know it, the team will know its velocity and have a real sense of what 8 points actually means.
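The calibration I am describing is nothing fancier than a rolling average of completed points; here is a sketch, with a hypothetical `velocity` helper and made-up iteration numbers for illustration.

```python
def velocity(completed_points, window=3):
    """Rolling average of story points completed over the most recent iterations.

    Early "tire-kicking" iterations drag the number down; as the team
    settles, the average converges on something you can plan with.
    """
    recent = completed_points[-window:]
    return sum(recent) / len(recent)

# Iteration 1 pays the start-up debt; later iterations stabilize.
history = [6, 12, 14, 13]
print(velocity(history))
```

The first data point is noisy by design; the team only trusts the number once a few iterations are on the books.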
2 comments:
What if you were to explicitly tie a deliverable to the "technical" stories of setting up the Continuous Integration Server and the QA test environment? For example, the CI story might be phrased as such: "Project status reports will be posted to a website generated by the CI server. The report on this website will include a clear indication of whether the product distribution was created successfully, a log of source code changes in the SCM since the previous build, and the results of automated test execution". Similarly, the deliverable from the QA environment could be the product distribution that has been built and packaged to run in a particular environment -- say the Mac or Solaris binaries. I guess it comes down to these questions: should there be a story for how a user gets the end product, and should there be a story for reporting on the health of the product that the user is getting?
Tim,
Thanks for your comments.
should there be a story for how a user gets the end product, and should there be a story for reporting on the health of the product that the user is getting?
If the project's end product is a CI server, then yes. A sample user story would probably be:
"As a developer, I want to install the server on multiple operating systems, so that we can run integration builds in various environments"
However, if the project's end product is not a CI server, then I must respond with a "no" to both questions. IMO, reporting on the health of the project is already at the core of XP's principles and practices: Feedback, Communication, TDD, and CI. Teams that do not follow these practices are simply not doing XP. The purpose of reporting a distribution's health to the customer is to let them know whether the team is closer to being "done".
The example that you provided sounds more like an implementation detail of a requirement for a user story. I say that because a user story card should have only three elements:
- role, action, and goal
"As a frequent flyer, I want to rebook a past trip, so that I save time booking trips I save often"
According to Ron Jeffries:
the team releases running, tested software, delivering business value chosen by the Customer, every iteration.
Separating out a user story for reporting the results of the nightly build simply does not provide immediate business value. No matter how hard one tries, the end user cannot purchase a flight with JUnit report output.