Thursday, September 27, 2007

"Technical" User Stories

At Agile '07, Mike Cohn was asked a question about how to estimate user stories with more accuracy. Specifically, the question was trying to get Mike to address the problem of technical debt at iteration start-up, such as setting up development and build environments. Mike's answers were pretty interesting, but I cannot say that I was completely satisfied. I am not going to pretend that what I quote here is accurate, because my recollection is fuzzy at best (now that it is a few months later), but I am going to go ahead and give it a shot.

One approach was to define "technical" user stories, which, to be clear, he did not advocate and which I am vehemently opposed to. One of the fundamental differences between traditional requirements gathering and user stories is how technical tasks and system architecture are "designed": in traditional environments, conceptual and physical architecture tasks are separated from the actual feature development. Decomposing a user story into smaller "technical" stories increases the probability that features are delivered late because of the muda and muri that result. This kind of thinking also runs contrary to XP because the user does not receive a direct benefit when a technical story is complete.

Here is an example of what I mean. Let's say you are getting started on a new project and you have to do things like install PCs, set up an SCM, and stand up a CI environment, and you choose to go in the direction of writing up some "technical" user stories. One of the stories you might come up with would be, "as a developer, I want the build to run automatically when I check in code to the source repository so that I know that the build is healthy". Sounds like a story, doesn't it? You have a user, an action, and a goal, right? But what value does the customer get out of it? Unless your customer is in the business of building continuous integration tools, the answer is not much! A team could easily find enough of these stories to fill up two or three iterations before the customer sees any of their stories actualized.

To mitigate this, teams that use "technical" stories will give customers the "power" to prioritize them. A customer might say, "we can implement the... As a tester, I want to have a QA environment... story when we are actually ready to QA the app". Developers in this scenario allow the customers to build risk into the project rather than reduce it, because deployment issues are dealt with too late in the project and developers are required to "crunch" until they hack together some unmanageable scripts.

Another solution that Mike explored was to adjust story points to reflect the "hidden" technical debt. This means that the first story pulled out of the backlog receives an explicit point adjustment to reflect the debt associated with starting up a project. I did not like this answer because it runs contrary to the whole idea behind using story points. They are not meant to be accurate at first, because there are too many things we do not know or understand, and the team does not yet have the benefit of historical information to know what its velocity is going to be. Just like on "Whose Line Is It Anyway?", the points don't matter!

In the end, what I recommend is to do something that will help everyone, which at our organization we refer to as "kicking the tires". During the first iteration of your project, encourage your customer to select what the team feels are the high-risk, high-value stories. Dedicate some pairs to start spiking those stories, since most of the time that is throwaway code anyway. Get some other pairs working on infrastructure work like the SCM, test environments, and IDE configurations.

At the end of this iteration, one will find that the velocity is really low because of the technical debt those stories had to pay down. Do not let this number discourage the team! Accept the fact that we did not know what 8 points meant before we started, but now we have a better idea. That is one of the reasons XP uses velocity and story points in the first place. Customers are generally pleased if you can do for them in two weeks what once took them years. Focus on what the team has accomplished by delivering value almost immediately, and encourage your customer to select another set of stories to work on; before you know it, the team will know its velocity and really have a sense of what 8 points actually means.

Wednesday, September 26, 2007

Ant Build Tools

One of the biggest challenges we face when doing continuous integration is the amount of time it takes to complete a build. If you find yourself waiting 10 minutes for a build to complete, it's probably a sign that something is wrong with your build process. Here are a few interesting tools to help with your build. Keep in mind that I have not used them all; I just thought they sounded interesting.

Profiler
This is a good first step. I used this to help us quickly narrow down the slow targets.
http://antutility.dev.java.net/
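
If you want a feel for how these profilers hook in, Ant lets you attach a build listener from the command line, and even a naive listener that times each target will tell you where the minutes are going. Here is a minimal sketch of my own (the class name is made up, and it assumes targets finish in the order they start, so nested antcalls will confuse it); the antutility profiler does the real job:

import org.apache.tools.ant.BuildEvent;
import org.apache.tools.ant.BuildListener;

// Attach with: ant -listener TargetTimer (the compiled class must be on Ant's classpath).
public class TargetTimer implements BuildListener {
    private long startedAt;

    public void targetStarted(BuildEvent event) {
        startedAt = System.currentTimeMillis();
    }

    public void targetFinished(BuildEvent event) {
        long elapsed = System.currentTimeMillis() - startedAt;
        System.out.println(event.getTarget().getName() + " took " + elapsed + " ms");
    }

    // Nothing to do for the rest of the BuildListener contract.
    public void buildStarted(BuildEvent event) {}
    public void buildFinished(BuildEvent event) {}
    public void taskStarted(BuildEvent event) {}
    public void taskFinished(BuildEvent event) {}
    public void messageLogged(BuildEvent event) {}
}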

Grand
Not too many tools out there provide a graphical representation of build dependencies. I would bet that if you used this tool, you might notice that your build looks like either a spider web or spaghetti.
http://www.ggtools.net/grand/
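
If you would rather roll the graph yourself, the basic idea behind Grand is simple enough to sketch with Ant's own API: load the build file, walk the targets, and print Graphviz DOT edges. This is not how Grand itself works, just a rough equivalent I put together to show the idea (pass it the path to a build.xml and feed the output to dot):

import java.io.File;
import java.util.Enumeration;
import org.apache.tools.ant.Project;
import org.apache.tools.ant.ProjectHelper;
import org.apache.tools.ant.Target;

// Prints the target dependency graph of a build file in DOT form.
public class BuildGraph {
    public static void main(String[] args) {
        Project project = new Project();
        project.init();
        ProjectHelper.configureProject(project, new File(args[0]));

        System.out.println("digraph build {");
        for (Object value : project.getTargets().values()) {
            Target target = (Target) value;
            Enumeration deps = target.getDependencies();
            while (deps.hasMoreElements()) {
                System.out.println("  \"" + target.getName() + "\" -> \"" + deps.nextElement() + "\";");
            }
        }
        System.out.println("}");
    }
}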

Ant-Junit
Eric Lee turned me on to this. I thought it looked promising.
http://www.jinspired.com/products/jxinsight/antjunittracing.html

Build anti-patterns
An interesting blog post about poorly performing Ant tasks.
http://jroller.com/eu/date/20050516#profiling_and_statical_analysis_of

Saturday, September 8, 2007

Continuous Integration

One of my all-time favorite blog posts on this topic is "Continuous Integration is an Attitude, not a tool", in which James Shore describes the problems with relying on automated tools to run BVTs for CI:
This approach will lead to many more build problems and wasted time. First, you won't be running the full test suite before checking in, so the build will break more often. Second, you won't be "locking" the repository, so people will sometimes get code that doesn't work. Your test suite is likely to be slower, too, because you won't have to personally wait for the whole test suite to pass. This will allow you and your team to get away with sloppier, slower testing practices.
This is especially problematic when working on shared codebases that have a complex branching strategy and are also worked on by non-agile teams. Our team recently faced a huge challenge trying to stabilize our codebase and build as a result of relying on Cruise Control to do our continuous integration for us.

About 6 iterations into a project, our team was tired of dealing with these issues and dedicated a Friday afternoon to a kaizen meeting. We came up with a long list of action items to eliminate the muda related to our build, but the irony of it all is that the build crashed and burned as we were leaving the meeting! That was yet another late Friday night we had to stay behind because we were relying on Cruise Control!

In the spirit of kaizen, I would say that automated CI tools are poka-yoke devices that run poka-yoke artifacts (build scripts) to ensure an entire system's health. So isn't it OK when CI tools catch errors in the build? Yes, but shouldn't we really be catching them before we commit? I am of the opinion that a single build should be limited to whatever is part of the bounded context. The CI tool should in fact build a system that contains many bounded contexts to ensure that it all works together, not to make sure that your commit hasn't broken some Fit tests.
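
One way to keep "catch it before you commit" honest is a fast suite that developers run locally before every check-in, kept separate from the bigger cross-context build that the CI server owns. Here is a rough sketch; the nested test class is a made-up stand-in just so the thing compiles, and real unit tests would go in its place while the slow Fit suites stay with the CI build:

import junit.framework.Test;
import junit.framework.TestCase;
import junit.framework.TestSuite;

// A fast "run this before every commit" suite. The point is that this suite stays
// quick, while the CI server builds the larger system and runs the slow Fit tests.
public class CommitSuite {

    // Hypothetical example test; real tests would live in their own files.
    public static class PricingPolicyTest extends TestCase {
        public void testDiscountIsNeverNegative() {
            assertTrue(Math.max(0, -5) >= 0);
        }
    }

    public static Test suite() {
        TestSuite suite = new TestSuite("Checks to run before checking in");
        suite.addTestSuite(PricingPolicyTest.class);
        // Slow, cross-context tests belong in a separate suite that Cruise Control runs.
        return suite;
    }
}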

Thursday, September 6, 2007

Capacity Planning Then and Now

Before I worked in an agile environment, my experience led me to believe that a Project Manager's main responsibility was resource management:
As a Junior Programmer, I learned everything I needed to know about what the system should do from my technical lead and project manager. "X has assigned you as the resource on task 27, you have 3 days to complete it"
As an experienced developer, I would be asked by the tech lead/PM, "since you are the resource on task 27, how long do you think it will take you?"
As a tech lead, I was actually given a technical specification to read. After I read it, the PM asked me, "who do you think we can assign to task 27?"
At most of the organizations where I'd worked, developers were considered resources on a project plan that the PM would assign to tasks. This sort of perception runs contrary to the "respect" principle as well as the "whole team" and "collective code ownership" practices.

How does thinking of someone as a resource run contrary to respect? Aren't all resources on software projects valuable? Not exactly. In environments with tight deadlines and few skilled developers, Project Managers often find themselves playing a resource treasure hunt game. They will purposely schedule the "development phase" around the availability of that one "special" programmer who can do it all. In other instances, PMs will justify recruiting new talent for a specific project and keep all of their current hires on "maintenance" or worse.

I once worked at an organization where 80% of new work was outsourced to an ERP vendor and a large consulting company. Rather than train the existing staff, the company decided to hire an entire group of new developers to work on the new, up-and-coming platform. Those of us who did not fit a certain skill set (and who, I imagine, were not bound to the company by an H1-B) were left doing maintenance. One Monday morning, after I came back from a well-deserved vacation, I arrived to find a bunch of empty cubicles. 50% of our team had been let go. The director of architecture said, "we had to get rid of some dead weight". Is that respectful?

As for my second point about the XP practices, I think it goes without saying that "collective code ownership" is about maintaining a constant flow of knowledge transfer. At organizations where developers are resources, there is also a common practice of "module ownership", and that has proven to keep the truck number very low.

In addition to increasing risk, this "managing developers as resources" thinking also lowers morale. I mean, how would you feel if you were passed over for a project that you really wanted to be on? Or if you were never assigned the tasks you wished you could do? When the whole team participates, everyone is given the opportunity to choose the work they want to do!

This proves itself to me every time a new developer joins our team and is astonished that they are able to sign up for tasks as well as help design the system! I had almost forgotten what that was like until recently, when a new developer was integrated into our team as part of a cross-pollination effort. He knew that we were doing something the wrong way and hesitated to tell the team.

In passing, I overheard him complain about how the organization did not really take advantage of the more advanced features of Spring, such as transaction management. I immediately jumped at the opportunity and said, "I am glad to see that you are familiar with Spring; it is just what our team needs. By all means, refactor the code into something simpler and get rid of anything you see that is unnecessary". The look that he gave me was priceless. It was a look of bewilderment... why? Because no one had ever valued his knowledge or opinion. That's all it took: just a little respect and making him a true part of our team. Now I know how Spring XA works and I did not even have to read a book on it ;-)
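
If you are wondering what kind of feature he was talking about, here is a minimal sketch of declarative transaction management in Spring. The service and DAO names are made up purely for illustration; it just shows the shape of the thing:

import org.springframework.transaction.annotation.Transactional;

// A minimal sketch of declarative transaction management. The service and DAO
// names are invented for illustration.
public class TransferService {

    public interface AccountDao {
        void debit(long accountId, long amount);
        void credit(long accountId, long amount);
    }

    private final AccountDao accounts;

    public TransferService(AccountDao accounts) {
        this.accounts = accounts;
    }

    // With annotation-driven transactions switched on in the Spring config, a
    // transaction is opened before this method and committed when it returns;
    // a RuntimeException rolls it back. No hand-written begin/commit/rollback.
    @Transactional
    public void transfer(long fromId, long toId, long amount) {
        accounts.debit(fromId, amount);
        accounts.credit(toId, amount);
    }
}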