
Saturday, March 5, 2011

Six Sigma for programmers

There seems to be a lot of confusion among some IT professionals as to what exactly Six Sigma refers to.  Part of the confusion seems to come from the fact that it is usually associated with manufacturing, and part from the belts that go along with it.  I won't get into a detailed statistical explanation, but I can at least give an overview of what Six Sigma is and why IT professionals should care.

To start, those who came up with the Six Sigma idea believed that in order to achieve high quality, one must first achieve consistency.  The steps to achieve this consistency, known by the acronym "DMAIC", are as follows:
  • Define - One must be able to define exactly what success looks like in order to determine if it has been achieved.  Otherwise two people will look at the same results and will likely have different opinions on whether it was a success or not.
  • Measure - Unquantifiable improvements can lead to ambiguity and disagreement.  Therefore some objective way to measure change must be found.  (If you're curious, the term "Six Sigma" comes from statistics: sigma is the standard deviation, and a process whose variation fits six standard deviations between its mean and the nearest specification limit produces almost no defects.)
  • Analyze - One must determine the causes of the inconsistencies in order to fix them.
  • Improve - Being able to find the cause of problems doesn't do much good unless one can fix them.
  • Control - We are all creatures of habit, and unless there is some driving force for permanent change, we will likely fall into old (bad) habits.
In order to truly use Six Sigma as it was meant to be used, one must limit the number of defects to 3.4 per 1,000,000 opportunities.  To be honest, I have not yet figured out a way to apply Six Sigma standards to software development.  Software quality is tough to define.  You can focus on the number of unhandled exceptions (or application crashes), but this may encourage behaviors that are not beneficial to the project as a whole.  You can focus on keeping page load times down, but I could easily imagine scenarios where one might accept less-than-ideal speeds in exchange for increased functionality.  Focusing on the number of met requirements may be beneficial, but controlling such a process would be extremely difficult.  Using Six Sigma to control code quality may be more appropriate, but I think many developers already focus too much on code quality and not enough on the other aspects of software development.  (Though that's a topic for another blog.)
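
To make the arithmetic concrete, here is a minimal sketch in Python (the numbers and the idea of counting failed requests are made up for illustration) of how one might compute defects per million opportunities, the metric behind that 3.4 figure:

    def dpmo(defects, units, opportunities_per_unit):
        """Defects per million opportunities, the standard Six Sigma metric."""
        total_opportunities = units * opportunities_per_unit
        return 1000000.0 * defects / total_opportunities

    # Hypothetical example: 50 failed requests out of 10,000, where each
    # request has 20 chances to go wrong (validations, queries, renders, ...).
    rate = dpmo(defects=50, units=10000, opportunities_per_unit=20)
    print("DPMO: %.1f" % rate)              # 250.0
    print("Six Sigma? %s" % (rate <= 3.4))  # False -- a long way to go

Note that deciding what counts as a "defect" and an "opportunity" is itself a Define-step decision; pick a generous enough definition and almost any process looks like Six Sigma.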

Despite the fact that it may be too difficult to apply Six Sigma standards to software development, it is certainly not too difficult to apply Six Sigma methods.  Without defining and measuring your problem, scope creep and confusion will reign.  The benefits of the last three steps should be self-evident.

How would a programmer use this to his or her advantage?  Let's start with the performance example.  If someone wanted their web site to be "fast", we should define and measure how fast that is.  One second per page load?  Three seconds?  Keep the page load time under 5 seconds for those on 56K modem connections?  Next, for the pages that are too slow, analyze the performance issues to determine the cause.  Do the pages have too much information?  Is the database connection too slow?  Is the business logic poorly written?  Are too many business rules being run?  Once the problem has been determined, improve the situation by fixing that specific problem.  (Since improving performance could be a book in itself, I'll hold off on that for now.)  Finally, control the process to make sure that slow pages are fixed before being pushed to production, perhaps by running automated tests.  Document any exceptions you wish to make to this rule, or break slow pages up into smaller ones that together provide the same functionality.
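
As a sketch of what that control step might look like, here is a hypothetical automated test written in Python using only the standard library (the URLs and the time budgets are assumptions for illustration) that fails whenever a page exceeds its defined load-time budget:

    import time
    import urllib.request

    # Hypothetical budgets, in seconds, agreed on during the Define step.
    PAGE_BUDGETS = {
        "http://example.com/": 1.0,
        "http://example.com/reports": 5.0,
    }

    def measure_load_time(url):
        # Time a full page fetch -- a crude but objective proxy for load time.
        start = time.perf_counter()
        with urllib.request.urlopen(url) as response:
            response.read()
        return time.perf_counter() - start

    def test_pages_meet_budget():
        for url, budget in PAGE_BUDGETS.items():
            elapsed = measure_load_time(url)
            assert elapsed <= budget, (
                "%s took %.2fs, over its %.1fs budget" % (url, elapsed, budget))

Run something like this as part of the build, and a slow page becomes a visible, measurable failure instead of a matter of opinion.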
