Sunday, May 31, 2015

When organizational debt slows software projects down

Note: this post was originally published on InfoWorld.com.

Most companies, when embarking on a major software project, have a fairly good high-level idea of what they need to accomplish. In many cases, though, the business does not know the details of the project well enough to make it a success by any meaningful measure. If you make software for a living, eliciting these details is a normal and expected part of your development process. But there is one problem associated with software projects that too often gets neglected, even though fixing it is a vital part of delivering your software product successfully. I call it organizational debt. To explain what it is and why we should care, I will draw parallels to the better-understood problem of technical debt.

What is technical debt?

Technical debt is the term for issues embedded within the code of the software that make it harder to maintain. These issues may not immediately be apparent to a user, but instead are visible to the development team. Examples of technical debt are:
  • Different developers working on the project each had their own coding style, making it difficult for someone new to make sense of the whole product.
  • The development team put in a complex bit of code that seemed like a good idea at the time, but in retrospect turned out to be a bad idea because it didn't address the root issue. Now no one understands what it is for and therefore everyone just hopes it doesn't break.
  • Because of a particular time crunch, the team makes a conscious decision to ignore best practices in order to get a product out more quickly (with the intention of fixing issues later).
Regardless of the cause behind the creation of technical debt, the result is the same – software that is difficult to update without risking breakage in other parts of the system.

How does that compare to organizational debt?

The parallel to technical debt on a company-wide level would be "organizational debt." If technical debt is issues embedded within software that hamper its maintenance, organizational debt would be issues in the day-to-day running of the company that prevent it from operating smoothly. Examples of such debt might be:
  • Different departments have their own tools and methodologies to address the same problems, making it difficult for executives to see similarities in order to address company-wide issues.
  • Managers create processes or implement software solutions that seemed like a good idea at the time, but didn't address the root cause of the issue and end up creating more problems in the long run.
  • Because of a particular time crunch, the team decides to complete a task in a less-than-ideal manner "this time". But that manner is repeated in subsequent tries because no one remembers that the first time was intended to be a one-off situation.
It is important to note that organizational debt doesn't prevent the company from getting work done. Instead, organizational debt prevents a company from getting work done efficiently and effectively.

How does that impede the progress of a software project?

Companies that have a low amount of organizational debt do so because they have a culture that eliminates it on a regular basis. Conversely, companies that have a high amount of organizational debt do so because their culture has trouble distinguishing between value-adding activities and mere busywork. As a result, when they go to build or implement new software, they not only try to implement all of the processes that existed previously, they try to implement all of those processes at once. After all, if they were there before, aren't they vital to the future success of the company?

What can software teams do about organizational debt?

Unfortunately for many development teams, the answer to this question is typically "if that's what the business wants, that's what the business gets." Such thinking leads companies to turn a tangled mess in an old system into a tangled mess in the new one. But smart technology leaders will use the new software implementation as an excuse to create something better. How is that done?
  1. Start by reading up on various change management techniques. John Kotter's approach is a good one, but there are others out there. Remember, you're never going to get the technology right if the business isn't ready for it.
  2. Always look beyond requests to understand the underlying business value. Business leaders used to creating quick fixes will continue to think in terms of quick fixes for some time. Getting at the underlying business value will help you and the business leader determine what the right solution is.
  3. Look for opportunities for consistency. Almost all organizational debt I've witnessed arose because of small-scale solutions to small-scale problems, so encouraging the business leaders to look at the big picture should help.
  4. Keep in mind what the big picture is, but focus on small changes. Without the ideal final solution in mind, it will be difficult to keep the project on track. But by delivering changes in small increments, you'll decrease the anxiety levels of the people in the organization who are most resistant to change.
Creating software for companies with high amounts of organizational debt is frustrating because nearly everything is more difficult than it would be at a company with relatively little debt. But patience and consistency (usually) win in the end.

Sunday, May 24, 2015

Outsourcing a software project? Pay time and materials.

Note: this post was originally published on InfoWorld.com.

Companies that hire an outside firm to help them create or deploy new software will often look to pay for that project with a fixed price agreed upon at the outset, in order to reduce the perceived risk associated with the project. This is usually the wrong thing to do, though, if you care about creating the best product. To find out why, we first need to examine why software goes wrong in the first place.

Software is almost never right the first time

As I have written before, knowing what to put in your software matters more to completing it successfully than hiring great developers. Yet the industry spends more time finding ways to create better code, not better solutions. Furthermore, business stakeholders aren't in a position to help. They typically think that they know what they want, and very often think they know how to turn what they think they want into working software. But in reality, turning what is usually a failing process into successful software requires knowledge of both the business and of creating software.

On top of that, business needs usually change. Businesses never stand still. New competitors emerge, businesses grow, new opportunities appear, and so on. Even if you know what you need when you launch your software project, that might be different from what you need when the software is delivered. And sometimes the change is caused by the software itself: as individuals see what is possible to achieve with the new software, they get more ideas about what they want the software to do.

Getting software right means small, frequent deliveries

To combat all these problems, successful software teams deliver their product in small, frequent deployments. As more of the software gets completed and delivered, the team and the business stakeholders both gain a better understanding of what the need is and the possibilities for solutions. Mistakes are caught and fixed early (when they are relatively inexpensive to fix).

What does all that have to do with how one pays for a project?

There are two reasons why this affects your method for paying for a project:

The biggest reason cited for choosing to pay a fixed fee for a new software project is to reduce your risk. However, your most important risk is not that you'll miss an arbitrary deadline or budget, but that your software won't meet your needs. If your goal is to minimize your risk, minimize the risks that matter most to your business. For almost every business, a truly successful delivery requires a process that can adapt to change. But fixed-fee projects, almost by definition, entail a fixed scope and so cannot easily accommodate such flexibility.

The second reason for choosing to pay for new software via time and materials rather than a fixed bid is that nothing prioritizes what's most important like being forced to put a dollar amount to each request. In the initial stages of the project, when the delivery team and the business stakeholders are still getting to know each other and the project, it's impossible for any individual to be able to understand the true priorities. Business stakeholders almost always want everything they can think of. On fixed bid projects, trying to deliver everything can derail the whole project. Instead, being forced to pay for each feature individually keeps the business team focused and the project on track.

If you're still not convinced

On nearly all of the fixed-bid projects I've been on, the communications between the delivery team and the business stakeholders were dominated by arguments about whether particular requests were in scope or not. On time and materials projects, we focused on determining whether it was worth putting in new requests. In other words, the delivery team needed to keep control of scope to protect our financial interests on fixed-bid projects. On time-and-materials projects we merely focused on doing what was right. Which approach would you take? I know which I'd choose.

Sunday, May 17, 2015

Does software craftsmanship make project success harder to attain?

Note: this post was originally published on InfoWorld.com.

There's a relatively new movement among software developers called software craftsmanship that focuses on improving the practices of the profession. On the surface, it seems like a good movement. After all, given the visible failures of software (the Affordable Care Act website failure and the Toyota accelerator issues come to mind), it is easy to blame the developers. But it would be hard for me to depend on a software craftsmanship advocate to deliver a software project successfully. To see why, I'd like to tell you about my first career.

Life as a flute repairman

When I was getting my undergraduate degree in music, I kept hearing how there weren't enough band instrument repairmen around, and those that were there weren't very good. Since there weren't many jobs available to musicians, I chose to be an instrument repairman, focusing on being the best that I could be. I quickly discovered that most musical instruments were shoddily made, and that if I wanted to bring out the best in them I needed to do a lot of work to get them to play well. As I gained experience, I found ways to be more efficient (and therefore more profitable to the music store), but I used those efficiencies to look for ways to make better repairs, not ways to do more of them.

One day, I realized two things:
  1. My focus on making great repairs succeeded. I think that my repair abilities would have compared favorably to nationally-respected individuals. But I wasn't profitable. I'd put $600 of repairs into an instrument that cost the store about half that. That isn't a sustainable business model.
  2. Many of the repairs I did made the instruments incredibly responsive and easy to play, but those same repairs made the instruments less tolerant to misalignments. That's fantastic for a professional player who can tell the difference, but a questionable thing to do for beginning students who can't tell the difference between a problematic instrument and poor playing.
Try as I might, though, I could not bring myself to lower my standards so I could achieve greater profitability or make instruments more tolerant of misalignments. So I left the industry and became a Web developer.

What does that have to do with software development?

If I were to be completely honest, if I were to pick up a flute now, nearly 10 years after I left the music industry, I'd still be unwilling to find the cost/quality/maintainability balance appropriate for each situation. I'd still want to do the best job possible, making the same repairs on a $10,000 flute and a $600 one. But as a software developer, I am not tied to a self-defined set of criteria for high quality. Between quality, cost-of-creation, long-term maintainability, and time-to-market, there is no single correct balance for software projects. The balance I'd choose for a short-term marketing project is very different from the balance I'd choose for a mission-critical project for a nuclear power plant.

Where software craftsmanship comes in

When I look at the software craftsmanship movement from a high level, I see many good ideas. Like I was 15 years ago, they are focused on turning around the perceptions of a failing industry. But I can say from personal experience that such a singular focus can make it impossible to see the bigger picture. Just like I was unable to change my approach for the situation because it felt like I was lowering my standards, most software craftsmanship practitioners focus on what's "right," regardless of the actual circumstance. As a representative example, automated unit testing is definitely a good thing, but when taken to the level of full Test-Driven Development espoused by software craftsmen, you end up with a tangled, unmanageable mess. And I think it's no coincidence that a large number of people leading the "No Estimates" movement are also sympathetic to the software craftsmanship movement.

Software craftsmen and delivering projects successfully

It should come as no surprise that software developers who are most passionate about doing the best job they possibly can are often the best developers on a team. But if your best developer is a software craftsman, you must resist the temptation to put that person in charge of the development team as a whole for two reasons:
  1. To deliver the project successfully, your team leader must understand when high quality is vital to success and when additional quality adds unnecessary costs and/or delays.
  2. To get the most out of your software product, you need a software leader who understands the business need well enough to be able to anticipate problems and suggest solutions. Someone hyper-focused on their own part of the process typically isn't able to do this well.
Yes, this presents a management and leadership challenge by putting a seemingly inferior developer in charge of the development effort. But if you think about it, that would be the best approach for everyone. It gives the software craftsman the opportunity to focus on what he/she is clearly most passionate about but allows the team to direct that passion for the betterment of the project.

Sunday, May 10, 2015

How just about everyone gets unit testing wrong

Note: this post was originally published on InfoWorld.com.

One of the biggest opportunities teams have to use technology more effectively is to use unit testing correctly. Most teams either don't use unit testing at all or use it far too much -- it's tough to find that "sweet spot" where the tests increase quality without hindering productivity. But if you're able to achieve that balance, you should be able to enjoy higher-quality software with a lower cost of creation.

What is unit testing?

Before I go too much further, I feel like I should explain what "unit testing" actually is, because the term is misused quite frequently. Unit testing is the act of testing a small component, or unit, of your software application. Because the scope of each individual unit test is so limited, the only way to achieve it is to write code that tests your code, usually using a framework like NUnit or the Microsoft Testing Framework. A detailed description of how it works is out of the scope of today's post, but in a nutshell, unit testing is when a developer writes a test method that calls "real" code and lets him or her know when the actual results don't match the expected results.

Confusingly, many developers who are unfamiliar with these testing frameworks refer to the manual testing they do as "unit testing." That isn't "unit testing" -- that's just "testing".
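To make that concrete, here is a minimal sketch of what such a test method looks like. The post mentions NUnit and the Microsoft Testing Framework; the same pattern in Python's built-in unittest module looks like this (the `apply_discount` function is a hypothetical example, not something from the post):

```python
import unittest

def apply_discount(price, percent):
    """'Real' product code under test: reduce a price by a percentage."""
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    """Each test method calls the real code and compares actual to expected."""

    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_percent_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)
```

Running the file with `python -m unittest` executes every test method and reports any case where the actual result doesn't match the expected one.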

Why in the world would I write code to test code?

To someone who isn't a software developer, the idea of writing code to test code may seem rather silly. But for those of us who actually do it, the benefits are easy to see:
  1. During a typical test of a system, you have to log in and perform a specific set of actions in order to test particular functionality. This is incredibly inefficient and time consuming. Unit testing allows the developer to perform specific, targeted testing on the area in question.
  2. When something does go wrong, the development team doesn't need to look in the entire system for the source of the bug. They can run all of the previously-created unit tests and narrow down their search.
  3. Finally, as I mentioned last week, rewriting/refactoring code periodically is vitally important for the long-term health of your system. Rerunning all of the unit tests is a great way to help ensure that you didn't break anything in the rewrite.

When unit testing can be taken too far

In my experience, software developers tend to think of things in terms of right or wrong. If it's right to write unit tests, then you must write unit tests for everything you do, right? Here are two unit testing beliefs that can cause your project more harm than good.

Test Driven Development (TDD)

The idea behind Test Driven Development is that you write your unit test before you write your product code. You then write product code to make the test pass. If you need to add or change the functionality, you change the tests first and continue making fixes until all of your tests pass. This is a nice idea, but a good chunk of the typical developer's code just doesn't need to be unit tested. Complex business logic absolutely needs to have corresponding unit tests. But writing unit tests for simple logic will require the developer to spend more time writing tests than delivering value to the business.
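As a sketch of that cycle (the leap-year example is hypothetical, chosen only to illustrate the order of work), a developer practicing TDD writes the failing test first, then just enough product code to make it pass:

```python
import unittest

# Step 1 ("red"): the test is written first. At the moment it was
# written, is_leap_year did not exist yet, so the suite failed.
class LeapYearTests(unittest.TestCase):
    def test_ordinary_leap_year(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_is_not_a_leap_year(self):
        self.assertFalse(is_leap_year(1900))

    def test_fourth_century_is_a_leap_year(self):
        self.assertTrue(is_leap_year(2000))

# Step 2 ("green"): just enough product code is written to make the
# tests pass; it can then be refactored with the tests as a safety net.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
```

The point of the ordering is that every line of product code exists to satisfy a test; the question is whether every line of product code deserves that treatment.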

100% Code Coverage

One common metric that software teams track is code coverage, i.e., the percentage of the product's code that is exercised by a unit test. Many software development managers believe that 100% code coverage is necessary to ensure that the code is tested adequately. However, code that is very highly tested is very tough to change. If unit tests are used excessively, software teams will find themselves weighing the cost of changing the existing unit tests every time they change the code, and these costs can spiral out of control.

So what is the right balance?

Unfortunately there are no hard-and-fast rules to know what unit tests should be written, but here are some guidelines that I follow.

Consider writing unit tests when:
  • The logic behind the method is complex enough that you feel you need to test it extensively to verify that it works.
  • A particular piece of code breaks and takes longer than a minute or so to fix.
  • It takes less time to write a unit test to verify that the code works than to start up the system, log in, recreate your scenario, etc.
Consider avoiding unit tests when:
  • Elaborate frameworks (such as mock objects and dependency injection) need to be created or installed just to get the tests to work.
  • The tests are applied to code that, if broken, has very little bearing on the overall software quality.
  • The costs of maintaining the set of tests are higher than the costs of maintaining the actual product code.
To summarize, unit tests are intended to help development teams reduce costs by reducing testing time, reducing the need for regression tests, and making much-needed maintenance easier. Writing unit tests is absolutely the right thing to do if you want your software project to be a success. However, development teams that find themselves maintaining large libraries of tests are actually causing many of the problems that unit testing was meant to solve.

Sunday, May 3, 2015

Making a business case for refactoring code

Note: this post was originally published on InfoWorld.com.

One common experience many companies have in the course of supporting software products is that the time and effort required to make customizations vary as the product ages. At first, customizations are easy because new features either mesh nicely with the existing code structure or they are completely new and can be easily added to the existing code. But as the product ages, changes affect other parts of the system, slowing down development and causing problems in seemingly unrelated parts of the product. The good news is that this isn’t an inevitable result of creating software. You can get around many of these problems by regularly refactoring code.

What is refactoring code?

Refactoring code is the act of restructuring existing code to make it more understandable without changing its external behavior. Though it is possible to add functionality or fix bugs while refactoring, the act of refactoring code is a separate process from these activities.
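A small before-and-after sketch (hypothetical code) shows the idea: both functions below compute the same result, but only the refactored version explains itself to the next reader:

```python
# Before refactoring: correct, but the reader must decode the intent.
def calc(d):
    t = 0
    for i in d:
        if i[1] > 0:
            t += i[0] * i[1]
    return t

# After refactoring: same behavior, but the names and structure explain
# themselves, so the next developer can extend or debug it safely.
def total_order_value(line_items):
    """Sum price * quantity over line items with a positive quantity."""
    return sum(price * quantity
               for price, quantity in line_items
               if quantity > 0)
```

Because the behavior is unchanged, existing callers keep working; only the readability improves.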

Why in tarnation would you want to do that?

Think of writing code like writing a book. At first, a writer just needs to get his or her thoughts down. The book would probably be understandable after this first pass, but the reader would have to take more time to sort through the writer’s ideas. To help combat this, the writer will re-read, and in many cases, rewrite portions of the book. The idea is not to change the content of the book, but to make it more understandable to others. An editor will then read and change the book to further clarify the ideas it contains.

Many software products are shipped with code in the pre-edited state, in large part because the product’s users aren’t affected (at least not immediately) by the quality of the underlying code. This is usually a mistake, though, for the following reasons:
  1. In most cases, other developers need to read the code to make fixes and improvements. This is significantly easier (therefore cheaper and faster) to accomplish if the code has been refactored/edited.
  2. Debugging code is harder to do than writing code in the first place. Anything a developer can do to make the code easier to read in the future will save effort in the long run.
In short, code that has been refactored is easier to understand, therefore easier to extend and debug when enhancements are needed in the future.

But the business stakeholder wants their change put in now

There are times when it is appropriate to hold off on refactoring in order to put more features in. These times should be rare and temporary. In most cases, if you explain to each stakeholder what you are trying to do and why, focusing on why the delay benefits them (which again is that you’re building a foundation to help make the system more easily maintainable in the future, lowering time and cost in the long run), you’ll get agreement.

There have been times in my career when this hasn’t been enough to persuade the stakeholder; it was always in situations where the development team had consistently over-promised and under-delivered on software quality. In these cases, the development team needs to rebuild trust, and one way to do this is to communicate how refactoring will improve the reliability of the process and the quality of the product.

Many developers want to refactor everything. Should you let them?

Some developers, especially the better ones, tend to want to refactor everything. This isn’t always the appropriate course of action. If you remember the purpose of refactoring, which is to make code easier to maintain in the long run, it shouldn’t be hard to see why. Some code simply will never be touched again in its lifetime. Why spend the time polishing code that isn’t broken and will never be changed? With that said, though, I usually recommend refactoring code sooner rather than later, while the purpose of the code is still fresh in the developers’ minds.

How can you help prevent refactoring from breaking the software?

After the cost, the second most common concern I’ve encountered about refactoring code is the risk of introducing breaking changes to the software. End users generally are not understanding when something that worked is suddenly broken, so why risk making a change? Fortunately, automated unit testing is a fantastic way of mitigating this risk, but a further exploration of that concept will have to wait for my next post.