
Sunday, July 27, 2014

Is a project manager the right person to put in charge of your software project?

Last month, I wrote about the differences between leadership and management: specifically, how companies or projects that have management without leadership are unresponsive to change, and how those that have leadership without management are chaotic and unpredictable. Today, instead of the management-centric model of the late 20th century, more and more software projects are delivered via Agile, whose goals have more to do with responding to change than with hitting an arbitrary budget. In this new world of embracing change, is a project manager the right person to be leading projects?

Before I can answer that question, I need to introduce you to two people I've worked with over my career. Both of them had the title "project manager", but as you'll see, each had very different jobs. I have, of course, changed their names in this blog to protect their true identities.

Before every project, "Beth" made sure that she understood what needed to be done, who needed to do it, and why it needed to be done. Years of asking these questions gave Beth a pretty thorough understanding of how all the parts of our process worked, even if she didn't understand the specifics of how we did our jobs. When we were planning our projects, she would often tell us what we could reasonably expect from other groups, and what could be possible if things went wrong. When things did go wrong, she would check to see if we could stay on schedule, and would either warn groups responsible for later phases of the project that they might be pressed for time or would inform stakeholders early that a schedule change might be necessary. This allowed everyone to make the adjustments necessary to make the project run smoothly.

"Mike" was also a very organized person. At the beginning of every project, he would ask for all the steps that needed to be accomplished to get a project completed, and he would put each of those steps into a spreadsheet. Mike was great at scheduling meetings and getting our questions answered. He didn't really understand what we did, though. We had frequent meetings that had no purpose other than for us to keep him in the loop as to the changes in the project, which for the rest of us was a waste of time. He also could not tell the difference between issues that were merely annoyances and the ones that were truly problems, so he treated them all with the same urgency. Mike was very good, however, at keeping everyone in the loop about the status of the project at any given time.

Yes, Beth was a better project manager than Mike. But Mike met the expectations of the company we were both working for at the time. Why? He was able to schedule meetings, facilitate communication, run reports, and tell the right stakeholders where the project stood. But a true manager should help make processes run more smoothly and remove roadblocks for the team, and Mike did neither, so he couldn't really be called a manager. Thanks to one of my MBA instructors, I have a better term for Mike's job: project administrator.

So, circling back to the question of who really should be leading projects, I'd point out that neither Beth nor Mike led every aspect of their projects. IT personnel often forget that while software delivery is important, so is managing the other changes (whether culture changes for internal software or marketing efforts for external software) necessary for project success. Both Mike and Beth focused only on implementation and delivery. So regardless of whether you have a project manager or a project administrator, you need a leader preparing the business for the change. (I have more to say on that subject, but it will have to wait for another blog post.)

Is it appropriate to have a project manager lead the implementation portion? If your project manager is really a project administrator with a different title, then no, absolutely not. Most project administrators I've met don't really manage or lead - they merely create reports. Projects that are neither managed nor led well will almost certainly fail. When all you have is a project administrator, you must hope that you have a strong software architect who will assume responsibility for leading the project. If you do, the project still won't be run effectively or efficiently, but it will get done.

What about a true project manager? Keep in mind that a project manager's job is to manage a project, not to lead it. Unless the project manager has either deep subject matter expertise in the content of the software or significant experience with delivery, he or she will be unable to truly lead the project. It is common in these scenarios to ask the primary business stakeholder to lead the project through its inevitable issues. It is possible for a team to manage itself well through process and communication, but describing how that would work will have to wait until next month.

Other posts in this series:
July: Is a project manager the right person to put in charge of your software project?
October: Getting programmers to see the value in management
November: The Peter Principle and software architects
December: Creating a leadership team to make your software project a success

Sunday, July 20, 2014

Why Prevalent Age Discrimination in IT is (partly) a Myth

I keep reading articles online about age discrimination in IT. This surprises me, because I've worked at several companies and have been involved in dozens of interviews, but I've never seen anything that was clearly and obviously age discrimination. I'm not saying that it doesn't exist; even if you ignore Mark Zuckerberg's comments about hiring older workers, there is clearly a large number of individuals with a lot of experience who are having trouble getting hired. But from what I see, this isn't primarily due to age discrimination. Instead, employers have defined their hiring processes in a way that happens to favor younger workers. That's not deliberate age discrimination; it's just stupidity.

What do I mean by this? Essentially, employers primarily look for candidates with specific skills, and experienced workers want to be hired based on their superior knowledge. To see the difference between these two concepts, here is a comparison:

Skill | Knowledge
Is typically gained through training or reading a book | Is typically gained through experience
Is easily demonstrated through tests and certifications | Is tough to validate through the typical job interview process
Will almost certainly become obsolete with time | Will almost certainly not become obsolete with time
Can easily be tested during an interview screening process | Is tougher to test during an interview screening process
Helps an employee solve a specific, well-defined problem | Helps an employee anticipate and prevent unspecified or unforeseen problems
Only somewhat transferable to other technologies/languages/situations | Very transferable to other technologies/languages/situations

In other words, most employers look for specific technologies on a candidate's resume because they don't want to train a new employee in a particular technology or technique. Most experienced (but unemployed) candidates know that they can pick up any particular technology fairly easily, but don't want to learn a "flavor of the month" skill without seeing the value. (Employers are sometimes willing to train skills, but only for relatively young people with low salaries to match their limited experience.)

The result is that employers have IT job openings that go unfilled because they pass up candidates who could do the job but don't have the right qualifications on paper. It also means that there are IT personnel qualified to do these jobs who choose not to put in the work necessary to get hired.

So this helps illustrate Hanlon's Razor: Never attribute to malice that which is adequately explained by stupidity.

Sunday, July 13, 2014

IT as a Service Provider May Be Bad for Business

This is the third in a series of eight posts that examine the relationship between business and IT. Based on an article about manufacturing, I talked about four different levels (or stages) at which IT can interact with the rest of the business:
  • Stage 1: Internally Neutral (Minimize IT’s Negative Potential)
  • Stage 2: Externally Neutral (Parity with Competitors or IT as a Service Provider)
  • Stage 3: Internally Supportive (Provides Credible Support to the Business Strategy)
  • Stage 4: Externally Supportive (Provides an Important Contribution to the Competitive Success of the Organization)
Last month, I talked about how Stages 1 and 3 were not viable long-term stages for businesses in the 21st century, though for very different reasons. By definition, Stage 1 companies either have dysfunctional IT departments or dysfunctional relationships with their IT department (or both), while Stage 3 companies will naturally move to either Stage 2 or Stage 4. This month I'll discuss Stage 2.

In both Wheelwright and Hayes' Stage 2 and the concept of IT as a service provider, the primary goal of both business and technology would be to put the business (i.e. the technology consumers) in the driver's seat in determining how and where technology would be used. When business leadership identified a new need, technology providers would be tasked with creating estimates for costs and time frames. The providers would be responsible for ensuring that the business had all the options at its fingertips, including the benefits and drawbacks of each. The final decision would rest with the technology consumer, as would the final responsibility for the success or failure of each decision.

The business would approve or reject projects based on the ROI projected from the original estimates (or, in the case of many small companies, the gut feeling of the decision-makers). Since the business success of technology products would lie in the hands of business leadership, the technology providers would measure their success merely by whether the project was completed within the promised time and budget, rather than by metrics useful to the business.

Why would a company choose this model?

Stage 2, or treating IT merely as a service provider, is comfortable for both business and IT. Business, believing that most technologists don't truly understand the business and how it works, gets to control technology spending and set the technology department's priorities. IT, on the other hand, gets to focus on the technology aspects of the project and can relinquish responsibility for the business-related success of the project.

Why is this bad for business?

Much to the chagrin of most business and IT professionals I've met, who feel much more comfortable staying in their specialty areas, the best IT-related business decisions are nearly always made when all of the important strengths, weaknesses, threats, and opportunities, whether technology- or business-related, are factored into the decision. When IT acts merely as a service provider, it gives no input into the direction a company should go, either on a micro level, such as how to fix a particular business problem, or on a macro level, such as how to stay relevant in a market with several new competitors. As a result, the company misses out on technology-related opportunities and fails to leverage existing technology-related strengths. Few companies wishing to be best-in-class can afford to miss out on these opportunities.

Is this a failed business model?

Plenty of companies that are in business today could survive with this business model, assuming one of two things is true:
  1. The company is satisfied with being a niche player in its market
  2. The company as a whole is being carried by one particular area, such as a world-class product, sales department, etc.
So unlike Stage 1, it would be a bit strong to say that intentionally keeping IT at Stage 2, or limiting IT to being a service provider, is setting the company up for failure. Clearly companies can and do survive while limiting IT's role. However, these companies are finding it increasingly difficult to do more than just survive in today's technology-infused world, and this will only get harder as time goes on.

Other posts in this series:
July: IT as a Service Provider May Be Bad for Business
November: How your Stage would affect your hiring practices
December: How can an organization reach Stage 4?

Sunday, July 6, 2014

Why making a software system 100% secure is impossible

It seems that one cannot look at the news these days without seeing another story about a company in trouble because of a data breach. In some cases, C-level executives at Fortune 500 firms lose their jobs as a result. But there's something that gets lost in these stories: IT security is like car safety in that one should always try to improve, but the inherent risks mean that complete safety is impossible. In other words, it is impossible to create a system that is 100% secure from malicious attacks.

Why 100% safety is impossible

If you truly wanted to make your data as secure as possible, you'd back it up on a hard disk, disconnect it from any computer, put it in a safe, then bury the safe in an unknown location. Only then could you rest assured that the information was safe from hackers or rogue employees. Unfortunately, in order to derive benefit from the information, people need to be able to access it. Therefore, compromises need to be made. Someone must be able to use that information, or else it might as well not exist at all. But anytime you give any access to a component of the IT system, you create the possibility of one of these scenarios coming to fruition:
  1. A hacker steals the credentials of a trusted user and accesses the system
  2. A hacker "listens" to the conversation between your employee's computer and the data
  3. A hacker exploits a previously-unknown vulnerability in software you're using to access private data
  4. A rogue employee sells his/her credentials and/or private data to malicious users
So there is a constant balance IT has to strike when securing a system: what can we do to maximize the usefulness of the information while minimizing the downside of that information getting into the wrong hands?
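To make the trade-off concrete, here is a minimal sketch (in Python, against a hypothetical host and endpoint of my own invention) of a control that addresses only the second scenario: verifying the server's certificate when data travels over the network stops an attacker from "listening" in transit, but it does nothing about stolen credentials, unpatched software, or a rogue employee with legitimate access.

```python
# Minimal sketch: the host and endpoint below are hypothetical placeholders.
# Verifying the server certificate addresses scenario 2 (an attacker
# "listening" to the conversation), but not scenarios 1, 3, or 4.
import http.client
import ssl

context = ssl.create_default_context()  # verifies certificates and hostnames by default

conn = http.client.HTTPSConnection("data.example.com", context=context)
conn.request("GET", "/customers/42")
response = conn.getresponse()
print(response.status, response.reason)
conn.close()
```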

This is not unlike the balance an automobile manufacturer has to strike in order to make a car safe, affordable, and desirable at the same time. If the manufacturer focused only on safety, its cars would carry so much steel that they would get horrible gas mileage, would offer little visibility (to minimize the glass in the vehicle), and would never be allowed on roads where semi trucks might be barreling head-on at 90 MPH. A car that can't go out on a road isn't very useful, is it? As a result, car manufacturers must find ways to make cars safer knowing that they will never be 100% safe.

Why IT can't catch everything

The average business user at this point is probably asking, "Why can't you lock down each component of the system, only letting in people that you trust?" First, up to 60% of security breaches come from rogue employees. Companies have to be able to trust someone to set up and maintain these systems, and when that person betrays that trust, bad things happen.

Second, computers are incredibly complex, and even the best IT people cannot foresee every consequence of every action. For example, in the Target breach I linked to earlier, hackers used a vulnerability in a third-party system to get onto the Target network, then exploited a well-known method for stealing information from databases to get at customer information. While both issues should have been addressed, each is understandable on its own: the third-party vendor thought its credentials were secure, and the database administrator thought that the database was protected behind the network. It was the combination of errors that allowed hackers to steal customer information.
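The post doesn't name the database technique, but assuming it was something like SQL injection, the habit-level defense is a parameterized query. The sketch below (with made-up table names and values) shows the difference: pasting user input into the SQL text hands the attacker every row, while binding it as a parameter treats the same input as a harmless literal.

```python
# A sketch assuming the "well-known method" was SQL injection; the table,
# columns, and values below are made up for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, card_number TEXT)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?)",
    [(1, "4111-1111-1111-1111"), (2, "5500-0000-0000-0004")],
)

user_input = "1 OR 1=1"  # attacker-controlled value

# Vulnerable habit: pasting the input into the SQL text lets the attacker's
# "OR 1=1" clause match every row in the table.
rows = conn.execute(
    "SELECT card_number FROM customers WHERE id = " + user_input
).fetchall()
print("concatenated query returned", len(rows), "rows")   # 2 rows

# Safer habit: a parameterized query binds the input as a value, not as SQL,
# so the lookup matches nothing instead of dumping the table.
rows = conn.execute(
    "SELECT card_number FROM customers WHERE id = ?", (user_input,)
).fetchall()
print("parameterized query returned", len(rows), "rows")  # 0 rows
```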

Third, even systems built to help secure your network can have vulnerabilities. As a recent example, OpenSSL is relied upon by millions of people (directly or indirectly) to help keep information safe and secure. But researchers found a vulnerability in the library, and have since found several other issues. Any IT professional depending on OpenSSL to help secure their systems found themselves scrambling to patch problems when the Heartbleed bug became public knowledge. But if the security systems themselves are causing security concerns, how is an IT professional expected to make a system 100% secure?
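One of the first questions administrators had to answer after the Heartbleed disclosure was simply "which OpenSSL build are we actually running?" A rough sketch of that check from Python follows; note it only covers the copy of OpenSSL the Python runtime is linked against, and web servers or other services may bundle their own copies.

```python
# Rough sketch: report the OpenSSL build this Python runtime is linked against
# and flag the Heartbleed-affected range (OpenSSL 1.0.1 through 1.0.1f).
# Other services on the machine may link different copies of the library.
import ssl

print("Linked against:", ssl.OPENSSL_VERSION)

# For pre-3.0 OpenSSL, OPENSSL_VERSION_INFO looks like (1, 0, 1, 6, 15) for
# 1.0.1f: (major, minor, fix, patch letter as a number, status).
major, minor, fix, patch = ssl.OPENSSL_VERSION_INFO[:4]
possibly_vulnerable = (major, minor, fix) == (1, 0, 1) and patch <= 6
print("In the Heartbleed-affected version range:", possibly_vulnerable)
```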

With all that can go wrong, it's sometimes amazing that more breaches don't happen.

So should we just accept that breaches will happen?

No. Companies should not merely accept that breaches will happen and shirk their responsibility to protect their customers' information appropriately. Companies can and should take reasonable precautions to ensure that information is protected. Using Target as an example again, its programmers should have known, just out of habit, to protect its internal databases from the method the hackers used, regardless of any mistakes the third party made. Target should have known that its information would be desired by all sorts of malicious users and should have taken appropriate steps.

Why security is also a business problem, not just an IT problem

When IT professionals talk with business users who have limited knowledge of IT about security, the responses they get generally range from "use your best judgment" to "that's your problem, not mine". But determining the appropriate balance between the costs and benefits of mitigating security risks is a business problem, not an IT one. One can always spend more time and money hunting down and eliminating security vulnerabilities, but how much time and effort is worth the cost when 60% of breaches come from employees? There is no right answer here, especially since the appropriate balance will vary greatly from industry to industry. A hospital protecting patients' medical information should be much more diligent than a hobby blog that stores first names and email addresses. Whatever the balance, an agreement should be reached between business and IT leaders. Everyone on the executive committee should have a full understanding of what the risks to the business are, and there should be a general consensus as to which risks are acceptable, regardless of whether the source is technology-based or not.

So then who is to blame when a breach occurs?

That discussion is too large to be included in this blog post, so it will have to wait for another time.
Edit: I wrote a post about this in August. http://scottnorberg.blogspot.com/2014/08/who-is-to-blame-when-security-breach.html

Summary

The idea that an IT system can be 100% safe from hackers and other malicious users is a fantasy. Systems are too large and complex, and threats too varied, to expect 100% security for any system. This doesn't mean that companies shouldn't try to stop attackers, however. The appropriate amount of effort to put into stopping these attacks will vary greatly depending on the industry, company, and data being protected, so business leaders and IT professionals need to be in constant communication about how much effort is appropriate to keep company (and customer) information safe.