Baccarini 2004: Management of Risks in Information Technology Projects

Citation

Baccarini, D., Salm, G., & Love, P. E. D. (2004). Management of risks in information technology projects. Industrial Management & Data Systems, 104(4), 286–295. https://doi.org/10.1108/02635570410530702

Notes

This article has a good half-page summary of risk and risk management on page 287.

Risk management is an essential practice in achieving the successful delivery of IT projects (Tuman, 1993; Remenyi, 1999). More specifically, it consists of the following processes (Standards Australia, 1999):

  1. establish the context;
  2. identify risks;
  3. analyse risks;
  4. evaluate risks;
  5. treat risks;
  6. monitor and review; and
  7. communicate and consult.

The treatment of risk involves the determination of the most appropriate strategies for dealing with its occurrence (Standards Australia, 1999). According to Zhi (1994), there are four main strategies for responding to project risks:

  1. Avoidance – not undertaking the activity that gives rise to risk.
  2. Reduction – reduce the probability of a risk event occurring, and/or the impact of that event. Risk reduction is the most common of all risk-handling strategies (Pritchard, 1997).
  3. Transfer – transfer of risk in whole or part to another party.
  4. Retention – accept risk and therefore the consequences should it eventuate.
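
To keep these four strategies straight while reading the risk items below, here is a minimal Python sketch (my own, not from the paper) that encodes Zhi’s categories as an enum; the example treatments attached to each strategy are hypothetical illustrations rather than examples given by Baccarini et al.

```python
from enum import Enum


class RiskResponse(Enum):
    """Zhi's (1994) four strategies for responding to project risk."""
    AVOIDANCE = "avoidance"   # do not undertake the activity that gives rise to risk
    REDUCTION = "reduction"   # reduce the probability and/or impact of the risk event
    TRANSFER = "transfer"     # transfer the risk, in whole or part, to another party
    RETENTION = "retention"   # accept the risk and its consequences


# Hypothetical IT-project illustrations of each strategy (my examples, not the paper's):
illustrations = {
    RiskResponse.AVOIDANCE: "Drop a speculative feature that depends on an unproven framework",
    RiskResponse.REDUCTION: "Add load testing to lower the chance of poor production performance",
    RiskResponse.TRANSFER: "Host with a vendor under an SLA so they bear the availability risk",
    RiskResponse.RETENTION: "Accept minor UI defects at launch and budget time for patches",
}

for strategy, example in illustrations.items():
    print(f"{strategy.name}: {example}")
```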

McFarlan (1981) suggested that projects fail due to a lack of attention to individual project risks, to the aggregate risk of a portfolio of projects, and to the recognition that different types of projects require different types of management. Yet IT risk management is either not undertaken at all or is performed very poorly by many, if not most, organisations (Remenyi, 1999). A reason for this is that focusing on potential problems may be viewed as negative. However, management often wants to instil a positive attitude towards the implementation of IT, as it is often viewed as a “flagship” for change and subsequent process improvement within organisations.

A definition of risk:

Risk in projects can be defined as the chance of an event occurring that is likely to have a negative impact on project objectives and is measured in terms of likelihood and consequence (Wideman, 1992; Carter et al., 1993; Chapman, 1998).
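
Since risk here is measured in terms of likelihood and consequence, one common way to operationalise the definition is a likelihood × consequence rating. The sketch below is my own; the 1–5 scales and the low/medium/high bands are illustrative assumptions, since the paper does not prescribe how the two dimensions should be combined.

```python
def risk_rating(likelihood: int, consequence: int) -> tuple[int, str]:
    """Rate a risk as likelihood x consequence on assumed 1-5 scales.

    The scales and the qualitative bands are illustrative assumptions; the
    source only states that risk is measured in terms of likelihood and
    consequence, not how to combine them.
    """
    if not (1 <= likelihood <= 5 and 1 <= consequence <= 5):
        raise ValueError("likelihood and consequence must each be between 1 and 5")
    score = likelihood * consequence
    if score >= 15:
        band = "high"
    elif score >= 6:
        band = "medium"
    else:
        band = "low"
    return score, band


# Example: "incomplete requirements" judged likely (4) with major consequences (4)
print(risk_rating(4, 4))  # -> (16, 'high')
```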

Baccarini et al. identify a few IT risks that overlap with my studies:

(3) Human behaviour:

  • Poor quality of staff. Standard of work is poor owing to lack of ability, training, motivation and experience of staff (Cooper, 1993; Yoon et al., 1994). This lack of experience can extend to hardware, operating systems, database management systems, and other software (Fuerst and Cheney, 1982; Nelson and Cheney, 1987).
This item mentions “motivation”, which may be identity-based, but it focuses on skills and experience.

(5) Technology and technical issues:

  • Application software not fit for purpose. There can be a perception among users that the software provided does not directly help them with completing day-to-day tasks. This can lead to low user satisfaction (Baronas and Louis, 1988).
  • Poor production system performance. The selected software architecture/platform does not meet the purpose for which it was intended, resulting in a system being released into production which is excessively slow or has major operational problems (Jones, 1993; Glass, 1998).
  • Incomplete requirements. Insufficient information has been obtained in the analysis phase, resulting in construction of a solution that does not meet project objectives (Shand, 1993; Engming and Hsieh, 1994).

The first two items seem to deal with some combination of the spec and the quality of the engineering (did they spec out the wrong product? did they include the wrong performance requirements?). The last item is specifically about the spec.

(6) Management activities and controls:

  • Continuous changes to requirements by client. Stakeholders (includes users) continuously make changes to software functionality throughout the project life-cycle (Jones, 1993; King, 1994; Clancy, 1994).
  • Lack of agreed-to user acceptance testing and signoff criteria. The project close-out can be delayed owing to an unclear understanding of what constitutes sign-off and final solution delivery (Boehm, 1989).
  • Developing wrong software functionality. Design and construction of software may not meet the purpose for which it is intended (Boehm, 1989).

The first two seem to focus squarely on reasons for feature creep, though the first attributes the changes to the client. The generic term “stakeholders” can certainly include developers, but that does not seem to be the authors’ intent in context. The reasons for asking for changes aren’t explored here.

The second item speaks to a quote a gameworker gave me: “The biggest problem in digital games is knowing when the game is fun enough to ship.” Development will continue until the software is fun enough to release, however that might be measured.

The third seems to overlap with items in (5). How is developing wrong software functionality different from incomplete requirements? If your spec is insufficient, you’re not going to develop the correct functionality. These definitions seem to need more detail.

(7) Individual activities:

  • Gold plating (over specification). The team is focussed on analysing and generating excessive levels of detail, losing sight of the project’s objectives (Boehm, 1989; Turner, 1999; Cunningham, 1999).
  • Unrealistic expectations (salesperson oversells product). Items promised for delivery to individuals by the vendor may be oversold and unrealistic (Maish, 1979; Ginzberg, 1981; Thomsett, 1995).
The gold plating item does not offer a reason for the risk. Why are they behaving this way? (Also, why is an individual activity attributed to the team?)

As with other items, I’m unconvinced that unrealistic expectations come only from sales. Why can’t actions by other members of the project team cause a product to be oversold and unrealistic?
