Jones 1994: Assessment and Control of Software Risks

Citation

Jones, C. (1994). Assessment and control of software risks. Englewood Cliffs, NJ: Yourdon Press.

Notes

This is older, but I’ve seen Jones referenced for software risk. The book tries to be a comprehensive look at risk in MIS software development (for the period). Jones (1994) fashioned his assessment of software risk after a medical text on the diagnosis of communicable diseases, encouraging thorough assessment of problems within the software development process before attempting prescribed remedies. The sixty software development risks identified as most common–the “result of assessing projects in several hundred enterprises” (p. 27)–are derived, ultimately, from practitioner evaluations of their own software processes.

 

These risks are noted to vary in importance and impact across “six generic software classes”:

For convenience in analyzing risk, consider the patterns of six generic software classes.

  1. Management information systems (MIS) projects such as accounting systems, claims handling systems, and the like.
  2. System software projects such as operating systems, telecommunication systems, or other software applications which control physical devices.
  3. Commercially marketed software projects such as spreadsheets, word processors, or CAD packages that are leased or sold to end users.
  4. Military software projects, which can include all those constrained to follow US military standards such as DOD-STD-2167A.
  5. Contract or outsourced software projects in the civilian domain, where the bulk of the software is produced by contract personnel, as opposed to employees of the client organization.
  6. End-user software projects, where the software is developed by the intended user rather than by professional programming staff.

For each class, risks are limited to the top five factors, ranked by the percentage of projects in that domain at risk for each factor. It is worth noting that these are not mutually exclusive categories (outsourcing happens throughout the other five, for instance) and each category shares many factors with the others. Also, while commercial software is mentioned, the examples all pertain to productivity software and not entertainment software. The factors are presented in alphabetical order, without further classification.

The “several hundred” enterprises included in the identification of these risks probably did not include any form of entertainment or highly commercialized IT production. While I don’t think Jones intended to include game development in these classes, games fit into many of them. Entertainment software can include functions often attributed to MIS (accounting systems and contact management, for example), so class 1 is at least a weak fit. Some entertainment software involves platform development (creating or modifying game engines), so games could be part of class 2. They are commercially marketed and leased/sold to end users, so class 3 might apply. While they can be produced for the military (the military has training games, and has produced promotional games for recruitment), I don’t think they’re a good fit for class 4. A good deal of game development involves third-party outsourced coding, so games fit in class 5. Some are end-user projects, but I’m putting class 6 out of my scope (and I’m surprised Jones decided to include software produced by non-professionals; are they part of his intended audience?). It is interesting that games are often developed by project teams who consider themselves end users, but I think non-professional is an important part of Jones’s intent with class 6.

Jones indicates that feature creep is a top concern in classes 1, 4, and 5, and that it is the most common software risk across all six classes.

Chapter 3 also discusses feature creep in the context of the most serious risks, again using the six classes of software. An extended excerpt:

First in avoiding creeping user requirements are end-user software applications. When users develop their own software, the requirements are purely up to them to implement. Also, end-user applications are usually fairly quick in development: a month would be a long schedule. Since the rate of creeping requirements is proportional to the schedule, a one-month schedule might, at worst, trigger a 1% increment in functions.

Second are the MIS developers. Although creeping user requirements are a chronic problem for MIS projects, a suite of powerful technologies is available that can minimize this problem for typical MIS applications. Joint Application Design (JAD) and prototyping, for example, can reduce the severity and volume of unplanned user requirements. Quality Function Deployment (QFD) is also effective.

Third are contract software houses and outsourcing shops. These groups have a brutal and straightforward approach to reducing unplanned requirements: they charge for them.

Fourth are commercial software houses. These shops are at a disadvantage, because new requirements tend to come in from the companies’ own executives or from their own sales and marketing groups, rather than from actual customers. Also, with commercial software such as spreadsheets or word processors, there are thousands of users. Therefore approaches such as JAD or QFD, which assume a small number of users, are difficult to implement.

Fifth are military and defense contractors. If the contracts are on a time and materials basis, the contractors eagerly welcome new requirements. If the contracts are on a fixed price basis, the enthusiasm for new requirements is greatly diminished.

[…]

Systems software is sixth. Systems software projects seldom use approaches such as JAD, because they don’t have the right mix of users. Their development schedules are long, so normal changes in business and technology can surface requirements that were not envisioned when the project began.

What stands out in that extended excerpt is the note regarding commercial software houses. Jones says they’re at a disadvantage because changes to specs tend to come from internal sources rather than end users, and that “thousands” of users make user-focused techniques hard to implement. Current commercial software products sell millions of copies to millions of users. Also, while the internal sources are listed as executives and sales or marketing staff, it’s not much of a stretch to include project team members.
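Also worth pinning down is the arithmetic behind the end-user paragraph: if creeping requirements are roughly proportional to schedule length, then a per-month rate compounds over longer projects. Here is a minimal sketch of that compounding, assuming a flat 1% monthly rate; Jones only gives the one-month, 1% worst case, so the baseline size and the longer schedules are my own illustration:

```python
# Illustration only: compounding a fixed monthly creep rate over a schedule.
# The 1%/month rate is Jones's end-user worst case; the 1,000 FP baseline
# and the longer schedules are assumed for the sake of the example.

def projected_size(baseline_fp: float, monthly_rate: float, months: int) -> float:
    """Requirements size after `months` of compounded creep."""
    return baseline_fp * (1 + monthly_rate) ** months

baseline = 1000.0   # function points in the agreed spec (assumed)
rate = 0.01         # 1% creep per month (worst case for end-user apps)

for months in (1, 12, 36):
    size = projected_size(baseline, rate, months)
    print(f"{months:>2} months: {size:7.1f} FP ({(size - baseline) / baseline:.1%} growth)")
# ~1% after one month, ~12.7% after a year, ~43% after three years.
```

Whether creep compounds or accumulates linearly is itself an assumption, but either way the exposure scales with schedule length, which is exactly why the long-schedule classes (military, systems software) sit toward the bottom of Jones’s ranking.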

Chapter 9 is entirely dedicated to a description of Creeping User Requirements. It includes the following:

Definition—A) New requirements or significant modifications to existing requirements that are made after the basic set of requirements have been agreed to by both clients and developers; B) Widespread failure to anticipate changing requirements and hence make no plans for how to deal with them.

Formal definitions are great. This will allow me to compare other papers regarding feature creep and see if other kinds of risk fall within this rather broad definition.

Root causes—The root causes of creeping requirements include the following: 1) Each time new users commission a new project, there will be some uncertainty in resolving true needs; 2) For large projects that take several years, normal business or even statutory changes may occur that must be part of the application; 3) The use of effective preventive technologies such as prototypes or JAD sessions may not occur due to corporate culture or the nature of the application itself; 4) The fundamental technologies for exploring and modeling requirements are fairly primitive; 5) Software measurement technology prior to 1979 had no effective way of measuring creeping requirements.

It’s not attempting to be comprehensive (“include the following”) but manages to get many of the basics. It seems like some academic sources would call each of the root causes a different risk factor.

Some notes on each identified cause:

  1. I’m not sure what the emphasis on “new users” is for. Each project is unique, or it wouldn’t be a project. Each project, regardless of the experience of the users, will come with some uncertainty. “Users” is interesting as a source of projects, regardless. This apparently assumes a software development environment where potential clients (users) approach a development team with a project idea. With consumer IT, end users would seem rare as a source of projects. This also seems to assume that “true needs” should be knowable by “users.” How does this apply to cultural goods, like entertainment software? The concept of “users” would need to be strictly defined in order to understand how this applies to the domain of digital games.
  2. I think the opening phrase is unnecessary. Projects don’t need to be large or long for business or statutory changes to disrupt them. Such changes may be more likely the larger and longer the project, but they are not exclusive to large projects. In my prior research, producers have told me that changing game markets have been a source of risk: platform owners can change platforms, even releasing new generations of hardware. Regulation is also a risk, especially since games are an internationally distributed good and multiple regulatory environments can apply.
  3. I read “the application itself” as the software under development by the project team. Breaking the statement apart: 3a: “The use of effective preventive technologies … may not occur due to corporate culture;” or 3b: “The use of effective preventive technologies … may not occur due to the nature of the application itself.” The former seems to assert that the use of preventive technologies may be antithetical to the culture of the project team or its host organization. The latter, calling out “the nature of the application itself,” is vague enough to possibly include “game” as the application’s nature, but certainly wasn’t intended that way in this source in 1994. Do games, or certain types of games, innately resist the use of otherwise effective technologies that prevent feature creep? One of the example technologies is Joint Application Design (JAD), which seems more a technique than a technology. JAD “is a process used in the life cycle area of the dynamic systems development method (DSDM) to collect business requirements while developing new information systems for a company…. The JAD process also includes approaches for enhancing user participation, expediting development, and improving the quality of specifications” (Wikipedia). This doesn’t seem like a “technology” as much as a method (in the IS context), but we’ll consider it as intended. While game companies often use playtesters, alpha and beta releases, and even open alpha and beta releases, we should probably also consider that the team members may consider themselves users. JAD as a formal process is probably rare in games (I’ve never heard of it being used), but there may be some related techniques that would qualify here. My big question: Does interaction with the user make a better spec, or cause more drift later? The Wikipedia article says that JAD’s effectiveness isn’t well tested even today.
  4. This may be anachronistic. I’m not sure how Jones would evaluate today’s requirements modeling and exploration technologies, nor what criteria he would use to judge whether these technologies are “primitive.”
  5. I have no familiarity with “software measurement technology” either within or outside the gaming domain. I do know other sources consider actual documented changes to the spec, including formal change orders, when they consider feature creep and deviation from the original (or a revised) spec; a rough sketch of that kind of measurement follows this list.
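Since I’ll want to compare this with sources that do measure creep against a baselined spec, here is a rough sketch of what creep-as-documented-change could look like. The data model, the function-point sizing, and the numbers are my assumptions for illustration, not anything from Jones:

```python
# Sketch: estimating requirements creep from documented change orders,
# in the spirit of sources that track deviation from a baselined spec.
# Field names, FP sizing, and example numbers are assumed for illustration.
from dataclasses import dataclass

@dataclass
class ChangeOrder:
    description: str
    added_fp: float       # size of the added/changed functionality, in function points
    post_baseline: bool   # approved after the spec was agreed ("creep" per definition A)

def creep_percentage(baseline_fp: float, orders: list[ChangeOrder]) -> float:
    """Growth over the baselined spec attributable to post-baseline change orders."""
    creep = sum(o.added_fp for o in orders if o.post_baseline)
    return creep / baseline_fp * 100

orders = [
    ChangeOrder("pre-baseline scope adjustment", 40, post_baseline=False),
    ChangeOrder("new save-game format", 25, post_baseline=True),
    ChangeOrder("extra platform certification checks", 15, post_baseline=True),
]
print(f"Creep: {creep_percentage(500, orders):.1f}% of baseline")  # -> Creep: 8.0% of baseline
```

One caveat: this only captures documented changes, so it measures Jones’s definition A and says nothing about definition B (the failure to plan for change in the first place).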

So, what’s in all of this that might be attributed to the identities of the team members? What’s missing in that vein? What’s applicable to my research?

  1. On its face, this is out of scope. This is really about the expertise of users in defining projects. Even if we redefine “user” as “project owner,” it’s about experience and not identification. If we consider that identification with the product is because team members consider themselves end users (players), this may have some small relevance to my work, but it seems a stretch.
  2. Size and length of project are out of scope.
  3. 3b is about the nature of the product, not the relationship between developer and product. It’s also out of scope. If the cause of the creep is the workers’ identities regarding the tools used to track feature creep (number 3a), that’s out of scope. My work is currently aimed at identification with the product, not the tools. If there are techniques that consider user input (like JAD) that are in practice, my work can see if team members consider themselves users in this context, and how that affects spec origination and drift over time.
  4. Tools are out of scope.
  5. Tools are still out of scope.

Summarizing those causes: There’s nothing here about how the team’s attitudes, opinions, or attachments to the product may affect the tendency to change the spec, except possibly 3a, if I stretch the meaning a bit. Also, I’d expect user-centered techniques to help improve early specs but cause drift later, as users experience what they’ve co-designed and request or demand changes.
