Advances in technology and new levels of automation have had many effects in operational settings. There have been positive effects from both an economic and a safety point of view. Unfortunately, operational experience, field research, simulation studies, incidents, and occasionally accidents have shown that new and surprising problems have arisen as well. Breakdowns that involve the interaction of operators and computer-based automated systems are a notable and dreadful path to failure in these complex work environments. Over the years, Human Factors investigators have studied many of the “natural experiments” in human-automation cooperation – observing the consequences in cases where an organization or industry shifted levels and kinds of automation.
One notable example has been the many studies of the consequences of new levels and types of automation on the flight deck in commercial transport aircraft (from Wiener & Curry, 1980 to Billings, 1996). These studies have traced how episodes of technology change have produced surprising effects on many aspects of the systems in question.
New settings are headed into the same terrain (e.g. free flight in air traffic management, unmanned aerial vehicles, aero-medical evacuation, naval operations, space mission control centers, medication use in hospitals). What can we offer to jump start these cases of organizational and technological change from more than 30 years of investigations on human-automation cooperation (from supervisory control studies in the 1970s to intelligent software agents in the 1990s)?
Ironically, despite the numerous past studies and attempts to synthesize the research, a variety of myths, misperceptions, and debates continue.
Furthermore, some stakeholders, aghast at the apparent implications of the research on human-automation problems, contest interpretations of the results and demand even more studies to replicate the sources of the problems.

Escaping from Attributions of Human Error versus Over-Automation

Generally, reactions to evidence of problems in human-automation cooperation have taken one of two directions (cf. Norman, 1990). There are those who argue that these failures are due to inherent human limitations and that with just a little more automation we can eliminate the “human error problem” (e.g. “clear misuse of automation . . . contributed to crashes of trouble free aircraft”, La Burthe, 1997). Others argue that our reach has exceeded our grasp – that the problem is over-automation and that the proper response is to revert to lesser degrees of automated control. Often this position is attributed to researchers by stakeholders who misunderstand the research results (e.g. “. . . statements made by . . . Human Factors specialists against automation ‘per se’ ”, La Burthe, 1997). We seem to be locked into a mindset of thinking that technology and people are independent components – either this electronic box failed or that human box failed.
This opposition is a profound misunderstanding of the factors that influence human performance (hence, the commentator’s quip quoted in the epigraph). The primary lesson from careful analysis of incidents and disasters in a large number of industries is that many accidents represent a breakdown in coordination between people and technology (Woods & Sarter, 2000). People cannot be thought about separately from the technological devices that are supposed to assist them.