
Bringing Design to Software
© Addison-Wesley, 1996

Chapter 14

Design for People At Work

Sarah Kuhn

In the excitement over the seductive world of personal computing, it is easy to forget that the experience that most people have of computers in the workplace may not be liberating.... Most people who encounter computer-based automation at work do not choose the software with which they work, and have comparatively little control over when and how they do what they do. For them, the use of computers can be an oppressive experience, rather than a liberating one.




Introduction (by Terry Winograd)

Those of us who work in the computing profession--especially in design and development--see the computer as a wonderful resource. It is an opportunity for creativity, a tool for productivity, and a medium in which to create new worlds. But for much of the rest of the world, computerization is a derogatory term. People see computers threatening to change their lives in ways that are stifling, disempowering, and dehumanizing. This difference in perception often leads to suspicion and scorn on both sides: scorn of the "Luddites" who are stubbornly resisting progress, and scorn of the "technonerds" who love machines, but ignore human values.

Sarah Kuhn and her colleagues at the University of Massachusetts at Lowell address these problems in their studies of the implementation and use of computers in industrial settings. They collaborate extensively with the workers who encounter computer systems in their daily work, and who daily experience both satisfaction and frustration. In this chapter, Kuhn outlines an approach to systems design that takes into account the complex social and political factors that are at play in the development, deployment, and ultimate success or failure of the systems.

In the course of designing for the workplace, a software designer inevitably faces situations in which design choices are constrained by the conflicting goals and values held by the different parties who have a stake in the changes that new technologies will bring to the work. Workers and managers have many common interests, and they also have different stakes in how computers in the workplace change productivity, working conditions, and job satisfaction. In advocating an orientation toward human-centered design, using the techniques of participatory design (see Profile 14), Kuhn suggests how the software designer can serve the interests of workplace democracy by creating systems that give workers greater control over their work.




Design for People At Work

In the excitement over the seductive world of personal computing, it is easy to forget that the experience that most people have of computers in the workplace may not be liberating. The independent professional--who chooses her own software, experiments with new electronic tools, and sets her own schedule--lives in a different occupational world from that of the fast-food clerk, the bank teller, the machinist, or the airline customer-service representative. Most people who encounter computer-based automation at work do not choose the software with which they work, and have comparatively little control over when and how they do what they do. For them, the use of computers can be an oppressive experience, rather than a liberating one.

The effect of new computer software and systems on people at work depends on what assumptions about work are designed into the system, and on how well those assumptions correspond to the reality of what people do on the job. Well-designed systems can boost productivity, enhance job satisfaction, and give both workers and managers a clearer sense of what is going on in the organization. But a system that interferes with crucial work practices--either deliberately or through oversight--can result in reduced effectiveness and efficiency, reduced satisfaction and autonomy, and increased stress and health problems for the people who use the system.

 

Systems and Assumptions

First, let us look at some cases of systems that embodied assumptions about work that conflicted with the activities required to get the job done.

Case 1: The Trouble Ticketing System

The Trouble Ticketing System (TTS) was a mainframe-based system, developed in the early 1980s for use by telephone-company repair personnel, for scheduling, work routing, and record keeping. When trouble was reported, a job ticket was generated, and was sent to the appropriate telephone company office. There, a worker picked up the ticket and began work on the job. When the repair was completed, or when the worker had done all that she could do from that location, the ticket was sent back to the central TTS.

Before TTS, job tickets were generated, but work was more collaborative. Testers called one another, consulting with someone at the other end of a problematic line, or someone who knew a particular part of the system especially well, for help with troubleshooting. One of the motivations for the development of the TTS was to ensure that workers spent more time on repair tasks by eliminating conversations between workers, which were thought to be inefficient and "off task." With the TTS, each tester worked alone. If she could not complete a repair job, the tester recorded what had been done and sent the ticket back to the TTS for someone else to pick up and work on.
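To make the assumption concrete, here is a minimal sketch of such a ticket's lifecycle, written in Python with invented names (the actual TTS was a mainframe system and is not described at this level in the sources). The only things this toy model can represent are a single assigned tester and hours spent alone "on task"; consultation and training simply have no place in it.

from dataclasses import dataclass, field
from enum import Enum, auto

class Status(Enum):
    OPEN = auto()       # waiting in the central TTS queue
    ASSIGNED = auto()   # picked up by a single tester
    CLOSED = auto()     # repair completed

@dataclass
class TroubleTicket:
    ticket_id: str
    description: str
    status: Status = Status.OPEN
    hours_on_task: float = 0.0      # the only quantity the system counts
    history: list = field(default_factory=list)

    def assign(self, tester: str) -> None:
        self.status = Status.ASSIGNED
        self.history.append(("assigned", tester))

    def log_work(self, hours: float, note: str) -> None:
        # Only solitary "on task" time is recorded; consulting a colleague
        # or training a new hire has no representation here.
        self.hours_on_task += hours
        self.history.append(("worked", hours, note))

    def send_back(self, note: str) -> None:
        # The tester could not finish: the ticket returns to the queue for
        # someone else, severing the thread of the troubleshooting effort.
        self.status = Status.OPEN
        self.history.append(("returned", note))

    def close(self) -> None:
        self.status = Status.CLOSED

if __name__ == "__main__":
    t = TroubleTicket("T-0042", "noise on subscriber line")
    t.assign("tester_a")
    t.log_work(1.5, "ran line test from central office")
    t.send_back("needs test from the far end")
    t.assign("tester_b")          # a different tester, with no shared context
    t.log_work(2.0, "repeated the diagnosis, then replaced the pair")
    t.close()
    print(t.status, t.hours_on_task)

Even in this toy form, sending a ticket back discards whatever context the first tester had built up--precisely the segmentation into unconnected ticket-based tasks that Sachs describes below.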

The need for conversation was eliminated, but the benefits of conversation (more information available to diagnose problems, and the chance to learn more about the system) were lost too. Because TTS also monitored the number of hours that each worker spent doing jobs, testers who spent time consulting with one another or training new workers (neither of which was accounted for in the system) were penalized. As reported by Patricia Sachs (1995, p. 27):

While TTS was designed to make job performance more efficient, it has created the opposite effect: discouraging the training of new hands, breaking up the community of practice by eliminating troubleshooting conversations, and extending the time spent on a single job by segmenting coherent troubleshooting efforts into unconnected ticket-based tasks.

Case 2: Big Bank

At a large urban bank, teller operations were supported by a computer-based system, which was designed and modified over time by the bank's systems department. The system had to meet the bank's needs for accuracy, efficiency, security, and customer service, although, as this case illustrates, not all of these goals can be maximized simultaneously. The Big Bank system embodied in a rigid form the rules and procedures of the bank, which before automation would have been enforced by people--generally supervisors--who were relatively close to the action and who exercised professional judgment based on experience with the day-to-day work of bank tellers. One of the security features built into the system was that, "under a specified set of exceptional situations (defined by management at the bank through the setting of software parameter settings), the teller's terminal will freeze with a message about the account on the screen." (Salzman and Rosenthal, 1994, p. 94). The transaction could be completed only after a bank officer's authorization card was passed through a card reader on the teller's terminal. The purpose of the freeze was to ensure supervisory oversight in circumstances that were deemed exceptional, such as more than three transactions on a single account on a single day.
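In spirit, the freeze is a centrally set parameter applied uniformly to every branch. The sketch below (Python, with invented names; the bank's actual code is not described at this level in the source) shows how a single threshold--reasonable at a quiet branch--becomes a constant interruption at a busy one.

# A minimal sketch of the freeze rule; parameter and function names are invented.
MAX_DAILY_TRANSACTIONS_PER_ACCOUNT = 3   # set centrally by bank management

def may_proceed(account_id, daily_counts, officer_card_present=False):
    """Return True if the teller may complete the transaction.

    Returns False when the terminal would freeze and wait for a bank
    officer's authorization card to be passed through the card reader.
    """
    daily_counts[account_id] = daily_counts.get(account_id, 0) + 1
    if daily_counts[account_id] <= MAX_DAILY_TRANSACTIONS_PER_ACCOUNT:
        return True
    return officer_card_present

# At a busy downtown branch, the "exceptional" case fires constantly:
counts = {}
for _ in range(6):
    print(may_proceed("acct-1017", counts))
# Prints True, True, True, False, False, False -- each False is a frozen
# terminal and a wait for an officer, unless the card is simply shared.

At a branch where the "exception" recurs every few minutes, the only way to keep the line moving is the workaround the tellers and their manager actually adopted: in effect, making officer_card_present permanently true.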

It turned out, however, that at a large downtown branch of the bank, the exceptional happened every 5 or 10 minutes. To keep lines flowing and to avoid costly inefficiencies, the manager gave a bank officer's card to the tellers, which they passed among themselves to unfreeze their terminals. Only once or twice an hour did the tellers judge that a supervisor was needed--when a customer was unknown, or when an unusual situation arose. Of course, overcoming this one feature undermined all security provisions in the system, because the officer card was now freely available to the tellers. As Salzman and Rosenthal (p. 97) reported, "This implementation of the system increases the responsibility of the teller although, at the same time, the design of the system is reminding the teller that, formally, the bank management does not trust him or her to make even routine decisions. The result of the system's design is the worst of both worlds."

Case 3: HELP System

In a large machining area in a production plant, a major U.S. aircraft manufacturer installed a new computer-based system known as the HELP (Help Employees Locate People) system. The system had two principal functions. The first--which was the source of its official name--was to enable machinists to signal for assistance when they needed replacement tooling, wanted consultation, or were due for a break. At the push of a button, a machinist could indicate his location and the nature of the request. This aspect of the system got good reviews from both machinists and shop management, because it enabled operations to run more smoothly.

The second function of the system, not acknowledged in the official name, was the monitoring of 66 machines in the shop. A panel in a control room above the shop floor displayed the status of each machine with colored lights. A supervisor could check these lights, and could gain further information by glancing out at the floor below. Daily reports told supervisors not only about production levels, but also about how each worker spent his time. Upper management received weekly and monthly reports.

The purpose of the monitoring system was to gain greater managerial control over how machinists spent their time, as an aid to increasing productivity. The problem was that the information captured by the system and reported to the managers provided only a partial view of the work of machinists--a view so one-sided as to be nearly useless. The very concreteness of the statistics, however, invited unwarranted conclusions. For example, managers wanted to discourage machinists from slowing down production unnecessarily, so the system was designed to report all the time that a computer-controlled machine tool spent halted or operating at less than 80 percent of the programmed feed rate (the rate at which the cutting tool moves across the surface of the metal). What the system did not report, however, was whether the programmer who wrote the program for that particular part had set the feed rate correctly. It is common for feed rates to be set incorrectly, and one of the skills of the machinist is to judge whether conditions require slowing down or allow speeding up. Information about the appropriate feed rate is crucial to the proper interpretation of the data generated by the monitoring system.
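A back-of-the-envelope sketch makes the one-sidedness easy to see. Assuming (hypothetically) that the system simply summed the minutes a machine spent halted or below 80 percent of the programmed feed rate, the following Python fragment shows how a day dominated by careful setup work is indistinguishable, in the report, from a day of idleness; the numbers and names are invented for illustration.

PROGRAMMED_FEED_RATE = 100.0   # units per minute, as set by the parts programmer
REPORT_THRESHOLD = 0.80        # fraction of programmed rate below which time is flagged

def flagged_minutes(samples):
    """Sum the minutes the system would report as 'slowed or halted'.

    Each sample is (actual_feed_rate, minutes). The report says nothing
    about whether PROGRAMMED_FEED_RATE was set correctly in the first
    place, or why the machinist judged that slowing down was necessary.
    """
    return sum(minutes for rate, minutes in samples
               if rate < REPORT_THRESHOLD * PROGRAMMED_FEED_RATE)

# A day spent mostly on a painstaking setup looks entirely unproductive:
day = [(0.0, 300), (60.0, 60), (100.0, 120)]   # setup, cautious cut, full-rate cut
print(flagged_minutes(day))                     # 360 of 480 minutes flagged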

The data generated by the HELP system could mislead managers into thinking that any machine status other than "running" was an indication of unproductive activity. "One machinist...had to work long and furiously to set up a particularly intricate part to be cut. As a result, his machine sat idle most of the day. While he felt that he had never worked harder, his supervisor reprimanded him because the system reported that his machine was idle." (Shaiken, 1989, p. 295).

The sense of a hostile, intrusive presence in machinists' work life gave the system its unofficial shopfloor name: The Spy in the Sky. The system was tolerated by the machinists because of its benefits as a signaling device, and because their union was able to negotiate an agreement with the company that data from the system would not be used for disciplinary purposes. Still, the system could be used for informal discipline, and it served as a constant reminder of managers' mistrust of the workforce. Productive work relies on the skills of machinists, yet the monitoring system embodies the assumption that, if machines are running at less than 80 percent of the programmed rate, the fault lies with the machinist, rather than with the engineer who wrote the parts program.

 

Representations and Misrepresentations of Work

We have touched on many themes in these cases. One principal theme is that misunderstandings of the true context of work become embodied in computer-based systems, with negative consequences not only for the people who do the work, but also for the productivity and efficiency that the system is intended to enhance. The system, and the data collected by it, become themselves a representation of work. They define, at least within a limited context, what counts in the organization. At the telephone company, time "on task" counts, but training new workers to work more effectively does not. In the machine shop, time spent cutting metal counts, but time spent setting up, so that problems are avoided, does not. Of course, managerial discretion plays a significant role, too, but data from the computer carry the weight of seeming objectivity, and can be difficult to refute. (For interesting discussions of representations of work, see Suchman, 1995).

Sachs contrasts what she calls the organizational or explicit view of work with the activity-oriented or tacit view. An explicit view of work assumes that jobs are made up of a set of tasks or operations that could be defined, say, in a company handbook of methods and procedures. A tacit, or activity-oriented, view of work suggests that the range of activities, communication practices, relationships, and coordination that it takes to accomplish business functions is complex, and is continually mediated by workers and managers alike. An activity-oriented analysis of work centers on everyday work practices--on how employees actually make the business function effectively.

Sachs notes that all work contains both explicit and tacit elements, and argues that the efficiency of work is determined not so much by the logic and sequencing of taskflow as by the capabilities of people for troubleshooting vexing problems in complicated situations, which inevitably arise. She points out that most work-reorganization projects, such as workflow analysis and business-process reengineering, take only the explicit view, ignoring the tacit. The result is systems that ignore--or deliberately try to eliminate--tacit elements of work, sometimes making it more difficult (or even impossible) for workers to do their jobs effectively and efficiently. In these situations, workers often invent workarounds to bypass the problematic limitations imposed by the system and to allow them to do their jobs more effectively. In the case of Big Bank, tellers and managers colluded in the workaround involving the shared officer card. In the Trouble Ticketing System, testers started to contact coworkers informally, using the TTS only to provide a formal record of the work. Although these workarounds correct some of the problems with the systems, they are a symptom of inefficiencies that have been introduced unintentionally in the design process.

 

The High Cost of Bad Design

Computer-based systems that are poorly suited to how people actually work impose costs not only on the organization (in terms of low productivity) but also on the people who work with them. Studies of work in computer-intensive workplaces have pointed to a host of serious problems that can be caused by job design that is insensitive to the nature of the work being performed, or to the needs of human beings in an automated workplace.

 

Even those systems that avoid causing inefficiencies and workarounds still impose a heavy cost on the people who must perform stressful, routinized, exacting work. Health effects include stress, boredom, headaches, repetitive-strain injuries, neck and back strain, and other serious problems. Studies have confirmed the connection between job design and health outcomes; in particular, workers who face high job demands and who have low job control (inadequate power, authority, and autonomy to meet the demands their jobs place on them) are at significantly greater risk for serious health problems, such as cardiovascular disease (Karasek and Theorell, 1990).

 

Work-Oriented Approaches to System Design

In Chapter 13, Laura De Young describes how Intuit has thrived due to its strong focus on learning what customers want, on observing the customer's context of use, and on anticipating new features even before customers themselves have identified new needs. What happens if we try to translate Intuit's approach into a corporate setting? Two differences immediately become apparent. First, software in the workplace is situated in a context that is far more complex than that of software in the home--even in the home office. Second, in contrast to Intuit's customer base of individual users purchasing for home use, in corporate settings, the purchaser and the user are generally not the same. Whom is the developer trying to satisfy? To the extent that the end user and the person making the purchasing decision have different needs and objectives, whose should prevail? The traditional answer is that, to the extent that the customer's needs or objectives are considered at all, priority goes to the explicit needs and objectives of managers. This holds true both for packaged software and for custom or tailored systems. If end users are considered, it is usually only to test whether they are able to use the system as intended. The following two sections describe two alternative approaches, focused on the end user and the work context: human-centered design and participatory design.

Human-centered design

Martin Corbett contrasts the usual approach to the design of manufacturing technology, which he calls the technology-centered approach, with a human-centered approach to design. The technology-centered approach is characterized by hard-systems thinking:

Hard systems thinking involves the imposition of a clear-cut problem definition on a relatively unstable organizational reality and a "fuzzy" system. It also means the adoption of linear, top-down design procedures that handicap design in a very complex organizational reality. The overriding concern in a hard design approach is technical design; little attention is accorded either the organizational context in which the system is to operate or the social implications of the system. The technology-centered approach leaves the engineering and computer professionals to decide the extent to which user participation is useful and permissible. (Corbett, 1992, pp. 140-141)

By contrast, human-centered design (which is discussed from a different perspective by Denning and Dargan in Chapter 6) puts human, social, and organizational considerations on at least an equal footing with technical considerations in the design process, seeing operators (end users) as central to an effective manufacturing system. Well-designed technology should make use of human strengths--such as skill, judgment, and capacity for learning--to create a robust and flexible production system, rather than seek to minimize and strictly control human intervention.

Participatory design

Advocates of participatory design (see Profile 14) emphasize the importance of meaningful end-user participation and influence in all phases of the design process. This approach was developed initially in northern Europe, among academics and practitioners concerned with the design of computer-based systems. In recent years it has spread among North American developers and researchers. Although projects differ in their precise definitions of what constitutes participation, this formulation of the basic requirements is typical: "The employees must have access to relevant information; they must have the possibility for taking an independent position on the problems; and they must in some way participate in the process of decision making." (Kensing, quoted in Clement and Van den Besselaar, 1993.)

In Sweden, beginning in the 1970s, the collective resource approach to participatory design stressed equality and collaboration between designers and users in the design of systems (see Ehn, 1992). One of the important motivations for this approach was a commitment to what the researchers and designers called industrial democracy. Concerned about the consequences that computer systems were having for job design and working conditions, they consciously sought to design systems that would help workers to retain (or regain) control over the planning, methods, and pacing of work. They did so in the context of asserting the broader rights of workers to have a voice in the design of technology, the control of company resources, and other decisions affecting the workplace.

In emphasizing industrial democracy, the proponents of the collective-resource approach were responding to what they saw as the failed promise of the sociotechnical systems approach, then highly influential in job design and industrial-democracy initiatives (see, e.g., Pava, 1983). Sociotechnical projects often lost their grounding in industrial democracy when conflicting interests between labor and management led to the adoption of management priorities over union priorities. Advocates of the collective-resource approach took a deliberately worker-centered, pro-union approach to technology design. They worked with trade unions in Scandinavia in projects focused on allowing the unions and their members to influence the design and implementation of the computer-based systems that were then being introduced. The UTOPIA project, for example, used mockups and prototypes, such as the one shown in Figure 14.1, to facilitate communication between designers and members of the typographers' union, with whom they worked on new designs for newspaper-layout systems (Ehn and Kyng, 1991).


Figure 14.1 Prototyping UTOPIA. Mockups, such as the one depicted here, were used as a medium for communication between computer-system designers and members of the typographers' union with whom they worked. Prototypes--even rough ones--can play a key role when designers cross boundaries between groups that have different domains of expertise (see Chapter 10). (Source: Reprinted by permission from Joan Greenbaum and Morten Kyng. Design at Work. Hillsdale, NJ: Erlbaum, 1991, 180.)


The Designer and Democracy

Designers of workplace computer systems often experience a tension between designing for efficiency and designing to support the autonomy and skill of workers.

First, let us look at the easy cases of inadequate attention to the work setting--those in which the tension is absent. In these cases, the designer's failure to understand the context and the tacit aspects of work leads to system features that make work less, rather than more, efficient and effective. The cases described in the opening section of this chapter fall into this category. In Salzman and Rosenthal's words, these systems achieve "the worst of both worlds." Despite the challenges in diagnosing and remedying problems of this sort, fixing them is a win-win proposition: productivity is increased and the true demands of getting the job done are honored, making work more effective and more satisfying than before the fix.

The hard design problems, from this perspective, are the ones in which negative effects for people at work are side effects of a deliberate business philosophy or business objective. In these cases, designers experience a tension between promoting the autonomy of system users on the one hand and fulfilling organizational objectives on the other. Charles Richardson illustrates this distinction in his discussion of repetitive-strain injuries (RSIs):

There are four basic risk factors for repetitive strain injuries: force, awkward posture, repetition, and lack of recovery time. Force and awkward posture are mistakes. They could be called accidents of design. Somebody forgot to think about the person that's doing the job. The current focus on the redesign of work stations according to "ergonomic principles" is aimed at fixing some of the force and awkward posture issues. But repetition and no rest are different matters. Repetition and no rest are in fact design goals of a technology design, of workplace design. (Richardson, 1996)

The fragmentation and routinization of work--leading to repetitive, mind-numbing jobs--is a reflection of how an organization has chosen to respond to economic competition: by creating low-wage, low-skill, replaceable jobs. Furthermore, competitive pressure often leads companies to increase production not by becoming more efficient (producing more without increasing the work required), but rather by intensifying work (shortening or eliminating breaks, increasing production quotas, enforcing a faster pace, and requiring longer hours of work). The negative consequences for employee health and autonomy are a necessary, although not deliberate, byproduct of such a competitive strategy.

Management theorists have begun to question these strategies, suggesting that approaches that decentralize decision making and that push it to the lowest levels of the organization are more effective and adaptive in the long term. Total quality management, quality circles, employee involvement, Deming's quality methodologies, and other approaches advocate that managers seek the active and informed participation of the workforce. Design theorists such as Brown and Duguid (1992) describe how to design for learning and innovation, rather than striving to create idiotproof systems that stifle worker initiative.

Even in relatively enlightened organizations, the demands of the organization can still be in conflict with the principles of workplace democracy. There can be situations in which designers are faced with a choice between meeting management's stated objectives and enhancing workers' ability to plan and control their work.

In parts of Europe, the principle of codetermination (unions and management making joint decisions on matters that affect the company and the workforce) is well established. Laws ensure consultation with workers and limit negative effects of technology on work. Designers have some legal and cultural basis for making decisions that favor worker autonomy and control, even when doing so conflicts with management directives. In the United States, designers find themselves very much alone, with few places to turn, when confronted by these issues. Although this situation seems unlikely to change drastically in the near future for U.S. designers, there are three relevant precedents that may be helpful.

 

 

Suggested Readings

Paul Adler and Terry Winograd (eds.). Usability: Turning Technologies into Tools. New York: Oxford University Press, 1992.

Pelle Ehn. Work-Oriented Design of Computer Artifacts. Stockholm: Arbetslivscentrum, 1988. (Distributed by Lawrence Erlbaum Associates, Hillsdale, NJ.)

Barbara Garson. The Electronic Sweatshop: How Computers Are Transforming the Office of the Future into the Factory of the Past. New York: Penguin, 1989.

Harold Salzman and Stephen R. Rosenthal. Software By Design: Shaping Technology and the Workplace. New York: Oxford University Press, 1994.

Lucy Suchman (ed.). Special Issue on Representations of Work. Communications of the ACM 38:9 (September 1995).

 

About the Author

Sarah Kuhn is Assistant Professor in the Department of Policy and Planning at the University of Massachusetts at Lowell. In addition to her work on social aspects of workplace computer systems, she has developed courses on software design, including a studio-based course in collaboration with Mitchell Kapor and William Mitchell at MIT.