



Monitoring Service Workers via Computer: The Effect on Employees, Productivity, and Service

REBECCA GRANT AND CHRISTOPHER HIGGINS

Considerable controversy surrounds the use of Computerized Performance Monitoring and Control Systems (CPMCSs). The systems have triggered vocal opposition from trade unions and newspaper articles likening them to “Big Brother” in the workplace, reflective of “electronic sweatshop” work environments. A number of employees we interviewed believe that their employers use monitoring because “they don’t trust us to work unless the computer’s counting.” Furthermore, the systems have been linked to increased stress, health risks, and job dissatisfaction among monitored employees. Overall, opponents argue, CPMCSs undermine customer service, teamwork, and the quality of work life.

Yet, the use of these systems is increasing. In 1984, the National Association of Working Women reported that some 20 percent of clerical employees responding to a mass survey were monitored by computer. In 1987, the U.S. Congress Office of Technology Assessment reported that monitoring affected 25 to 35 percent of all clerical workers in the United States. Furthermore, the application of monitors has expanded to include technical and professional employees, such as stock brokers, pharmacists, and nurses. Even many taxicab drivers now find themselves being monitored by electronic systems mounted in their vehicles.

Supporters argue that CPMCSs improve the consistency, clarity, and objectivity of performance measurement, an improvement over stressful, subjective evaluations performed by human supervisors. Monitors can make data about performance available more quickly and more frequently, increasing employee awareness of personal productivity. The systems may also be better at providing negative feedback in a nonthreatening manner.

Rebecca Grant, Ph.D., is an assistant professor of information systems at the University of Cincinnati College of Business Administration in Cincinnati, OH. Christopher Higgins, Ph.D., is an associate professor of management information systems at the University of Western Ontario School of Business Administration in London, Ontario.



Despite growing adoption of monitors, few businesses can really anticipate their potential effects or effectiveness. There have been few controlled studies of monitoring that could help companies predict possible effects, and the work that has been done has produced contradictory findings: Some studies suggest favorable effects on productivity while others indicate negative results. Many companies install a monitor when they introduce a new computer application system or improve an existing one. But this makes it difficult to separate the results of monitoring from those of changes in the work process.

Our research studied how CPMCSs affect service workers’ attitudes toward production and customer service. Among our main findings:

- Using monitors does not automatically improve attention to productivity, nor does it necessarily reduce attention to customer service.
- Monitors do not replace or improve upon human supervision.
- Fewer than one-third of service employees were actually opposed to computer monitoring.

Most important, we found that the impact of a monitor will vary, depending on the way it is designed and used. Therefore, before we can evaluate the effects of CPMCSs on employees, we need to review the nature of computer monitors and the critical dimensions of their design.

MONITORS AND THEIR DESIGN

There is no widely accepted definition of computerized performance monitoring. Terms like “worker monitoring,” “electronic surveillance,” “performance monitoring,” and “worker surveillance” have all been used to describe various systems. According to the Office of Technology Assessment, there are three broad categories of monitoring systems:

- those that focus on performance, such as measuring keystrokes, use of computer time, or content of telephone conversations;
- those that focus on behaviors, such as measuring use of resources, tracking worker location via identification badges, and testing predisposition to error (e.g., drug testing); and
- those that focus on employee characteristics, such as state of health and truthfulness.

While systems in the second and third categories often incorporate computers or computer software, they also rely heavily on other methods to accomplish their objective, such as videotaping, voice recording, and chemical testing. Our research concentrated on systems in the first category, which depend on computers to monitor performance.

Monitors as Computer Systems

Many discussions of monitors treat the systems as if they had human capabilities or motives. Articles with such titles as “The Boss That Never Blinks” in Time (July 28, 1986) and “Big Brother Is Watching You Work” in The Toronto Star (October 7, 1985) are typical. Although describing or analyzing monitors in human terms may make them easier to understand, it often misrepresents the actual features of CPMCSs. Thus, it is important to understand three characteristics central to monitor design and effects.

First, a monitor is nothing more or less than a set of computer programs and a sensor that detects the performance to be measured. It will handle activities according to its programs, regardless of the source or frequency of those activities. The monitor is oblivious to an employee’s personality or attitudes about work. It does not favor one employee over another or discriminate against particular employees, unless someone expressly programs it to do so.

Second, CPMCSs are capable of executing a variety of tasks. Some systems merely collect statistics about performance. Others evaluate those statistics, while still others actually direct work to employees. Any work activity that can be captured in terms a computer can interpret can be monitored. There is no single design that represents a typical CPMCS.

Third, CPMCSs address computer-mediated activity. This is work done directly on a computer, or work that produces information that is then used by a computer. For example, an insurance claims processor may work with an interactive system to calculate and pay a dental claim. A retail employee, on the other hand, may do personal selling and register sales on a point-of-sale terminal. In the first case, the computer is involved in the entire activity of paying claims. This means that the complete payment process could be monitored. The retail salesperson, however, only reports the result of activities (items and amounts sold) to an automated system. This means that less of the retail employee’s work is visible to a monitor. CPMCSs can only deal with activities that interact with an automated system. Other activities are beyond their scope.
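The sketch below makes these three characteristics concrete. It is a hypothetical, minimal illustration in Python; the article describes no particular implementation, and all names here are invented. The monitor is simply a program with a “sensor” routine that records whatever computer-mediated events reach it, so a fully computer-mediated job exposes every step to measurement while a partly mediated job exposes only the results that get keyed in.

```python
# Hypothetical, minimal sketch of a performance monitor (not from the article).
# It can only count what reaches the computer: the claims processor's whole task
# is computer-mediated, while the retail clerk's selling effort never appears.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class TransactionRecord:
    employee_id: str
    task: str                 # e.g., "claim_paid" or "sale_registered"
    completed_at: datetime
    had_error: bool = False


@dataclass
class PerformanceMonitor:
    """A set of programs plus a 'sensor': it records whatever events it is handed."""
    records: List[TransactionRecord] = field(default_factory=list)

    def sense(self, record: TransactionRecord) -> None:
        # The sensor is indifferent to who the employee is; it applies the same
        # program to every event it receives.
        self.records.append(record)

    def transactions_for(self, employee_id: str) -> int:
        return sum(1 for r in self.records if r.employee_id == employee_id)

    def error_rate_for(self, employee_id: str) -> float:
        mine = [r for r in self.records if r.employee_id == employee_id]
        return sum(r.had_error for r in mine) / len(mine) if mine else 0.0


if __name__ == "__main__":
    monitor = PerformanceMonitor()
    # Claims processing is fully computer-mediated: every step can be sensed.
    monitor.sense(TransactionRecord("claims_017", "claim_paid", datetime.now()))
    # Retail selling is only partly visible: just the point-of-sale result arrives.
    monitor.sense(TransactionRecord("sales_042", "sale_registered", datetime.now()))
    print(monitor.transactions_for("claims_017"), monitor.error_rate_for("claims_017"))
```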

CPMCS Design Options

Besides humanizing CPMCSs, many articles that discuss their impact on workers and the workplace generally treat monitors as if they had only one dimension. Employees are described as being monitored or not being monitored; a system is described as using low-level, moderate, or high-level monitoring. Our research, however, looked at four dimensions of monitor systems that can vary in CPMCS design and that have different effects on productivity and customer service. (See Figure 1.)

Figure 1
DIMENSIONS OF MONITOR DESIGN

                 Least Pervasive                                    Most Pervasive
1. Object        Business Unit        ->  Work Group          ->  Individual Employee
2. Period        Regular, Infrequent  ->  Regular, Frequent   ->  Immediate
3. Recipient     Employee             ->  Supervisor/Manager  ->  Public
4. Tasks         Track Results        ->  Track Process       ->  Assign and Track

1. Object: Who is monitored? The more directly a CPMCS attributes performance to a specific individual, the more pervasive its design. Systems may be designed to collect information about individual employees, such as the retail sales of each salesclerk or the daily collection of each loan officer. In categorizing designs on a scale from least to most pervasive, such systems represent the most pervasive approach, attributing performance at the level of the individual employee. Less pervasive systems might aggregate performance for an entire work group, while an unobtrusive design uses the business unit as the object of measurement. A retail chain, for example, may choose to have its point-of-sale systems report sales by individual clerks separately, total sales by department, or only totals by store. These choices determine the person or group to which performance is attributed, and thus who will be held responsible for maintaining or improving that performance.

2. Period or frequency: How often are they monitored? The more immediate the availability of monitor data, the more pervasive the CPMCS. A supervisor may be able to query the system at any time and obtain an up-to-the-minute status of monitored performance. As an alternative, the system may automatically and immediately report unacceptable situations. For example, a CPMCS can alert a supervisor when a telephone operator has been disconnected from the switchboard for more than five minutes. Or it can send a message to a group leader saying that the group’s error rate has exceeded 3 percent for the morning. These are pervasive designs. Less pervasive designs report on historical activity. They tell the recipient about performance at regular, fixed intervals, such as hourly, daily, or weekly.

3. Recipient: Who receives data from the monitor? In general, the broader the audience for the data, the more pervasive the monitoring system. CPMCSs can be designed simply to provide feedback directly to the employee. These designs help the employee track and evaluate personal performance, without relaying that information to anyone responsible for overall performance evaluation. More pervasive designs make the data available to the immediate supervisor or area manager. The most pervasive designs broadcast information, making it available to anyone in the workplace. For example, posting the performance of all monitored individuals or groups in a central location or allowing anyone to access the data via the computer exemplify pervasive designs.

4. Tasks: What activities are monitored? As mentioned earlier, a CPMCS can monitor virtually any computer-mediated activity. The monitor can also be designed to capture a variety of information about the activity. The greater the coverage of the process of directing and completing work, the more pervasive the monitor design. It may count completed transactions, error rates, completion time, or a combination of performance characteristics. It may also be an integral part of a pacing system that directs work to particular employees at predetermined intervals. Such monitors can then track individuals or groups who fail to act on work directed to their station.
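Because the four dimensions are independent design choices, one way to reason about a specific CPMCS is as a configuration with one setting per dimension. The sketch below is a hypothetical illustration (the article prescribes no data structure; the names and the scoring are invented) that encodes the scales from Figure 1 and contrasts an unobtrusive design with a pervasive one.

```python
# Hypothetical encoding of the four design dimensions from Figure 1.
# Each enum is ordered from least pervasive (lowest value) to most pervasive.

from dataclasses import dataclass
from enum import IntEnum


class ObjectMonitored(IntEnum):
    BUSINESS_UNIT = 1
    WORK_GROUP = 2
    INDIVIDUAL_EMPLOYEE = 3


class Period(IntEnum):
    REGULAR_INFREQUENT = 1
    REGULAR_FREQUENT = 2
    IMMEDIATE = 3


class Recipient(IntEnum):
    EMPLOYEE = 1
    SUPERVISOR_MANAGER = 2
    PUBLIC = 3


class Tasks(IntEnum):
    TRACK_RESULTS = 1
    TRACK_PROCESS = 2
    ASSIGN_AND_TRACK = 3


@dataclass
class MonitorDesign:
    object_monitored: ObjectMonitored
    period: Period
    recipient: Recipient
    tasks: Tasks

    def pervasiveness_profile(self) -> dict:
        """Position of each dimension on its least-to-most-pervasive scale."""
        return {
            "object": int(self.object_monitored),
            "period": int(self.period),
            "recipient": int(self.recipient),
            "tasks": int(self.tasks),
        }


# An unobtrusive design: store-level totals, infrequent reports, seen only by the employee.
unobtrusive = MonitorDesign(ObjectMonitored.BUSINESS_UNIT, Period.REGULAR_INFREQUENT,
                            Recipient.EMPLOYEE, Tasks.TRACK_RESULTS)

# A pervasive design: per-clerk data, immediate alerts, public posting, work assignment.
pervasive = MonitorDesign(ObjectMonitored.INDIVIDUAL_EMPLOYEE, Period.IMMEDIATE,
                          Recipient.PUBLIC, Tasks.ASSIGN_AND_TRACK)

print(unobtrusive.pervasiveness_profile())
print(pervasive.pervasiveness_profile())
```

Ranking each setting on its scale simply mirrors Figure 1; it is an illustrative device, not a validated measure of pervasiveness.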

HOW WORKERS FEEL ABOUT COMPUTER MONITORING

We used two studies to examine the impact of monitor design. In the first study, we explored the use of monitoring in three branches of a major insurance company. Eighty-five claims processors and clerical employees took part in individual interviews and completed surveys about their attitudes toward monitoring and their work. Thirteen supervisors and managers also participated in the interviews and surveys, giving their views on the effect monitors had on their employees.

This first study allowed us to compare the attitudes of monitored employees to those of employees who did the same job in the same company, but were not monitored. Overall, it demonstrated several effects of monitoring.

First, employees expected consistency between the collection of monitor data and its use in their evaluation. System designs that collected certain data made employees believe the activity measured was important. Such designs also made them concentrate on individual, rather than group, production. While this promoted attention to individual performance, it did have negative effects on teamwork. Monitored employees were less willing to pursue complex customer inquiries than their unmonitored coworkers and complained more of hostile or stressful work groups.

This first study also showed that employees accepted monitor measures as more objective, but not necessarily as “fair.” Employees agreed that the monitors were more consistent and accurate than the human supervisors in measuring production; however, they did not believe the monitor system was more fair, since its design did not adequately measure the interaction aspects of their jobs.

Finally, employees who saw their jobs as primarily quantitative or routine in nature had fewer complaints about monitoring. In general, employees who felt that quantitative standards or quotas were consistent with their view of the job requirements did not feel stressed by the use of monitors. This also applied to employees who felt that the standards embodied in the monitor were readily attainable.

While indicating some important results of monitoring, a single case study does not make it clear if or how a monitor and the work environment interact. The same monitor design might produce different results in another company because of the specific work or the way the information is presented to the employees.

Thus, we surveyed 1,500 employees in 50 Canadian service firms to determine how different monitors interact with the work environment. These firms included: railroads and airlines; newspapers; banks, trust companies and investment firms; health, life and property insurance firms; government services (liquor retailers and wholesalers, public health, and auto insurance); public and private utilities; and telephone and telecommu- nications companies.

The surveys came from unmonitored and monitored employees, in firms with a wide variety of evaluation systems. Table 1 summarizes the characteristics of the participating companies’ evaluation systems, as indicated by the survey respondents. All respondents, regardless of employer, performed computer-mediated work and had direct customer contact.

Table 1
FEATURES OF EVALUATION SYSTEMS USED IN PARTICIPATING FIRMS

Respondents reporting measurement done by:

                                      Computer   Supervisor   Neither
Keystrokes counted                      15.8%        1.3%      79.2%
Transactions counted                    34.8%        8.0%      54.5%
Mistakes counted                        12.4%       31.0%      53.7%
Idle time reported                      19.4%        3.7%      73.6%
Transaction completion time counted     16.8%        6.9%      73.1%
Way work is done checked                 5.3%       47.9%      44.3%
Work directed to employee               10.1%       41.5%      45.3%

(N.B.: May not total 100%, due to item non-response.)



This national survey demonstrated the effects of monitor design and use in four areas:

Accepting Computer Measurement

According to the survey results, all four dimensions of monitor design affect the degree to which employees accept monitor data. These dimensions also affect the degree to which they believe employers rely on that information.

As Table 2 shows, increasing the pervasiveness of each dimension increases the perception that managers rely on the information being collected. Thus, the more pervasive the monitoring system, the more important it seemed to employees as a part of their evaluation. Increasing the pervasiveness had different effects on the acceptability of measurement. In the case of “object” and “frequency,” more pervasive designs led to more acceptable measures. Overall, this happened because employees believed that such designs provided more complete and accurate data. The designs tied performance data more directly to the responsible individual or group and to the specific performance being measured.

The opposite was true for “recipient” and “tasks.” Increasing the audience, or recipient, of the data reduced its acceptability. The most acceptable systems provided data only to the employee. Providing data to first-level supervisors made the measures slightly less acceptable. More pervasive distribution (giving data to more senior managers or making it generally available) further reduced acceptance of monitoring. By the same token, a shotgun approach to monitoring all possible activities reduced acceptance. We believe that this arose from the nature of monitor implementation. While unobtrusive systems were generally confined to quantitative activities, more pervasive systems incorporated more qualitative supervision. In other words, unobtrusive systems concentrated on transaction and error counting, while pervasive ones added such activities as assigning work and evaluating work in progress. Acceptance depends in part on the belief that the activity can be handled by a computer, and this belief may be violated by these more pervasive roles.

Table 2
EFFECT OF DESIGN DIMENSIONS ON ACCEPTANCE AND RELIANCE

               Effect of Increasing Dimension's Pervasiveness on:
Dimension           Acceptance       Reliance
Object              Increased        Increased
Frequency           Increased        Increased
Recipient           Decreased*       Increased*
Tasks               Decreased        Decreased

*Statistical constraints suggest that the effect may not be as large as the data analysis would argue. However, both survey and case study data demonstrated a strong trend in the direction indicated.

Survey respondents were also asked for their opinions on monitoring in general. Many articles in the trade and popular press claim or imply that most workers oppose the practice of monitoring. Our survey results do not support such claims. As shown in Table 3, employees voiced slightly more opposition to monitoring individuals than to monitoring groups. But the survey did not reveal overwhelming sentiment against the use of monitors per se.

Table 3
ATTITUDES TOWARD THE PRACTICE OF COMPUTER MONITORING

% responding (1 = Strongly Disagree, 4 = Neutral, 7 = Strongly Agree)

Statement                                     1      2      3      4      5      6      7
Computer monitoring should be illegal       10.9   11.9   15.4   30.0   11.6    7.2   11.7
Electronic surveillance should be illegal   10.9    8.7   10.2   16.3    9.2   12.9   30.2
It's okay to monitor individuals            12.9   11.2   15.0   27.7   16.2    7.7    6.9
It's okay to monitor groups                 10.9   10.5   14.2   28.4   19.6    8.7    5.6



“Electronic surveillance” uses audio and video equipment to examine employee behaviors and personal characteristics. About 52 percent of survey respondents agreed to some extent that all electronic surveillance should be illegal (the sum of responses 5 through 7 in Table 3). This compares to 30.5 percent agreeing with the statement that monitoring should be illegal. Thus, employees did differentiate between surveillance and computer monitoring.

This is an important distinction. Trade and popular press articles have discussed cases of pervasive monitor designs that incorporated video or audio taping. The fact that these monitors included surveillance, rather than that they monitored work, may be at the root of the vocal opposition to their use.

Importance of Customer Service and Other Interaction

A common argument against monitoring is that it undermines customer service. Our survey, however, found that monitors do not automatically reduce the importance employees attach to interaction or service.

It is true that monitor design affects perceptions of employer criteria. The survey showed that the more tasks a company monitored, the more the employees believed the company valued production over service. This was a major factor (although not the only one) contributing to the personal importance of various job dimensions. If the employee believed the company valued production, that increased the personal importance of production. But it also increased the personal importance of service. In other words, if management paid attention to any aspect of a worker’s performance, it encouraged that worker to rate all aspects of performance as important. This suggests that monitoring can actually improve attention to service by demonstrating management concern for performance in general.

Employees in both the case study and the survey demonstrated consistency in their personal view of importance. They tended to rate both production and service as important, or to rate them as equally unimportant. In essence, employees who considered one part of their job important considered every aspect of it as important.

Importance of Production

Companies that consider monitors a way of motivating employees to increase production may be disappointed. Using monitors did not automatically increase the importance employees attached to production. Instead, two features of quantitative measures actually influenced the importance employees attached to production. The first was the acceptability of the measurement. Using more acceptable measures made employees believe production was more important to their company. This in turn increased the importance they personally gave production. The second feature influencing importance was how much the company relied on the quantitative data. The greater the reliance, the more importance the employee personally attached to production. But merely relying heavily on a monitor did not have a great effect. The stronger effect came from using acceptable measures.

Need for Human Supervisors

Using a monitor to count consistently and accurately can improve a control system. But this result is only achieved when employees believe the system can properly detect and measure the activity in question. Survey respondents looked for fair measurement when discussing their evaluation systems. This did not necessarily mean purely quantitative information. Instead, it meant information that was complete, accurate, and appropriate. When dealing with qualitative tasks, a human supervisor is often the only means for collecting and evaluating such information.

Furthermore, supervisors played a critical role in determining whether monitoring would be stressful and whether feedback would undermine or promote satisfaction. As one monitored employee explained, “Our unit head deals with problems immediately. Other units feel pressure about their count all the time. Our unit head keeps everybody relaxed and still keeps average productivity up.” In short, supervisors clearly have a major role to play in monitored environments.

GUIDELINES FOR MANAGERS

Computerized performance monitoring evokes strong and contradictory responses among the many parties concerned with its use. Many businesses will informally attribute impressive gains in productivity to monitoring, but reliable statistics on these gains are difficult to collect. Newspapers and magazines report cases of severe damage to quality of work life and customer service, but it is quite possible that these are isolated cases of monitor abuse. We must continue to explore these problems in order to understand and predict effects reliably, as well as to avoid undesirable results. In the meantime, there are a number of questions managers can use to shape the design and use of effective computerized performance monitors.

1. How directly does the system measure individual performance? The more directly an individual’s performance is monitored and reported, the more exposed that individual feels. Monitors tend to shift the balance of power: They make the process of measuring performance less visible, while making the employee feel more visible. Workers can no longer look over and see if the supervisor is watching. Instead, there is a sense that the monitor is always watching. The fact that everyone is subject to the same scrutiny does little to relieve that sense of exposure. This is particularly true when employees feel the measures make them accountable for events beyond their control.

Properly managed, this increased visibility can be beneficial to employees. But managers must demonstrate to their employees that the monitor helps managers see the positive, as well as the negative, aspects of employee performance.

Another factor determining the appropriate “object” should be the reason for collecting the data. If individual employee performance ratings or salary increases are not going to be tied to the output from the monitor, there is probably no reason to report or even collect individual performance data. If individual productivity is only being monitored to estimate staffing or training requirements, supervisors should not be referring to it in their regular formal or informal discussions with employees about personal performance.

2. How immediate and interactive is the data reporting? The more interactive the design, the more quickly performance problems can be recognized and dealt with. The survey results showed that more frequent measurement produced more acceptable data. However, increasing frequency can increase the sense that “Big Brother is watching.” This is most likely to occur when employees don’t believe the system measures important tasks.

Giving supervisors constant access to performance data may encourage them to overuse that access, leaving them with more data than they know what to do with. Comments written on the surveys indicated that many complaints about monitoring can be traced to misuse of the information once it reaches supervisors or managers. The more data that is available, and the more frequently it arrives, the more likely supervisors are to suffer information overload.

3. How wide is the audience for the monitor output? The more people who see the results of performance, the greater the feeling of being exposed. Even high performers can find public display of their productivity undesirable, particularly if that productivity is so high that their coworkers may ostracize them. In such cases, the public display of more accurate data is apt to provoke the high producers to slow down. And low producers may find the comparison of their performance to that of coworkers demoralizing, rather than motivating. These conditions exist with most types of evaluation systems, but can be heightened by the fact that monitor data seem more accurate and complete than other productivity measures.

Companies that wish to use the system to reinforce employee motivation should consider reporting the data directly to the employee, but not to the supervisor. As with public displays of data, it can be undesirable to direct data to anyone who doesn’t have an immediate need to know. Employees often believe that management uses whatever information it has available when evaluating performance and determining rewards. Furthermore, our research has shown that they may continue to hold this belief, even though they have no concrete evidence to support it.

4. Which tasks will be monitored? The choice of which activities to monitor sends employees a message about performance criteria. Monitoring a task may confirm an existing belief that it is important, or it may increase its perceived importance. But failure to include a task or behavior can be interpreted as a signal that management does not consider that factor worth watching.

Obviously, not all work dimensions can be monitored, and a shotgun approach to monitoring tasks is counterproductive. Employees react negatively to designs that simply increase the range of tasks measured. Select the most appropriate and important tasks to monitor and then measure them carefully. Use other systems to give explicit attention to unmonitored work dimensions. This will result in a more balanced message about the company’s definition of good performance.

Our survey and case study research demonstrated that monitor design affects reactions to CPMCSs. Systems designed to maximize the acceptability of quantitative measures can be effective in increasing attention to both production and interaction. At the same time, companies must augment monitors with subjective systems that effectively measure qualitative activities. These complementary systems round out the evaluation, ensuring that it is complete and appropriate. Without them, managers are apt to find that neither production nor customer service improves.
