
Comparable Worth and Job Evaluation Validity




Public Administration Review, Vol. 47, No. 3 (May-Jun., 1987), pp. 254-258. Stable URL: http://www.jstor.org/stable/975904


Jonathan Tompkins, University of Montana

Job evaluation is currently receiving renewed attention for its potential role in reducing pay inequities. Major constraints on the use of job evaluation for this purpose include the absence of an absolute or universal standard for assessing job worth and the difficulties involved in validating the results of job evaluation. Evidence presented in this paper indicates that the absence of an absolute standard of job worth does not preclude employers from developing their own standards for comparing jobs to reduce pay inequities in their workforces. Problems of validation are reduced when it is understood that job evaluation measures job content rather than job worth. Moreover, efforts to establish job evaluation validity should focus less on social science techniques of validation and more on removing systematic biases from the evaluation process, as well as on enhancing legal and political validity.

Concern over gender-based pay inequities has brought renewed attention to the role of job evaluation in determining employee compensation. Job evaluation plays a central role, for example, in Remick's operational definition of comparable worth: "the application of a single, point factor bias-free job evaluation system within a given establishment, across job families, both to rank-order jobs and to set salaries."1 Comparable worth advocates, such as Remick, hope that job evaluation can be used to establish the comparability of jobs to assure equal pay for jobs of comparable value.

There are, however, a number of constraints on the use of job evaluation as a means for achieving pay equity. These include the lack of agreement regarding the definition of comparable worth, the lack of agreement regarding what it is that job evaluation measures, the absence of an absolute standard by which to measure "job worth," the difficulties of assessing the validity of job evaluation results, biases in factors and factor weights, subjectivity in application, and the use of multiple job evaluation plans by employers.2

This paper examines three of these constraints as an initial step in assessing the potential role of job evaluation in achieving pay equity. First, does achieving pay equity require the existence of an absolute standard by which to evaluate all jobs? Second, what is it that job evaluation measures? Finally, how can it be determined whether or not the results of job evaluation are valid? A central theme developed in this paper is that there are many ways to approach the task of validating job evaluation. The approach ultimately taken is determined by how the purpose of job evaluation and the task of validation are defined in a given situation.

Absence of an Absolute Standard of Job Worth

It is generally agreed that no absolute or universal standard exists for evaluating job worth.3 Any assessment of job worth necessarily involves value judgments, and the values by which judgments might be made are not universally agreed upon. Critics argue that establishing the worth of jobs is not possible in the absence of such a standard, while comparable worth advocates argue that absence of one standard represents no serious obstacle to achieving pay equity.

At the center of the debate is confusion regarding job evaluation's potential role in achieving pay equity. If achieving pay equity requires establishing the worth of all jobs nationwide, then the absence of a universally accepted standard by which jobs may be evaluated represents a serious constraint. However, according to Heidi Hartmann, coauthor of the NAS report on pay equity, requiring all employers to use a job evaluation system developed and administered by the federal government is not the goal of most comparable worth advocates:

. . . in the early days, there was a lot of discussion that there would be an economywide government regulation setting all pay rates for all jobs and that this must be what comparable worth advocates wanted. I think most of the debate and discussion that has gone on in the last 3 or 4 years makes very clear that what most people are talking about is orderly change within one employer.4

If, as Hartmann suggests, achieving pay equity requires establishing the comparability of jobs within individual establishments rather than across all employers, then the absence of an absolute standard of job worth poses little problem. Each job evaluation plan will necessarily provide its own standards of job worth. All that is required is a job evaluation system that is relatively free of bias and ultimately acceptable to both management and labor.





It may turn out that efforts to obtain pay equity within individual establishments will not close the national wage gap significantly. To the extent that women are concentrated in competitive sectors of the economy where wages are low, their average wages are likely to remain below those of males nationally. Nonetheless, the absence of an absolute standard for evaluation does not limit the ability of individual employers to select standards for determining the relative value of jobs to obtain greater pay equity for women in their workforces.

What Job Evaluation Measures

Job evaluation is generally defined as a formal procedure for hierarchically ordering a set of jobs or positions with respect to their value or worth. It is not surprising, given this definition, that the contours of the comparable worth debate depend on how the term "value" or "worth" is defined. What is it that job evaluation measures? If job evaluation measures "worth," how is this term to be understood?

In attempting to answer these questions, it is useful to review the writings of the early developers of job evaluation plans.5 While this literature is noticeably silent regarding the sociocultural values underlying the job evaluation process,6 a review does reveal several basic premises regarding the purpose of job evaluation and what it measures:

1. The principal purpose of job evaluation is to promote internal pay equity. An often-cited lesson from the Hawthorne experiments is that employee dissatisfaction is likely to result more from the relative level of pay than from the absolute level of pay. This theme, which appears often in the early literature,7 is summarized well by Lanham:

Job evaluation helps establish the rate earned by an employee in relation to the rate earned by his fellow employees. An employee may be satisfied with his absolute rate until he learns about a fellow employee who is receiving a higher rate for comparable work where there is no real differential between merit of work done or length of service on the job. His rate is too low in the relative sense.8

Thus, while job evaluation systems perform a number of service functions in today's organizations, their introduction in the 1930s and 1940s was prompted by the immediate need to rationalize compensation systems which were characterized by glaring wage inequities.

2. Achieving internal pay equity requires establishing the relative value or worth of each job to other jobs on the payroll. The early developers of job evaluation understood that they were searching for the means of determining the value or worth of jobs relative to other jobs in a given workforce. The fact that a job had no absolute value to an organization or to society was held to be so obvious that it did not require comment.

3. In measuring the relative worth of jobs, the principal determinant of value should be the content of the job itself. Stated differently, the relative worth of a job is determined by analyzing the demands that the job places on its incumbents. These job demands are typically defined in terms of the duties that must be performed, the conditions under which they are performed, and the qualifications required to perform them. Any "intrinsic" or "inherent" value a job may be said to have, relative to other jobs, must be understood in terms of its essential work demands.

In practice, job content factors are typically selected as the basis for differentiating between jobs in a given workforce. Differentiations are based on the level of each factor found in each job; as examples, each job's level of difficulty, level of complexity, or level of skills required. Thus, when the developers of job evaluation plans spoke of measuring relative worth, they referred to measuring levels or degrees of job content relative to levels found in all other jobs being evaluated. While variables such as productivity, seniority, and supply and demand have historically played a large role in determining final compensation levels, job evaluation provides the foundation for wage determination by focusing on the nature of the job itself.
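To make these mechanics concrete, the following minimal sketch shows how a point-factor plan turns degree levels of job content into a rank ordering. The factors, weights, jobs, and ratings are hypothetical illustrations, not drawn from any plan discussed in this article:

```python
# Hypothetical point-factor plan: four factors with illustrative weights
# that sum to 1.0.
FACTOR_WEIGHTS = {"skill": 0.40, "effort": 0.20, "responsibility": 0.30, "conditions": 0.10}

# Degree levels (1-5) assigned to each job on each factor by evaluators.
jobs = {
    "clerk-typist":       {"skill": 3, "effort": 2, "responsibility": 2, "conditions": 1},
    "maintenance worker": {"skill": 2, "effort": 4, "responsibility": 2, "conditions": 4},
    "office manager":     {"skill": 4, "effort": 3, "responsibility": 5, "conditions": 1},
}

def total_points(ratings):
    """Weighted sum of degree levels, scaled to a conventional point range."""
    return 100 * sum(FACTOR_WEIGHTS[f] * degree for f, degree in ratings.items())

# The ranked output is a hierarchy of relative job content, not of absolute
# worth; pay lines are set against these point totals afterwards.
for name in sorted(jobs, key=lambda j: total_points(jobs[j]), reverse=True):
    print(f"{name:20s} {total_points(jobs[name]):6.0f} points")
```

The choice of factors and weights embodies the value judgments; the arithmetic merely aggregates them into a hierarchy.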


It was the task of job evaluation, according to its developers, to arrange jobs according to whichever job content factors were deemed appropriate for evaluation purposes by a given employer. Stated differently, the early developers believed that while the results of job evaluation can be used as indicators of relative worth, it is job content that evaluation actually measures. This fact takes on greater significance when the issue of validity is considered.

Establishing the Validity of Job Evaluation

Related to the problem of defining "job worth" is the problem of demonstrating the validity of job evaluation instruments and their results. In assessing validity, the central question facing the researcher is whether or not the measuring instrument actually measures what it purports to measure, i.e., whether the measured differences reflect true differences in what is being measured. An initial task, then, is to determine what job evaluation purports to measure. Schwab begins with the premise that job evaluation purports to measure job worth, that job worth is a mental construct, and that construct validity is therefore the appropriate validation strategy. From this premise Schwab proceeds to detail the many, very real difficulties of assessing the construct validity of job evaluation instruments:

In construct validity there is no empirical indicator for the dependent variable or criterion. It is by definition a construct, that is, a mental definition of a variable.


The conceptual (nonempirical) nature of the criterion has several implications for construct validation. Most important for the purposes at hand is the need to carefully define the construct. What is meant by worth? The identification of appropriate empirical construct validation procedures depends heavily on how well the construct has been defined initially.

Unfortunately, job worth has not been adequately defined nor has a consensus emerged as to its meaning. In fact, authors have been remarkably reticent to offer any definitions of worth at all. As a consequence, there is very little basis for accepting one set of compensable factors as better representing worth than any alternative set.9

It may be the case, however, that the difficulties of validation identified by Schwab rest on an unnecessary preoccupation with the term "job worth." A review of the early literature strongly suggests that the designers of job evaluation plans did not intend to be taken too literally when they stated that job evaluation measures "job worth." They acknowledged that worth did not exist in absolute terms, and that job evaluation really measured the extent to which jobs possess levels or degrees of compensable factors representing job content or the demands made on job incumbents. If this view is correct, then "job content" is more than simply an operational definition of a mental construct labelled "job worth." The results of job evaluation may be used as indicators of job worth, but it is job content, not job worth per se, that job evaluation is designed to measure.


If we can agree that job evaluation is premised not on the measurement of job worth per se, but on the measurement of job content, then we can proceed to tackle problems of validity in different ways. In methodological terms, the advantage of focusing on "job content" is that it is not a mental construct in the same sense that "job worth" is. Job content has an empirical reality that the construct of job worth does not. Job demands and qualifications can be determined through careful job analysis, and differences between jobs can be measured in a relatively direct manner. This is accomplished by operationally defining job content in terms of factors and then measuring the extent to which jobs possess these factors.

Having once accepted the premise that it is job content that job evaluation purports to measure, it is useful to take a second look at the strategies typically employed in the social sciences for validation purposes: content validity, criterion-related validity, and construct validity. Content validity involves designing the measurement instrument in such a way that the content of that which is to be measured (e.g., job demands) is clearly represented in the content of the instrument itself. An instrument has content validity to the extent that such a relationship can be demonstrated. In the social sciences this is often difficult since what is being measured is intangible or not directly observable (e.g., intelligence). But establishing the content validity of a job evaluation instrument should be a relatively simple matter since most plans measure observable, job-content characteristics. If position descriptions are prepared after thorough job analysis, and if the factors selected are representative of the full range of job content characteristics found in the workforce, then the instrument should be content valid.

Because job evaluation may be defined as a test under the Equal Employment Opportunity Commission (EEOC) Uniform Guidelines, employers must be prepared to demonstrate the validity of job evaluation plans if they produce an adverse impact on members of protected groups. This requires, at a minimum, demonstrating content validity. However, demonstrating content validity is seldom sufficient by itself because a content valid instrument does not necessarily produce valid results. Systematic biases in the instrument or in the procedures of evaluation may reduce the validity of the results.

Criterion-related validity, or predictive validity, is established by correlating the results of the measurement instrument (e.g., an educational exam or employment selection test) with some criterion (e.g., college grade point average or job performance). A college entrance exam, for example, may be criterion valid to the extent that test scores accurately predict grades received in college. But as Schwab notes, this type of validity cannot be employed to test job evaluation because there simply is no empirical indicator for the criterion of job worth (or job content). In addition, the purpose of job evaluation is fundamentally different from the usual purpose of educational or psychological testing for which criterion-related validity is most appropriate (i.e., predicting aspects of individual behavior). For these reasons, criterion-related validity is simply not an appropriate strategy for validating job evaluation results.

Construct validation focuses on the extent to which a measure performs in accordance with theoretical expectations.10 It involves establishing a theoretical framework that allows one to generate a number of hypotheses which, if confirmed, indicate that the instrument is construct valid. We may predict, for example, that results of a scale measuring self-esteem will correlate with other variables such as the extent of a student's involvement in extracurricular activities. If many such predictions are confirmed through testing, then it is evidence that the scale itself validly measures self-esteem. In the context of job evaluation, however, it is difficult to imagine what set of hypotheses could be tested to demonstrate the construct validity of an instrument designed to measure degrees of job content.
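As an illustration of this logic, a construct validation of the self-esteem scale mentioned above might look like the following sketch. The data, effect sizes, and the 0.3 correlation cutoff are all hypothetical assumptions:

```python
# Sketch of construct-validation logic: test theory-derived predictions
# that the scale correlates with related variables. Data are simulated.
import numpy as np

rng = np.random.default_rng(seed=1)
n = 200
self_esteem = rng.normal(size=n)  # scores from the scale being validated

# Variables the theory predicts should correlate with self-esteem:
related = {
    "extracurricular involvement": 0.5 * self_esteem + rng.normal(size=n),
    "peer acceptance rating":      0.4 * self_esteem + rng.normal(size=n),
}

# Each confirmed prediction adds one piece of evidence that the scale
# measures the construct; disconfirmations count against it.
for name, values in related.items():
    r = np.corrcoef(self_esteem, values)[0, 1]
    verdict = "supports" if r > 0.3 else "does not support"
    print(f"r(self-esteem, {name}) = {r:+.2f} -> {verdict} construct validity")
```

The sketch also makes Tompkins's difficulty visible: for an instrument measuring degrees of job content, there is no obvious set of theoretically related variables with which to populate such a test.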

Validity as the Reduction of Systematic Bias

The discussion in the previous section is misleading to the extent that it suggests that the job evaluation process can be validated as a whole through the use of content validity, construct validity, and/or criterion-related validity. In the absence of an absolute standard, social science techniques cannot demonstrate that the job hierarchy produced through job evaluation is a "correct" representation of worth or even job content. For this reason it makes more sense to think in terms of validating each step in the job evaluation process to eliminate systematic biases that reduce validity. If one is satisfied that such biases have been eliminated or significantly reduced at each stage of the evaluation process, then one can at least infer the validity of the end results. From this perspective, determining validity would mean establishing the validity of position descriptions, choice of factors, choice of factor weights, choice of factor degree definitions, and the evaluations themselves (assuming the use of a point-factor plan).

Establishing the validity of factor weights involves the greatest difficulties. Establishing their validity is important because different weights clearly produce different job hierarchies.11 Factor weights may be set in one of five basic ways. First, an employer may choose to accept the predetermined weights in a consultant's package and take it on faith that they have been validated in the past and will be generally valid for his or her organization. Second, weights may be derived from the pooled judgments of members of a designated committee based on discussion of the values of the organization. Third, weights may be derived statistically by correlating and regressing factor rankings against a whole-job ranking based on the evaluation of a ranking panel; factors with the highest correlation coefficients are assigned the highest weights. Fourth, weights may be derived from factor analysis using data generated by a Position Analysis Questionnaire. Fifth, weights may be derived by manipulating tentative weights until point scores for key jobs correlate highly with their "going rates."

The latter method, often described as a "policy capturing" approach, is used most commonly. Some analysts, such as Fitzpatrick12 and Fox,13 prefer this policy capturing approach over others because market rates provide the only practical, external criterion for establishing the validity of job rankings. The problem with this view is that market rates are not an appropriate criterion for validating an instrument designed to measure job content. Nevertheless, it may prove necessary to derive weights from market rates for the practical reason that, if internal job alignments deviate too much from external market alignments, recruitment and retention problems are likely to develop. If going rates are used as guides to the selection of factor weights, however, developers must be careful to choose only those key jobs that are not unduly influenced by unionization, supply and demand, and historical undervaluation.
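A minimal sketch of the policy-capturing method follows. The benchmark jobs, factor ratings, and going rates are hypothetical, and a simple least-squares fit without an intercept stands in for the iterative manipulation described above:

```python
# Sketch of a policy-capturing fit: regress going rates for key benchmark
# jobs on their factor ratings to derive tentative factor weights.
import numpy as np

factors = ["skill", "effort", "responsibility", "conditions"]
# Rows: benchmark jobs; columns: degree ratings on each factor.
ratings = np.array([
    [3.0, 2.0, 2.0, 1.0],   # clerk-typist
    [2.0, 4.0, 2.0, 4.0],   # maintenance worker
    [4.0, 3.0, 5.0, 1.0],   # office manager
    [5.0, 3.0, 4.0, 2.0],   # accountant
    [3.0, 4.0, 3.0, 3.0],   # equipment operator
])
going_rates = np.array([9.50, 10.25, 14.00, 15.75, 11.10])  # hourly market rates

# Ordinary least squares: the fitted coefficients "capture" the pay
# policy implicit in the market rates.
weights, residuals, rank, _ = np.linalg.lstsq(ratings, going_rates, rcond=None)
for factor, w in zip(factors, weights):
    print(f"{factor:15s} weight = {w:5.2f}")
```

Note that weights derived this way import whatever biases the going rates contain, which is precisely why the text cautions against benchmark jobs distorted by unionization, supply and demand, or historical undervaluation.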

Acceptability as a Test of Validity

Assessing the validity of a job evaluation instrument remains a difficult task even where the focus is shifted from job worth to job content. This is because criterion-related validity is not appropriate to the task, construct validity is difficult to demonstrate even when the construct of "job worth" is abandoned, and content validity is seldom accepted by itself as sufficient demonstration of the validity of a measuring instrument.

There may be ways, however, of assessing the validity of job evaluation that, while less scientifically rigorous, allow conclusions to be drawn regarding a plan's validity. First, from a legal perspective, the courts appear to be moving in the direction of defining what is or is not a legally acceptable, bona fide job evaluation plan. To anticipate the direction the courts may take, a job evaluation plan may be judged bona fide to the extent that it is job-related (content valid) and is as objective and neutral as possible in a given employment context.14 While groups protected by Title VII of the Civil Rights Act may still experience lower wages than other groups, the evaluation system may be judged bona fide nonetheless if the employer has taken appropriate steps to remove sources of discriminatory bias. In practical terms, this would require that jobs are clearly documented by a job analysis questionnaire or validated job description, factors and their weights contain no obvious gender or racial biases, and job classes are not purposefully designed to promote gender or racial segregation.


Second, job evaluation results may be judged valid to the extent that they are acceptable to both management and labor. If one accepts the view of the early developers of job evaluation plans that the principal purpose of job evaluation is to achieve internal equity, then the validity of evaluation results can be demonstrated by their acceptability to management and labor. To use language recently employed by Schwab,15 this would involve taking an "institutional perspective" of job evaluation. This approach recognizes that job evaluation is inherently subjective and that there is no one set of "correct" factors or factor weights around which to build a job evaluation plan. The choice of factors and factor weights is ultimately a political decision and, as with all political decisions, the standards finally selected are subject to the test of acceptability. Thus, despite the subjective nature of job evaluation, if both sides feel that wages are internally equitable, then the plan and its results may be judged valid.

Conclusions

Past efforts to validate job evaluation plans and their results have been hampered by an unnecessary preoccupation with the concept of "job worth." Because job content is what job evaluation actually measures, neither the development nor the validation of a job evaluation plan requires employers to define job worth distinct from their choices of job content factors, their definitions, and weights. It is the task of job evaluation to arrange jobs hierarchically according to whichever job content factors are deemed appropriate by management and labor. These choices become the standards for assessing job worth in a given employment context, and they need not be justified in terms of some more universal understanding of job worth.

In addition, rather than attempting to use social science techniques to validate the results of job evaluation as a whole, employers should focus attention on validating the results of each stage of the evaluation process. Job descriptions should rest on careful job analysis, steps should be taken to assure the reliability of job ratings, and factors and their weights should be scrutinized for evidence of gender biases either by commission or omission. In this way systematic biases that may reduce the validity of the final job hierarchy can be identified and eliminated. The final goal is a plan that is reliable, legally defensible (relatively free of discriminatory bias), and politically acceptable to both management and labor.

Jonathan Tompkins is Assistant Professor of Political Science at the University of Montana where he teaches courses in public administration and American politics. His current research interests lie in the areas of job evaluation and pay equity.

Notes

1. Helen Remick, "The Comparable Worth Controversy," Public Personnel Management Journal, vol. 12 (Winter 1983), pp. 371-382.

2. See Donald Treiman, Job Evaluation: An Analytic Review (Washington: National Academy of Sciences, 1979); Donald Schwab, "Job Evaluation and Pay Setting: Concepts and Practices," in Comparable Worth: Issues and Alternatives, Robert Livernash, ed. (Washington: Equal Employment Advisory Council, 1980); Richard W. Beatty and James R. Beatty, "Some Problems with Contemporary Job Evaluation Systems," in Comparable Worth and Wage Discrimination, Helen Remick, ed. (Philadelphia: Temple University Press, 1984); Helen Remick, "Major Issues in a priori Applications," in Comparable Worth and Wage Discrimination, Remick, ed., op. cit.

3. See Donald Treiman and Heidi I. Hartmann, Women, Work and Wages: Equal Pay for Jobs of Equal Value (Washington: National Academy Press, 1981), p. 70; Ronnie Steinberg, "'A Want of Harmony': Perspectives on Wage Discrimination and Comparable Worth," in Comparable Worth and Wage Discrimination, Remick, ed., op. cit.

4. U.S. Congress. House. Pay Equity: Equal Pay for Work of Comparable Value, Joint Hearings before Subcommittees of the House Committee on Post Office and Civil Service, 97th Cong., 2nd Sess., 1982, p. 192.

5. See, as examples, Merrill R. Lott, Wage Scales and Job Evaluation (New York: Ronald Press, 1926); Charles W. Lytle, Job Evaluation Methods (New York: Ronald Press, 1954); Eugene Benge, Samuel Burk, and Edward Hay, Manual of Job Evaluation: Procedures of Job Analysis and Appraisal (New York: Prentice-Hall, 1941).

6. For a good discussion of how job evaluation ultimately measures our cultural values, see Remick, "The Comparable Worth Controversy," op. cit.

7. See Benge, Burk and Hay, op. cit.; Jay Otis and Richard Leukart, Job Evaluation: A Basis for Sound Wage Administration (New York: Prentice-Hall, 1948); Elizabeth Lanham, Job Evaluation (New York: McGraw-Hill, 1955); John A. Patton and C. L. Littlefield, Job Evaluation: Text and Cases (Homewood, IL: Irwin, 1957).

8. Lanham, op. cit., p. 2.

9. Donald Schwab, "Job Evaluation and Pay Setting: Concepts and Practices," in Comparable Worth: Issues and Alternatives, Livernash, ed., op. cit., pp. 58-59.

10. See Edward Carmines and Richard Zeller, Reliability and Validity Assessment (Beverly Hills: Sage, 1981).

11. Treiman, op. cit., pp. 34-39.

12. Bernard H. Fitzpatrick, "An Objective Test of Job Evaluation Validity," Personnel Journal, vol. 29 (September 1949), pp. 128-132.

13. William M. Fox, "Purpose and Validity in Job Evaluation," Personnel Journal, vol. 41 (October 1962), pp. 432-437.

14. See, as examples, Marsh W. Bates and Richard G. Vail, "Job Evaluation and Equal Employment Opportunity: A Tool for Compliance-A Weapon for Defense," Employee Relations Law Journal, vol. 1 (Spring 1976), pp. 535-546; John R. Golper, "The Current Legal Status of 'Comparable Worth' in the Federal Courts," Labor Law Journal, vol. 34 (September 1983), pp. 563-580; David L. Gregory, "Comparable Worth: The Demise of the Disparate Impact Theory of Liability," Detroit College of Law Review, vol. 1982 (Winter 1982), pp. 826-854.

15. Donald Schwab, "Job Evaluation Research and Research Needs," in Comparable Worth: New Directions in Research, Heidi Hartmann, ed. (Washington: National Academy Press, 1985).

