14
Thomas Rabovsky is assistant professor in the School of Public and Environmental Affairs at Indiana University, where he teaches public management. His research largely focuses on accountability, performance management, managerial values and decision making, and higher education policy. E-mail: [email protected] Support for Performance-Based Funding: The Role of Political Ideology, Performance, and Dysfunctional Information Environments 761 Public Administration Review, Vol. 74, Iss. 6, pp. 761–774. © 2014 by The American Society for Public Administration. DOI: 10.1111/puar.12274. Thomas Rabovsky Indiana University As performance-based mechanisms for accountability have become increasingly commonplace in the public sector, it is apparent that administrative reactions to these reforms are central in determining their effectiveness. Unfortunately, we know relatively little about the factors that drive acceptance of performance-based account- ability by administrative actors. is article employs data collected from an original survey instrument to examine the perceptions of presidents at American public colleges and universities regarding performance funding. e author finds that acceptance of performance as a basis for funding is driven by a variety of factors, including the partisanship of the state legislature, organizational per- formance (measured by institutional graduation rates), dysfunction in the external information environment, and the political ideology of university presidents. W hy do some agency leaders and public managers see accountability efforts as appropriate and legitimate, while oth- ers ardently oppose performance-oriented reforms? Considerable research on performance-based account- ability has concluded that attempts at implementing performance regimes effectively hinge largely on the extent to which agency leaders see these efforts as legitimate and appropriate (Dull 2009; Franklin 2000; Meier and O’Toole 2006; Moynihan 2008). ese actors are crucial in determining the success or failure of accountability policies, for a couple of reasons. First, they often have considerable discretion about the extent to which internal rewards and sanctions (both material and symbolic) are well aligned with the goals of external accountability efforts. Second, as organizational leaders, they can play a critical role in shaping the way that employees learn about and per- ceive these policies. Previous research has concluded that approval and support for accountability efforts on the part of organizational leaders represents an important precedent that must be met for these poli- cies to work (although it should be noted that there are certainly many other conditions that contribute to the success or failure of performance management regimes) (Dull 2009; Kroll 2013; Moynihan 2008; Moynihan and Pandey 2010). And yet we know very little about the factors that influence administrative leaders’ perceptions of these policies. One area in which this discussion has recently become salient is higher education (McLendon, Hearn, and Deaton 2006). As tuition rates have skyrocketed and the American economy faces increased pressure from the international arena, American universities have struggled to satisfy demands for improved perform- ance. is has caused a significant shift in the way that many states approach the need for accountability and transparency with regard to higher education. Whereas policy makers a generation ago were often willing to take a more passive and hands-off approach to regulation and oversight of public universities, today, there are increasing demands for universities to be held accountable for performance, particularly with respect to costs and undergraduate student outcomes (Zumeta 2001). In terms of state-driven accountability policies, this trend toward performance management has largely manifested itself through budgetary reforms and increased information reporting requirements. In some cases, this has involved relatively superficial and symbolic attempts to gather and publicize information about university performance, but in others, this has resulted in a shift toward the adoption of perform- ance-funding policies that are designed to directly tie institutional funding to benchmark indicators on stu- dent outcomes (Burke and Minassians 2003). ese performance-funding policies have been quite con- troversial and garnered considerable attention from academics and practitioners alike, but there remain several questions about their effectiveness (Aldeman and Carey 2009; Dougherty and Reddy 2011; Herbst 2007; McLendon, Hearn, and Deaton 2006). One of the major criticisms of performance funding in higher education is that despite relatively widespread popularity (at least among state lawmakers), many of these policies have proven to be unstable and are often either discontinued or dramatically altered after only Support for Performance-Based Funding: e Role of Political Ideology, Performance, and Dysfunctional Information Environments

Support for Performance-Based Funding: The Role of Political Ideology, Performance, and Dysfunctional Information Environments

  • Upload
    thomas

  • View
    212

  • Download
    0

Embed Size (px)

Citation preview

Page 1: Support for Performance-Based Funding: The Role of Political Ideology, Performance, and Dysfunctional Information Environments

Thomas Rabovsky is assistant

professor in the School of Public and

Environmental Affairs at Indiana University,

where he teaches public management. His

research largely focuses on accountability,

performance management, managerial

values and decision making, and higher

education policy.

E-mail: [email protected]

Support for Performance-Based Funding: The Role of Political Ideology, Performance, and Dysfunctional Information Environments 761

Public Administration Review,

Vol. 74, Iss. 6, pp. 761–774. © 2014 by

The American Society for Public Administration.

DOI: 10.1111/puar.12274.

Thomas RabovskyIndiana University

As performance-based mechanisms for accountability have become increasingly commonplace in the public sector, it is apparent that administrative reactions to these reforms are central in determining their eff ectiveness. Unfortunately, we know relatively little about the factors that drive acceptance of performance-based account-ability by administrative actors. Th is article employs data collected from an original survey instrument to examine the perceptions of presidents at American public colleges and universities regarding performance funding. Th e author fi nds that acceptance of performance as a basis for funding is driven by a variety of factors, including the partisanship of the state legislature, organizational per-formance (measured by institutional graduation rates), dysfunction in the external information environment, and the political ideology of university presidents.

Why do some agency leaders and public managers see accountability eff orts as appropriate and legitimate, while oth-

ers ardently oppose performance-oriented reforms? Considerable research on performance-based account-ability has concluded that attempts at implementing performance regimes eff ectively hinge largely on the extent to which agency leaders see these eff orts as legitimate and appropriate (Dull 2009; Franklin 2000; Meier and O’Toole 2006; Moynihan 2008). Th ese actors are crucial in determining the success or failure of accountability policies, for a couple of reasons. First, they often have considerable discretion about the extent to which internal rewards and sanctions (both material and symbolic) are well aligned with the goals of external accountability eff orts. Second, as organizational leaders, they can play a critical role in shaping the way that employees learn about and per-ceive these policies. Previous research has concluded that approval and support for accountability eff orts on the part of organizational leaders represents an important precedent that must be met for these poli-cies to work (although it should be noted that there are certainly many other conditions that contribute to the success or failure of performance management regimes) (Dull 2009; Kroll 2013; Moynihan 2008;

Moynihan and Pandey 2010). And yet we know very little about the factors that infl uence administrative leaders’ perceptions of these policies.

One area in which this discussion has recently become salient is higher education (McLendon, Hearn, and Deaton 2006). As tuition rates have skyrocketed and the American economy faces increased pressure from the international arena, American universities have struggled to satisfy demands for improved perform-ance. Th is has caused a signifi cant shift in the way that many states approach the need for accountability and transparency with regard to higher education. Whereas policy makers a generation ago were often willing to take a more passive and hands-off approach to regulation and oversight of public universities, today, there are increasing demands for universities to be held accountable for performance, particularly with respect to costs and undergraduate student outcomes (Zumeta 2001).

In terms of state-driven accountability policies, this trend toward performance management has largely manifested itself through budgetary reforms and increased information reporting requirements. In some cases, this has involved relatively superfi cial and symbolic attempts to gather and publicize information about university performance, but in others, this has resulted in a shift toward the adoption of perform-ance-funding policies that are designed to directly tie institutional funding to benchmark indicators on stu-dent outcomes (Burke and Minassians 2003). Th ese performance-funding policies have been quite con-troversial and garnered considerable attention from academics and practitioners alike, but there remain several questions about their eff ectiveness (Aldeman and Carey 2009; Dougherty and Reddy 2011; Herbst 2007; McLendon, Hearn, and Deaton 2006). One of the major criticisms of performance funding in higher education is that despite relatively widespread popularity (at least among state lawmakers), many of these policies have proven to be unstable and are often either discontinued or dramatically altered after only

Support for Performance-Based Funding: Th e Role of Political Ideology, Performance, and Dysfunctional

Information Environments

Page 2: Support for Performance-Based Funding: The Role of Political Ideology, Performance, and Dysfunctional Information Environments

762 Public Administration Review • November | December 2014

with the appropriate levels of discretion needed to accomplish their performance targets (Brudney, Hebert, and Wright 1999; Moynihan 2008), about the potential for them to create perverse incentives that undermine core public values (Bevan and Hood 2006; Bohte and Meier 2000; Piotrowski and Rosenbloom 2002), and about the willingness of political actors to take performance information seri-ously (Gilmour and Lewis 2006; Hou et al. 2011; Th urmaier and Willoughby 2001).

Performance Funding in Higher EducationIn many ways, recent eff orts at performance-oriented reform in higher education have mirrored this discussion. While traditional accountability arrangements for public colleges and universities have revolved mostly around procedural and access issues and have largely been characterized as providing institutions with relatively little oversight or aggressive opposition, the last two decades have seen a dramatic shift in the approach taken by state governments (Zumeta et al. 2012). Whereas state governments in earlier generations often approached public universities with considerable deference and were focused primarily on access and input criteria for performance, the modern policy environment has become substantially more adver-sarial and output oriented (McLendon, Hearn, and Deaton 2006).

Much of this distrust has centered on concerns about the extent to which universities have been responsible in curtailing cost increases (Archibald and Feldman 2008a; McLendon, Hearn, and Deaton 2006). As costs have risen substantially across the higher educa-tion landscape (Archibald and Feldman 2008a), state governments have also reduced fi nancial support for public universities. As a result, public universities have increasingly turned to tuition and student fees to fund a larger share of their expenses (Delta Cost Project 2012). Th is increase in costs for students and their families has generated considerable accountability pressures from both state governments (McLendon, Hearn, and Deaton 2006; Zumeta 2001) and (more recently) the Barack Obama administration (Lewin 2013; Rodriguez and Kelly 2014).

Additionally, some observers have been critical of the overall per-formance of public universities. According to the most recent data, the average public college in America graduates less than 60 percent of its students, and graduation rates for many minority groups are much lower (Carey 2008). Many have attributed this lack of per-

formance to misaligned incentives for these institutions. Rather than rewarding universi-ties for focusing on undergraduate student outcomes, such as graduation rates and course completion, the current fi scal environment largely incentivizes enrollment. As a result, critics argue that public colleges often shirk on their responsibility for educating their undergraduates, choosing instead to focus on investments that aid in recruitment (i.e., construction of new dormitories and workout

facilities) and that promote research and development (i.e., reduced teaching loads for full-time faculty) (Complete College America 2010; Gillen 2013; Weisbrod, Ballou, and Asch 2008).

Some states have responded to these concerns by adopt-ing performance-funding policies for public higher education.

a few years (Burke and Minassians 2003; Dougherty, Natow, and Blanca 2012).

Performance Management as a Tool for External AccountabilityTh e performance management literature has highlighted several potential mechanisms for performance information to improve the public sector (Behn 2003). Th ese can be broadly separated into eff orts aimed at creating organizational learning and improvement (i.e., improvements to internal management) and increased trans-parency and accountability for the purposes of improving oversight and political responsiveness (i.e., external control) (Julnes 2008; Moynihan 2008). Within the literature on external control and accountability, performance regimes have attracted considerable attention for their potential to result in several major changes to the public sector.

First, performance management reforms seek to reshape incentives and sanctions for managers and public sector employees by giving them greater incentives to be entrepreneurial and results oriented. In exchange for this increased pressure to achieve results, managers within performance regimes receive increased autonomy and discre-tion to shape work processes and make decisions about how to best accomplish organizational goals. Th us, performance management can be seen as an extension of the New Public Management ideol-ogy that stresses managerial creativity and adaptability as mecha-nisms for improving public management (Moynihan 2008).

In addition to restructuring the incentive and sanction structures that managers in the public sector face, performance management regimes seek to aid external actors in their oversight responsibili-ties. By providing legislators, the media, and citizens with objective and actionable data about organizational productivity, performance management regimes seek to reduce informational costs associ-ated with oversight activities, thus improving the capacity for these external actors to hold organizations accountable for performance (Th omas 2001). Further, as external actors have access to more objective data about organizational performance, the quality of political deliberations should improve by becoming less ideological and politically motivated and more fi rmly rooted in evidence-based arguments about the extent to which public policies are eff ective in achieving important socially desirable outcomes, such as reducing crime and poverty and improving education, childhood development, and health care (Van de Walle and Bovaird 2007). Th us, some have highlighted the potential for performance regimes to result in “interactive dialogues” about the goals of public organizations and their eff ectiveness in achieving these goals (Moynihan 2008).

As performance regimes have become more and more commonplace over the last sev-eral decades, however, it has become increasingly apparent that they have been far less eff ective as a tool for administrative reform than many early proponents claimed they would be (Joyce and Th ompkins 2002; Moynihan 2008; Radin 2006; GAO 2005a, 2005b). In particular, critics have raised serious questions about the extent to which performance regimes have provided managers

Rather than rewarding universities for focusing on undergraduate student out-comes, such as graduation

rates and course completion, the current fi scal environment largely incentivizes enrollment.

Page 3: Support for Performance-Based Funding: The Role of Political Ideology, Performance, and Dysfunctional Information Environments

Support for Performance-Based Funding: The Role of Political Ideology, Performance, and Dysfunctional Information Environments 763

Volkwein and Tandberg 2008). While the empirical evidence on the reasons for the ineff ectiveness of performance funding remain limited, many have highlighted problems such as limited willingness of state actors to provide meaningful sums of additional funding for improved performance, as well as low buy-in and awareness of statewide performance objectives on the part of faculty and univer-sity administrators (Dougherty and Reddy 2011).

The Importance of Leadership for Performance-Based AccountabilityWhile there are a wide range of factors that doubtlessly infl uence the success or failure of any reform eff ort such as performance-based accountability, one of the crucial variables that previous research has found to be a driving force behind the effi cacy of these eff orts is the extent to which organizational leaders and other key agency actors react favorably (Dull 2009; Franklin 2000; Meier 2009; Moynihan 2008). Even within the policy domain of higher education, which has sometimes been characterized as an area in which organizational lead-ers are highly constrained in their ability to shape behavior (Cohen and March 1986), recent scholarship on performance funding

Performance-funding policies seek to reform higher education by integrating performance measures related to undergraduate student outcomes into the budgeting process (Burke 2005). While the size and scope of performance-funding policies can diff er substantially from state to state, the premise behind these policies is largely the same: as public universities are funded, at least in part, based on their ability to attain desirable levels of performance on selected metrics, state policy makers expect that administrators will shift priorities away from non-outcome-oriented activities and focus more extensively on bolstering undergraduate education. Since Tennessee adopted the nation’s fi rst performance-funding policy in 1979, 18 states have experimented with similar reforms (some multiple times), and 11 states currently have such policies in place. Table 1 provides a summary of the policies that each state has adopted, as well as the years that the policy was in eff ect.

Despite the promise and potential of these eff orts, a number of recent studies have concluded that performance-funding-oriented reforms have had negligible impacts on organizational perform-ance and student outcomes (Sanford and Hunter 2010; Shin 2010;

Table 1 Summary of Performance-Funding Policies and Performance Indicators

State Years in Effect Performance Indicators

Arkansas 1994–96 (fi rst funded in 1995) Graduation rates, retention, minority graduation rates, minority retention, licensure pass rates, exit exams, ad-ministrative costs, faculty teaching load, student body diversity, faculty diversity, alumni and employer surveys

Arkansas 2008–present Number of credit hours enrolled at the beginning of the term, number of course completionsColorado 1993–present (fi rst funded in 1994) Graduation rates, retention, minority student success, pass rates of graduates on technical exams, institutional

support/administrative expenditures per full-time student, class size, number of credits required for degree, faculty instructional workload, two institution-specifi c measures

Indiana 2007–present Graduation rates, bachelor’s degrees produced, degree completion for low-income students, research produc-tivity

Kansas 1999–present Indicators are specifi c to each institution (largely selected by the institutions), including measures such as graduation rates, retention, student body diversity, graduates’ scores on learning assessment exams, minority student outcomes, participation in study abroad programs, faculty credentials, and external research grants.

Kentucky 1996–97 Graduation rates, retentionKentucky 2007 (suspended after one year due

to budget cuts)Degree production per full-time student, minority student degree production, one indicator of choice (includes

graduation rates, student learning assessments, transfer credits, other indicators)Louisiana 2008–present Number of degree completers, minority student degree completers, number of completers in science, technol-

ogy, engineering, and mathematics fi eldsMinnesota 1995–97 (fi rst funded in 1996) Graduation rates, retention, ranking of incoming freshmen, minority student enrollmentMissouri 1991–2002 (fi rst funded in 1993) Graduation rates, bachelor’s degrees produced, bachelor’s degrees produced for minority students, scores of

graduates on national exams New Jersey 1999–2002 Graduation rates, cost-effi ciency, diversifi cation of revenuesNew Mexico 2005–present (fi rst funded in 2007) Graduation rates, retention, research productivity (for research universities only)Ohio 1998–present Primarily focused on external research grants awarded and tuition but also contains indicators for time to

degree and degree completion among at-risk studentsOklahoma 1997–present (suspended for one

year in 2001 due to lack of funds)Graduation rates and retention

Pennsylvania (PASSHE

only)

2000–present Indicators are broken into four categories: (1) student achievement and success, (2) university and system excel-lence, (3) commonwealth service, (4) resource development and stewardship. Indicators include graduation rates, retention, bachelor’s degrees awarded, faculty diversity, faculty productivity, student-to-faculty ratio, and cost per full-time student.

South Carolina

1996–2004 Total of 37 indicators, broken into nine categories: (1) graduate’s achievements, (2) quality of faculty, (3) instructional quality, (4) institutional cooperation and collaboration, (5) administrative effi ciency, (6) entrance requirements, (7) mission focus, (8) user friendliness, and (9) research funding. Indicators include graduation rates, faculty teaching and research credentials, student-to-teacher ratios, administrative cost effi ciency, SAT/ACT scores of entering freshmen, external research grants awarded.

Tennessee 1979–present Several indicators separated into four major categories: (1) student learning and access, (2) student, alumni, and employer surveys, (3) achievement of state master plan priorities, and (4) assessment outcomes. Indicators and benchmarks are updated and revised on fi ve-year cycles. Graduation rates, retention, minority student enrollment, and scores on learning assessment tests are generally among the major indicators.

Texas 1999–2003 Number of students defi ned as unprepared for college who successfully complete remedial courseworkVirginia 2005–present Retention, access for underprivileged populations, tuition, external research grants, contribution to economic

developmentWashington 1997–98 Graduation rates, retention, undergraduate effi ciency (ratio of credits taken to credits needed to graduate),

faculty productivity, one unique indicator for each university

Source: Rabovsky (2012).

Page 4: Support for Performance-Based Funding: The Role of Political Ideology, Performance, and Dysfunctional Information Environments

764 Public Administration Review • November | December 2014

Second, higher education represents a policy area in which it is challenging, although perhaps not entirely unreasonable, to employ performance-based accountability policies. Th ese institutions have diverse goals and mis-sions, some of which (such as graduation rates and retention) are fairly easy to track quantita-tively, but others (such as personal growth and development, overall contributions to culture, knowledge, and diversity) that are much more

diffi cult to measure. In contrast to some of the other types of public agencies in which some researchers have found performance manage-ment regimes to be eff ective (Behn 2006; Broadnax and Conway 2001; Poister, Pasha, and Edwards 2013; Smith and Bratton 2001), many of which have tended to be heavily oriented toward effi ciency (such as transportation and infrastructure or the administration and disbursement of Social Security benefi ts), higher education is consid-erably more complicated and messy. Th is “messiness” with regard to performance, however, is representative of the experiences that many, if not most, public agencies face (Koppell 2005; Radin 2006).

One of the major “pathologies” (Koppell 2005) that previous research has identifi ed regarding accountability arrangements is the potential for multiple principals to place competing goals on organizations. Th is may be particularly relevant for discussions about accountability in education policy. For instance, recent discussions about accountability and performance reforms in K–12 education have highlighted the diffi culty of applying standardized performance regimes to complex organizations such as schools, which often seek to accomplish a range of goals for students who may have consider-able variation in their aptitudes, interests, and socioeconomic back-grounds (Dee, Jacob, and Schwartz 2013; Ravitch 2010; Watanabe 2007). Similarly, in higher education, there are often confl icting pressures to increase access (particularly for low-income and minor-ity students), to experiment with new and innovative pedagogical approaches (such as online education), to expand research productiv-ity and bring in additional outside funding, and to improve perform-ance (sometimes without consideration of input constraints such as reduced funding or poorly prepared students as a result of failures in the K–12 system) that result from clashes between the diff erent kinds of values that are promoted by elected leaders, oversight boards and agencies, student and parent stakeholder groups, and profes-sional organizations responsible for creating rankings systems. In all cases, this diffi culty is compounded by the nature of the teaching and student learning, which is inherently diffi cult to observe directly and therefore can create dysfunctions in accountability and oversight (Wilson 1989). Th us, insights from experiences with performance-based accountability in higher education are advantageous in terms of identifying challenges and limitations for performance manage-ment, particularly as it relates to complex organizations such as schools, where there are a multitude of goals and where the produc-tion process is not easily observed by political principals.

Finally, as a result of federal and state reporting requirements, higher education has already developed relatively well-established indicators and performance metrics: graduation rates, retention, and degree production (Aldeman and Carey 2009; Archibald and Feldman 2008b; Burke 2005; Carey 2008; Kelly and Schneider 2012; Rabovsky 2012; Titus 2006). As opposed to some other areas,

has highlighted the importance of univer-sity presidents in shaping successful reform (Burke 2005; Dougherty and Reddy 2011; Immerwahr, Johnson, and Gasbarra 2008).

Organizational leaders, such as university presidents, have the capacity to infl uence the eff ectiveness of performance-oriented reforms for a variety of reasons (Dougherty, Natow, and Blanca 2012; Dougherty and Reddy 2011). First, they often have considerable freedom to structure both symbolic rewards and material resources that infl uence the internal incentives that employees have to participate in activities associated with satisfy-ing a performance regime (Dull 2009). Additionally, as the leader of the organization, these actors are often in position to frame debates about mission and performance within the political arena, which can provide meaningful cues to other employees about the extent to which an accountability regime is credible and legitimate (Moynihan 2008).

Despite the central importance of administrative perceptions of performance-based accountability mechanisms, existing scholar-ship has struggled to understand the sources of variation in the ways that administrators react to performance management reforms (Moynihan 2010). Moreover, even the broader literature on bureau-cratic values and administrative politics has been relatively limited in examining the impact of individual-level beliefs among organi-zational leaders and public managers on policy implementation (Meier and O’Toole 2006). While there have been some notable recent developments in eff orts to empirically measure bureaucratic values (Bertelli and Grose 2011; Clinton and Lewis 2008; Clinton et al. 2012; Rabovsky 2014), much of the existing research has relied primarily on the use of proxy measures, such as gender or racial and ethnic characteristics, to characterize the policy prefer-ences of public administrators (Hicklin and Meier 2008; Keiser et al. 2002; Meier and O’Toole 2006; Meier and Stewart 1991, 1992; Nicholson-Crotty, Grissom, and Nicholson-Crotty 2011; Roch and Pitts 2012; Selden 1997; Sowa and Selden 2003).

As a result of these limitations, we know relatively little about the factors that result in managerial acceptance or opposition of performance-oriented reforms. Th is article seeks to contribute to both the literature on performance management, as well as discus-sions about bureaucratic values more generally, by exploring a broad range of variables at both the individual and organizational levels to understand perceptions of the appropriateness of performance-based accountability.

Higher Education Policy and Performance-Based AccountabilityHigher education is a good place to examine these questions, for a couple reasons. First, as previously discussed, this is a timely topic that has received considerable attention throughout the higher education community in recent years. More important, it provides considerable variation on both institutional and political variables (state governance characteristics and external political environment) and organizational variables (mission, size, selectivity, resources, etc.). Th is gives substantial opportunity to examine many of the theoretical concepts related to the impact and importance of these variables with respect to performance regimes.

Higher education represents a policy area in which it is chal-

lenging, although perhaps not entirely unreasonable, to employ performance-based

accountability policies.

Page 5: Support for Performance-Based Funding: The Role of Political Ideology, Performance, and Dysfunctional Information Environments

Support for Performance-Based Funding: The Role of Political Ideology, Performance, and Dysfunctional Information Environments 765

categories for both questions ranged from 0 = “not at all” to 10 = “completely”). While these questions do not specifi cally reference the use of performance metrics, as opposed to more general concep-tions of performance, it is important to note that respondents were primed to think about metric-driven funding models by the follow-ing prompt at the beginning of the survey instrument:

As mentioned in the cover letter, many states link some portion of higher education appropriations to quantifi able performance measures, which can vary tremendously. Given the variation, it is diffi cult to capture all of the possibilities in a fi xed format survey. For this study, when we speak of per-formance data, we are referring to quantitative measures that capture some dimension of student outcomes.

Moreover, the survey also contained a battery of questions imme-diately after the key independent and dependent variables used here that asked respondents to rate the legitimacy and salience for a range of indicators that are commonly discussed in higher education (graduation rates, retention, cost, student diversity, U.S. News & World Report rankings, etc.).

Figure 1 illustrates the distribution of responses for perceptions of the importance of performance for current levels of funding. Not surprisingly, university presidents largely perceive performance as relatively unimportant when it comes to the amount of funding that their university receives. Th e mean score on this question was 2.64, and 66.7 percent of respondents rated the importance of funding for performance as 3 or lower. Interestingly, however, respondents from performance-funding states do, in fact, perceive that funding depends more on performance than respondents from states without such policies (mean score of 3.86 for institutions in states with performance funding compared to 2.22 for those in states without). Given the fi ndings of previous research, which found no connec-tion between performance and funding when examining objective budgeting data, this suggests that performance-funding policies have been somewhat successful in changing perceptions of the impor-tance of performance for funding, even though they often fail to provide substantial material incentives for improved performance.

Turning next to responses about the normative value of perform-ance-based funding (see fi gure 2), university presidents are much

where performance metrics and guidelines for data measurement are less well established, this makes it considerably easier to understand the way actors perceive attempts to measure performance. On the other hand, performance measurement in higher education remains open for discussion and debate, and thus it is not so rigid as to preclude variation in terms of perceptions regarding the validity and legitimacy of competing approaches to measure. In other words, performance data in higher education are well developed enough to connect with theoretical concepts such as effi ciency and equity but also subject to the kinds of persistent debate and disagreement that characterize policy making across a wide range of areas.

Survey of University PresidentsTh e data for this study come from a variety of sources. Most impor-tant among these is a survey of presidents at public colleges and universities, which captures both perceptions of accountability poli-cies and their impacts on higher education and values and beliefs regarding a variety of other issues, including beliefs about the ways that performance information is used by political actors in their state as well as their own political ideology.1 Following the 2011–12 academic school year, paper copies of the survey instrument were mailed to presidents at every public, four-year institution that was listed as bachelor’s degree granting or higher according to the 2010 Carnegie system for classifying colleges and universities.2 Of the 568 institutions that met these criteria, 138 respondents answered the survey, yielding a response rate of 24.3 percent.3 Survey responses were then merged with secondary data gathered from a variety of sources, including data from the Integrated Postsecondary Education Data System (IPEDS), as well as Carl Klarner’s (2012) data set of partisan balance in state government and the State Higher Education Finance Offi cers (SHEF) survey of state spend-ing on higher education. After listwise deletion, this generated 113 observations with usable data.

Perceptions of University Presidents about Performance and Institutional FundingTh e survey employed two questions to measure perceptions of performance-based funding. First, the respondents were asked, “How much does the amount of funding that your institution receives in state appropriations depend on performance?” Th ey were then asked to rate the extent to which they believed that funding (from state appropriations) should depend on performance (response

Figure 1 How Much Does the Amount of Funding That Your Institution Receives in State Appropriations Depend on Performance?

Page 6: Support for Performance-Based Funding: The Role of Political Ideology, Performance, and Dysfunctional Information Environments

766 Public Administration Review • November | December 2014

values systems (Bertelli and Lynn 2006; Friedrich 1940; Meier and O’Toole 2006; Perry and Wise 1990).

Within the context of performance management, public managers are likely to be infl uenced both by perceptions of external rewards and by their own internal values. In the case of higher education, we would expect administrators at institutions that are already perform-ing well on established benchmark indicators to be more accepting of performance-oriented reforms. Th is is both because they are likely to perceive the potential for revenue increases and because they are less likely to see performance-oriented reforms as a sub-stantial threat. In higher education, graduation rates have become an extremely popular metric for assessing institutional performance, both within academic research and within existing performance-funding policies (see table 1) (Archibald and Feldman 2008b; Burke 2005; Dougherty and Reddy 2011; Kelly and Schneider 2012; Rabovsky 2012; Zhang 2009). Because it generally takes one year after a cohort has graduated for these data to be collected and reported, this variable is lagged one year after the cohort graduated, or seven years after the cohort initially enrolled (thus, the gradua-tion rate for the 2004 cohort represents the information that policy makers and university actors had access to during the 2011 school year).4 Many performance-funding policies seek to reward both performance relative to benchmarks and performance trends (i.e., improvements over time), so I also include the fi ve-year change in graduation rates. Data on six-year (150 percent of normal time) graduation rates come from IPEDS.

Hypothesis 1a: University presidents whose institutions have higher graduation rates will be more accepting of perform-ance-based funding.

Hypothesis 1b: University presidents whose institutions experienced a positive trend in graduation rates will be more accepting of performance-based funding.

Another factor that is likely to be important in perceptions of per-formance-based accountability is the extent to which administrators have substantial fi rsthand experience with these policies. I include two measures of experience with performance-based funding. First, I measure the extent to which institutional funding currently depends on performance (using the aforementioned survey item regarding the importance of performance for funding) as a way to capture perceptions of the funding environment. Second, I include a dichotomous measure for whether the institution is located in a state with a performance-funding policy. States were coded as hav-ing performance funding if there was a broad policy that directly linked at least some portion of state appropriations to institutional performance. I relied on a series of reports and surveys to identify such states (Aldeman and Carey 2009; Burke and Minassians 2003; Dougherty and Reid 2007) and, in a few cases in which there was disagreement between sources or ambiguity about the nature of performance funding, I e-mailed state offi cials responsible for fi nance in higher education to confi rm the existence and design of the performance-funding policy.

It is unclear how we would expect exposure to performance-based funding to impact perceptions of the appropriateness of such poli-cies. One possibility is that university presidents who have fi rsthand

more supportive about the prospects of using performance in funding decisions than much of the mainstream narrative about accountability in higher education would suggest. Th e mean value for perceptions of the extent to which funding should depend on performance was 4.74, and 54.8 percent of respondents answered 5 or above. Th us, university presidents, in general, are fairly open to an expansion of performance-based funding, at least in principle.

In some ways, this fi nding that university presidents desire perform-ance-oriented funding is surprising, given the existing narrative about opposition to performance-funding policies on the part of many public universities. On the other hand, this fi nding under-scores the fact that many institutions have become frustrated with the funding environment in their states. As the political climate has become increasingly hostile toward higher education (Ehrenberg 2006; Zumeta 2001; Zumeta et al. 2012), performance-based funding may be viewed by some institutions as a way to increase funding. If, as a university president, you perceive that informal and political processes are likely to result in reduced funding, then a movement toward a more objective and data-driven funding model might be quite attractive, particularly if you know that your institu-tion is performing well on salient dimensions of performance. Th is assumes, however, that the funding policy will be crafted in a way such that it is based on reasonable expectations and fair treatment of public universities and that it actually rewards improved perform-ance. It also assumes that performance management is a serious attempt at improving higher education rather than an underhanded mechanism for policy makers to promote an ideological agenda aimed at privatization and reduced spending.

Administrative Reactions to Performance-Based FundingIn thinking about the motivational bases for administrative behav-ior, scholarship has largely evolved around two competing views. On one side are those who argue that public administrators can generally be conceived of as self-interested, budget-maximizing bureaucrats who are constantly working to exploit their informa-tional advantages in order to avoid meaningful oversight (Finer 1941; Niskanen 1971). In contrast with this self-interested (and somewhat adversarial) framework, others have argued that public managers are better viewed as intrinsically oriented individuals who are largely responsive to professional norms and their own internal

Figure 2. How Much Should the Amount of Funding That Your Institution Receives in State Appropriations Depend on Performance?

Page 7: Support for Performance-Based Funding: The Role of Political Ideology, Performance, and Dysfunctional Information Environments

Support for Performance-Based Funding: The Role of Political Ideology, Performance, and Dysfunctional Information Environments 767

seen as an opportunity to increase revenues. Conversely, institutions in states that have reduced funding for higher education may be more likely to see performance-based initiatives as part of a broader eff ort to decrease spending.

Hypothesis 4a: University presidents in states that had greater spending on higher education between 2010 and 2011 will be more accepting of performance-based funding.

Hypothesis 4b: University presidents in states that increased spending on higher education between 2010 and 2011 will be more accepting of performance-based funding.

In addition to the partisan makeup of the state and state fi scal support for higher education, I include three perceptual measures taken from the survey instrument about the ways that performance informa-tion is used by political actors. To capture perceptions of the extent to which performance information is used in dysfunctional ways, university presidents were asked whether they believe that state actors often manipulate data to make it say whatever they want, whether they perceive that data are primarily used for political posturing rather than substantive policy improvement, and whether they feel that hostile actors often use data to unfairly punish their institution. Exact question wording for these items can be found in the appendix. Th e three items were combined into a single index (α = .715).5

Hypothesis 5: University presidents who perceive that per-formance information use in their state is dysfunctional will be less accepting of performance-based funding.

In addition to these pragmatic motivations, I also expect administra-tive perceptions to be infl uenced by internal values, such as political ideology. Despite the fact that performance management is often trumpeted as a value-neutral, objective alternative to politically biased forms of decision making, many of these policies have in fact been implemented in ways that are clearly driven by ideology and partisanship (Clinton and Lewis 2008; Moynihan and Lavertu 2012; Radin 2006; Walker, Jung, and Boyne 2013). Rabovsky (2014), for instance, found that conservative university presidents were more likely to embrace performance management strategies as a mecha-nism for improving organizational performance and engaging exter-nal stakeholders. Further, given that the ideological underpinnings of performance-based accountability have often been associated with New Performance Management and that many of these initiatives have been embraced by political conservatives, I expect that univer-sity presidents who identify as more politically conservative will have greater acceptance of performance-based funding.

Hypothesis 6: Political conservatism will be positively related to acceptance of performance-based funding.

Because one of the criticisms of performance-based funding is that it will have diff erential impacts on institutions according to their mission and student body characteristics (Burke 2005; Dougherty and Reddy 2011), I also include several variables to capture impor-tant diff erences at the organizational level. Th ese include size (total enrollment), whether the respondent is president at an institution that is classifi ed as a research university according to the 2010 Carnegie basic classifi cation scheme, the percentage of funding that

experience with performance-based funding become comfortable with it over time, and thus they are less resistant to the idea of increased use of performance information in the future. On the other hand, particularly if experiences with performance-based funding have been largely negative or perceived as unfair, the reverse may be true, in which case university presidents will be less accepting of performance-based funding as they are exposed to these policies.

Hypothesis 2a: University presidents who have experienced greater exposure to performance reforms will be more accept-ing of performance-based funding.

Hypothesis 2b: University presidents who have experienced greater exposure to performance reforms will be less accepting of performance-based funding.

Additionally, given the failure of many performance-oriented reforms to live up to their potential, we might also expect administrators to be infl uenced by concerns about the extent to which these eff orts represent meaningful eff orts to improve performance as opposed to political gamesmanship. I employ three variables to capture charac-teristics about the external political and fi scal environment. First, I use the percentage of state legislators who are Democrats, collected from Klarner’s (2012) data set on partisan balance, to capture the partisan makeup of the state. Given that Republican lawmakers are often associated with less support for public spending on higher education (Archibald and Feldman 2006; Tandberg 2009; Weerts and Ronca 2012; Zumeta et al. 2012), we might expect university presidents to perceive that accountability eff orts in more conservative states are often thinly veiled attempts to move toward privatization or reduce state support for higher education. Th us, in states with more Democratic legislators, university presidents may be more likely to see performance management as a less threatening attempt to make a good-faith eff ort at improving governance and rewarding better per-formance. As a result, I expect that university presidents will be more likely to embrace performance-oriented reforms as the percentage of state legislators who are Democratic increases.

Hypothesis 3: University presidents in states with a higher percentage of Democratic legislators will be more accepting of performance-based funding.

Given the recent trends toward reduced funding for higher educa-tion on the part of state governments, I also employ two measures for state support of higher education. First, I include the amount of statewide appropriations for public higher education per full-time equivalent (FTE, measured in constant dollars). To pick up possible eff ects related to recent decreases in state funding, I also include a measure for the change in state appropriations to higher education per FTE that occurred between the 2010–11 and 2011–12 school years (measured in constant dollars). Th is data comes from the SHEF survey of state higher education fi nance offi cers, and I expect that both measures will be positively related to support for perform-ance-based funding. As performance funding was initially growing in popularity during the 1990s and early 2000s, many of these policies tied performance to bonus funds that institutions could receive above and beyond their base appropriations (Burke 2005; Dougherty and Reddy 2011). Th us, in states that have increased spending on higher education, performance-based funding may be

Page 8: Support for Performance-Based Funding: The Role of Political Ideology, Performance, and Dysfunctional Information Environments

768 Public Administration Review • November | December 2014

an eff ort to quell concerns that these policies will adversely impact institutions with large minority student populations (Dougherty and Reddy 2011). Th us, we might expect that leaders at institutions with more minority students will be apprehensive about the poten-tial for these policies to adversely aff ect their organization.

Hypothesis 8: Th e percentage of students who are black or Hispanic will be negatively related to acceptance of perform-ance-based funding.

Finally, I also control for experience, as measured by the number of years that a respondent has been president at the current univer-sity, and for race and gender, although I have no clear directional hypotheses about how these variables will impact acceptance of performance-based funding. Summary statistics for all variables can be found in table 2. In order to limit the potential for common-source bias (Meier and O’Toole 2013) and to gain insight into the importance on a variety of factors in the organizational and political environment, data for this analysis come from both the survey and from publicly available data sets. Diagnostic tests for nonconstant variance revealed heteroskedasticity, so I used robust standard errors in the ordinary least squares (OLS) analysis that follows.

Before proceeding to the multivariate OLS results, it is useful to examine the bivariate relationships between the independent vari-ables of interest and the dependent variable (perceptions of the amount that funding should depend on performance). Th is allows the reader to gain a better sense of the data and the relationship between variables of interest. Figure 3 presents a series of bivariate scatterplots (or, in the case of the two dichotomous independent variables, dotplots with the conditional means and 95 percent con-fi dence intervals) to help demonstrate these relationships. Although one should be cautious about putting too much emphasis on these plots, there are several important trends that emerge. In particular, there is preliminary support, at least on the basis of the bivari-ate relationships, for several of the hypotheses posed earlier. More specifi cally, I fi nd positive relationships between perceptions of the current importance of funding, the percentage of legislators who are Democrats, institutional performance (graduation rates), and the political conservatism of the university president and beliefs that performance-based funding is desirable. Conversely, I fi nd negative relationships for the percentage of funding that comes from state appropriations and perceived dysfunction in the external environ-ment’s use of performance data and acceptance of performance-funding. With this preliminary evidence in mind, I now turn to the multivariate OLS results.

ResultsResults for the OLS regression models are listed in table 3. As previ-ously discussed, I measure exposure to performance-based account-ability in two ways: (1) perceptions of the extent to which funding depends on performance and (2) whether the state has adopted a performance-funding policy. Given the fact that performance-fund-ing policies appear to be an important factor in shaping perceptions of how much institutional funding depends on performance, there are potential issues with endogeneity and multicollinearity for these two measures (r = .308). Th us, I ran separate models with each measure included independently, as well as a third model with both included. Models 1, 2, and 3 refl ect these alternative specifi cations.

the institution receives in state appropriations, as well as the per-centage of students who are either Hispanic of African American. As was the case with graduation rates, these data come from IPEDS.

Given that some public universities rely heavily on state appropria-tions, while others bring most of their money from external sources, there are also potentially some important diff erences in terms of the “publicness” across universities (Bozeman 1987) that may impact reactions to performance-based funding. In particular, I hypothesize that leaders at universities with a larger share of their funding from state appropriations may be more wary of performance-based fund-ing because such policies are likely to have a bigger impact on them.

Hypothesis 7: Th e percentage of funding that the institution receives from state appropriations will be negatively related to acceptance of performance-based funding.

Similarly, the measure for percentage of students who are black or Hispanic is important because much of the discussion about accountability has raised questions about the potential for these pol-icies to have diff erentially negative impacts for institutions that serve traditionally underrepresented groups (see Fryar 2011). Indeed, sev-eral states have now moved to include measures of minority student achievement and racial diversity in their performance policies in

Table 2 Summary Statistics

Mean SD Min. Max.

Funding should depend on performance

4.95 2.26 0 10

Funding does depend on performance

2.75 2.38 0 8

Performance-funding policy 0.22 0.42 0 1

% of legislators Democrats 44.17 13.29 19.05 82

Dysfunctional use of perform-ance information

4.49 1.31 1 7

Graduation rates (latest avail-able)

45.31 14.66 15.65 88.11

Five-year trend in graduation rates

–4.41 39.56 –207.26 94.48

Total enrollment (1,000s) 12.09 9.70 1.03 52.56

% funds from state appropria-tions

25.67 7.67 7.24 41.13

Statewide appropriations to higher education per FTE (constant $)

5,852.25 1,783.19 2,924.87 14,891.30

Δ in state appropriations to higher education per FTE (constant $)

–498.67 396.22 –1,785.01 583.14

Political conservatism (1 = strong liberal, 5 = Strong conservative)

2.73 0.93 1 5

Research (Carnegie) 0.27 0.44 0 1

White 0.89 0.31 0 1

Male 0.77 0.42 0 1

Experience (years at current university)

5.94 4.61 0.17 21

% minority students 20.35 20.72 2.27 96.05

Observations 113

Page 9: Support for Performance-Based Funding: The Role of Political Ideology, Performance, and Dysfunctional Information Environments

Support for Performance-Based Funding: The Role of Political Ideology, Performance, and Dysfunctional Information Environments 769

With regard to the external political climate, I fi nd that perceptions of the extent to which funding should be dependent on performance are positively related to the percentage of state legislators who are Democrats. A one-stand-ard-deviation increase in the percentage of Democratic legislators results in more than a 0.4-point increase in perceptions of the extent to which funding should depend on perform-ance (13.29 * 0.031 = 0.41). Conversely,

perceptions of the extent to which performance information is used in a dysfunctional manner are negatively related to perceptions of the extent to which funding should depend on performance. A one-standard-deviation increase in beliefs that performance information is used in dysfunctional ways within the political process (1.31) results in a 0.87-point decrease in acceptance of performance-based funding. Contrary to expectations, I do not fi nd a statistically sig-nifi cant relationship between increased in state spending on higher education per FTE and acceptance of performance funding.

Turning next to the institutional-level variables, objective measures of organizational performance (graduation rates) are positively related

Taken together, there are several important findings that emerge from these models. First, with regard to exposure to perform-ance-based accountability, I find that perceptions of the extent to which funding does depend on performance are positively related to perceptions of the extent to which it should depend on performance. A one-standard-deviation increase in per-ceptions of the importance of performance results in almost a one-point increase in perceptions of how much funding should depend on performance (2.38 * 0.320 = 0.76). Interestingly, however, experiences with performance-funding policies themselves have the opposite effect, although the coefficient is not statistically significant. While not entirely conclusive, this suggests that many problems related to per-formance-funding policies in higher education (particularly with regard to administrative responsiveness) have been a result of issues related to faulty design and implementation of the specific policies that have been adopted rather than an inher-ent flaw in the causal logic of performance-based funding and accountability.

Figure 3 Exploratory Analysis of Independent Variables

Perceptions of the extent to which funding does depend

on performance are positively related to perceptions of the

extent to which it should depend on performance.

Page 10: Support for Performance-Based Funding: The Role of Political Ideology, Performance, and Dysfunctional Information Environments

770 Public Administration Review • November | December 2014

importance that performance should play in guiding institutional funding. A shift from very liberal to very conservative is associated with an increase of more than 4.3 points on perceptions of the extent to which funding should depend on performance.6 Finally, I fi nd that white respondents are associated with increased acceptance of performance-based funding.

Discussion and Directions for Future ResearchOverall, the fi ndings from the empirical analysis make a number of notable contributions to our understanding of leadership and mana-gerial responses to performance management regimes. Th e fi nding that political ideology is strongly related to perceptions of perform-ance-based funding has several implications for theoretical research on bureaucratic values and administrative responses to external accountability (Bertelli and Lynn 2006; Bohte and Meier 2000). Most important, while performance regimes are often promoted as an apolitical, value-neutral reform, these results suggest that such claims should be approached with considerable skepticism. Th e fact that the partisan makeup of the state legislature infl uences percep-tions of performance-based accountability only reinforces this point. Despite many reformers’ eff orts in recent years to pursue bipartisan-ship for performance-based reforms, beliefs about the appropriate role of performance information in governing public institutions continue to be ideologically charged.

One interesting prospect for future research on this topic would be to explore the causal mechanisms for this divisiveness. It may be the case that diff erences in opinions related to the legitimacy of per-formance management are driven by deep normative beliefs related to the appropriateness of results-oriented government and the valid-ity of quantitative data. Rabovsky, (2014), for instance, found that political conservatism was positively related to organizational use of data for internal management. Alternatively, it may be the case that opinions about performance management are driven more by heu-ristics and group attachments, where political conservatives are more favorable to them because they perceive that performance manage-ment is often promoted by other conservatives. In other words, is this a clash of worldviews and ideologies or simply a confl ict related to political partisanship and the way that people interpret reform eff orts? As reformers think about ways to “depoliticize” perform-ance management, these questions will be of central importance. One prospect is that, over time, performance reforms will become less ideologically charged as more common ground is found and they become more routinized, so it will be important to continue to examine these issues in the years to come.

In keeping with this theme, this article also found important eff ects from the external political environment. As one might expect, public administrators are not likely to be receptive to performance-based reforms if they perceive that the information and data generated by such reforms are likely to be used for political rather than substantive

purposes. Unfortunately, existing research sug-gests that creating forums and environments in which performance information is likely to be taken seriously and not abused for political purposes will be a diffi cult task (Moynihan 2008; Sabatier and Jenkins-Smith 1993). One hypothesis is that the creation of formalized funding policies (such as performance-funding

to perceptions of the extent to which performance should govern funding levels. A one-standard-deviation increase in institutional graduation rates (14.66) is associated with a 0.43-point increase in perceptions of the extent to which performance should be important. Conversely, a one-standard-deviation increase in the percentage of institutional funding that comes from state appropriations is associ-ated with 0.51-point decrease in support for performance-oriented funding. I do not fi nd statistically signifi cant relationships for institutional size, mission, or student body demographics.

Th ere are also a few notable fi ndings for leadership characteristics and internal beliefs. Most important, I fi nd that political conserva-tism is positively related to perceptions of the

Table 3 How Much Should the Amount of Funding That Your Institution Receives in State Appropriations Depend on Performance?

(1) (2) (3)

Experiences with Performance-FundingFunding does depend on performance 0.315*** 0.329***

(4.29) (4.24)Performance-funding policy 0.117 –0.362

(0.22) (–0.71)External Political and Fiscal EnvironmentDysfunctional use of performance informa-

tion–0.449* –0.349* –0.441*

(–2.62) (–2.06) (–2.56)% of legislators Democrats 0.032* 0.027+ 0.031*

(2.22) (1.76) (2.17)Statewide appropriations to higher educa-

tion per FTE (constant $1,000s) 0.098 0.101 0.075

(0.92) (0.79) (0.71)Δ in statewide appropriations to higher

education per FTE (constant $1,000s)0.001 0.000 0.000

(1.14) (0.84) (0.93)

Organizational Context and PerformanceGraduation rates (latest available) 0.031* 0.020 0.029*

(2.34) (1.17) (2.06)Performance trend (fi ve-year change in

graduation rate)0.006 0.008+ 0.005

(1.55) (1.97) (1.45)Research (Carnegie) –0.363 –0.150 –0.304

(–0.53) (–0.20) (–0.43)Total enrollment (1,000s) 0.003 0.008 0.003

(0.12) (0.25) (0.11)% minority students 0.011 0.009 0.010

(1.15) (0.95) (1.04)% funding from state appropriations –0.072* –0.068* –0.066*

(–2.30) (–2.07) (–2.05)Leadership Values and DemographicsPolitical conservatism 0.840*** 0.860*** 0.854***

(4.28) (4.14) (4.38)White 1.300+ 1.084 1.254+

(1.88) (1.64) (1.84)Male 0.284 0.283 0.290

(0.69) (0.60) (0.72)Experience (years at current university) –0.028 –0.053 –0.025

(–0.71) (–1.41) (–0.63)

Constant 1.203 2.381 1.271(0.73) (1.30) (0.78)

Observations 110 110 110Adjusted R2 0.327 0.218 0.324

Note: t-statistics in parentheses.+p < .10; *p < .05; **p < .01; ***p < .001.

Political conservatism is positively related to perceptions of the importance that perform-

ance should play in guiding institutional funding.

Page 11: Support for Performance-Based Funding: The Role of Political Ideology, Performance, and Dysfunctional Information Environments

Support for Performance-Based Funding: The Role of Political Ideology, Performance, and Dysfunctional Information Environments 771

the individual policies that have been adopted and implemented than problems with the basic logic of performance-based accountabil-ity. It is important to note that as states (and perhaps the federal government) experiment with these types of policies in the future, they may be able to learn lessons from previous failures in performance management, which could result in more eff ective accountability mechanisms moving forward.

Finally, while this article focused extensively on understanding leadership perceptions regarding performance regimes, it is impor-tant to note that leadership commitment alone is not likely to be suffi cient to secure buy-in from lower-level employees. One promis-ing area for future research to explore relates to the mechanisms that allow leaders to convince other administrative actors to take these accountability eff orts seriously (Dull 2009; Moynihan and Lavertu 2012; Sanger 2008). In particular, it is important to understand how the importance of leadership commitment might vary across organizational and policy contexts. For example, universities have often been characterized as loosely coupled systems in which leaders have less control over employees than is the case in other organiza-tions (Cohen and March 1986). Th us, it may be that leadership commitment is even more important in other policy contexts. As the growing literature on performance management continues to explore causal mechanisms on management and performance, com-parative studies that exploit variation in both organizational design and policy context are vital to push theory forward.

Performance-based funding reforms have become incredibly popular in recent years, but there has been remarkably little scholarly atten-tion to questions about managerial perceptions of these eff orts. Th is study found that administrative perceptions of performance-based regimes are driven by a variety of factors, including both pragmatic concerns and ideological values. In doing so, it also uncovered a number of potential shortcomings with existing performance-funding policy eff orts, and suggests that while administrators are relatively open to the idea of performance-based reforms, in theory, that they remain skeptical about their implementation in practice.

Notes1. To help ensure that responses were as accurate as possible, presidents were

advised that their answers would remain confi dential.2. Th is study excludes public institutions located in Washington, D.C., as well as

military academies.3. To assess potential threats to external validity posed by nonresponse bias, I ana-

lyzed respondent characteristics across a wide variety of institutional characteris-tics that are often viewed as important within the literature on higher education and found them to be generally representative of the population of institutions that were surveyed (see the appendix).

4. Th e other prominent metric of undergraduate outcomes utilized in performance-funding policies is the freshman to sophomore year retention rate, which is highly correlated with graduation rates (r =.815). I tested alternative models with the retention rate specifi ed as the metric of organizational performance, and the results are consistent with those for presented.

5. I also ran models with this variable measured as a factor score and principal component, and the results are substantively identical. I report the index in the analysis for greater ease of interpretation.

policies in higher education) could impact the extent to which information is used dysfunc-tionally versus instrumentally. Th is, however, does not appear to be the case with the data I collected. I found no statistically signifi cant diff erences in the levels of dysfunction ascribed to performance information use between respondents in performance-funding states versus those in non-performance-funding states or in states with older programs as compared to newer programs. Th us, a more thorough examination of the causal factors that can create well-functioning performance-based regimes, although beyond the scope of this project, would be a valuable topic for future research. Such an investigation would likely center on a combination of variables related to institutional design and structure (e.g., legislative professionalism and analytical capac-ity, term limits for political actors, the design of the governing or coordinating agency), the extent to which political actors are highly polarized, the design and implementation of data reporting mecha-nisms, and the overall eff ectiveness of state government.

Perhaps the most important fi nding from this article is that performance-funding policies are not associated with favorable managerial preferences for performance-based funding. More specifi cally, I fi nd that performance-funding policies are positively related to perceptions of how much performance matters for current budgets, but they are not related to greater acceptance of perform-ance as a basis for funding. Th e fact that performance-funding policies not only have been largely been ineff ective at shaping objec-tive budgetary incentives (Rabovsky 2012) but also are associated with lower levels of support for performance-based approaches to funding suggests that administrators have often reacted negatively to them (Dougherty, Natow, and Blanca 2012; Dougherty and Reddy 2013), not because they are opposed to performance management in principle but rather because they perceive the policies as ineff ec-tive and perhaps harmful.

Further, while this article focused primarily on state-oriented performance-funding policies, these fi ndings also have implications for recent discussions about proposed federal reforms, such as the Obama administration’s eff orts to tie federal funding and other fi nancial rewards to institutional performance on metrics such as graduation rates and student loan hardship (Rodriguez and Kelly 2014). Indeed, there has already been considerable pushback from many campus leaders about the president’s proposed ratings system (Lederman, Stratford, and Jaschik 2014), which may be explained, in part, by perceptions and experiences related to state performance-funding eff orts.

It remains unclear, however, whether this disconnect is the result of policy design, the lack of incentives for improved performance, or the adversarial nature in which many of these policies have been adopted and imposed on institutions. I fi nd some evidence that university presidents not only are open to the idea of performance-based funding but also, when they perceive that their funding actually depends on organizational performance, they become more comfortable with the idea of further movement toward performance-based accountability.7 Th is suggests that the failure of performance funding in higher education may have more to do with

While performance regimes are often promoted as an apolitical,

value-neutral reform, these results suggest that such claims

should be approached with considerable skepticism.

Page 12: Support for Performance-Based Funding: The Role of Political Ideology, Performance, and Dysfunctional Information Environments

772 Public Administration Review • November | December 2014

6. I also tested for a potential interaction eff ect between the ideology of the univer-sity president and state partisanship, but the result was statistically insignifi cant.

7. It should be noted that there are potential endogeneity concerns here. If univer-sity presidents who favor or oppose performance funding are capable of exerting infl uence on the policy process in a way that brings their preferences into action, then the degree to which funding depends on performance will be, in part, a result of their preference for performance-based funding. I should note, however, that if this endogenous relationship were driving the results, we should expect to see it show up in both the performance-funding policy variable and the percep-tual measure of importance of performance for funding. Th e fact that there is no consistent relationship (performance funding is statistically insignifi cant) suggests that endogeneity is not overly problematic. Nevertheless, this fi nding should be interpreted with some caution given some of these methodological issues.

ReferencesAldeman, Chad, and Kevin Carey. 2009. Ready to Assemble: Grading State Higher

Education Accountability Systems. Washington, DC: Education Sector.Archibald, Robert B., and David H. Feldman. 2006. State Higher Education

Spending and the Tax Revolt. Journal of Higher Education 77(4): 618–44.———. 2008a. Explaining Increases in Higher Education Costs. Journal of Higher

Education 79(3): 268–95.———. 2008b. Graduation Rates and Accountability: Regressions versus Production

Frontiers. Research in Higher Education 49(1): 80–100.Behn, Robert D. 2003. Why Measure Performance? Diff erent Purposes Require

Diff erent Measures. Public Administration Review 63(5): 586–606.———. 2006. Th e Varieties of CitiStat. Public Administration Review 66(3): 332–40.Bertelli, Anthony M., and Christian R. Grose. 2011. Th e Lengthened Shadow

of Another Institution? Ideal Point Estimates for the Executive Branch and Congress. American Journal of Political Science 55(4): 767–81.

Bertelli, Anthony M., and Laurence E. Lynn, Jr. 2006. Madison’s Managers: Public Administration and the Constitution. Baltimore: Johns Hopkins University Press.

Bevan, Gwyn, and Christopher Hood. 2006. What’s Measured Is What Matters: Targets and Gaming in the English Public Health Care System. Public Administration 84(3): 517–38.

Bohte, John, and Kenneth J. Meier. 2000. Goal Displacement: Assessing the Motivation for Organizational Cheating. Public Administration Review 60(2): 173–82.

Bozeman, Barry. 1987. All Organizations Are Public: Bridging Public and Private Organizational Th eories. San Francisco: Jossey-Bass.

Broadnax, Walter D., and Kevin J. Conway. 2001. Th e Social Security Administration and Performance Management. In Quicker, Better, Cheaper? Managing Performance in American Government, edited by Dan Forsythe, 143–75. Albany, NY: Rockefeller Institute Press.

Brudney, Jeff rey L., F. Ted Hebert, and Deil S. Wright. 1999. Reinventing Government in the American States: Measuring and Explaining Administrative Reform. Public Administration Review 59(1): 19–30.

Burke, Joseph C. 2005. Achieving Accountability in Higher Education: Balancing Public, Academic, and Market Demands. San Francisco: Jossey-Bass.

Burke, Joseph C., and Henrik P. Minassians. 2003. Performance Reporting: “Real” Accountability or Accountability “Lite”? Albany, NY: Rockefeller Institute Press.

Carey, Kevin. 2008. Graduation Rate Watch: Making Minority Student Success a Priority. Washington, DC: Education Sector.

Clinton, Joshua D., Anthony Bertelli, Christian R. Grose, David E. Lewis, and David C. Nixon. 2012. Separated Powers in the United States: Th e Ideology of Agencies, Presidents, and Congress. American Journal of Political Science 56(2): 341–54.

Clinton, Joshua D., and David E. Lewis. 2008. Expert Opinion, Agency Characteristics, and Agency Preferences. Political Analysis 16(1): 3–20.

Cohen, Michael D., and James G. March. 1986. Leadership and Ambiguity: Th e American College President. 2nd ed. Cambridge, MA: Harvard Business Press.

Complete College America. 2010. Th e Path Forward. http://www.completecollege.org/path_forward/ [accessed December 17, 2010].

Dee, Th omas S., Brian Jacob, and Nathaniel L. Schwartz. 2013. Th e Eff ects of NCLB on School Resources and Practices. Educational Evaluation and Policy Analysis 35(2): 252–79.

Delta Cost Project. 2012. College Spending in a Turbulent Decade: Findings From the Delta Cost Project. Washington, DC: Delta Cost Project at American Institutes for Research.

Dougherty, Kevin J., Rebecca S. Natow, and Vega Blanca. 2012. Popular but Unstable: Explaining Why State Performance Funding Systems in the United States Often Do Not Persist. Teachers College Record 114(3): 1–41.

Dougherty, Kevin J., and Vikash Reddy. 2011. Th e Impacts of State Performance Funding Systems on Higher Education Institutions: Research Literature Review and Policy Recommendations. Working Paper no. 37, Community College Research Center, Teachers College, Columbia University. http://ccrc.tc.columbia.edu/publications/impacts-state-performance-funding.html [accessed August 6, 2014].

———. 2013. Performance Funding for Higher Education: What Are the Mechanisms? What Are the Impacts? ASHE Higher Education Report 39(2). Hoboken, NJ: Wiley.

Dougherty, Kevin J., and Monica Reid. 2007. Fifty States of Achieving the Dream: State Policies to Enhance Access to Success in Community Colleges across the United States. New York: Community College Research Center, Teachers College, Columbia University.

Dull, Matthew. 2009. Results-Model Reform Leadership: Questions of Credible Commitment. Journal of Public Administration Research and Th eory 19(2): 255–84.

Ehrenberg, Ronald G., ed. 2006. What’s Happening to Public Education? Westport, CT: Praeger.

Finer, Herman. 1941. Administrative Responsibility in Democratic Government. Public Administration Review 1(4): 335–50.

Franklin, Aimee L. 2000. An Examination of Bureaucratic Reactions to Institutional Controls. Public Performance and Management Review 24(1): 8–21.

Friedrich, Carl J. 1940. Public Policy and the Nature of Administrative Responsibility. In Public Policy, edited by Carl J. Friedrich and Edward S. Mason, 3–24. Cambridge, MA: Harvard University Press.

Fryar, Alisa Hicklin. 2011. Th e Disparate Impacts of Accountability—Searching for Causal Mechanisms. Paper Presented at the 11th Public Management Research Association Conference, Syracuse, NY.

Gillen, Andrew. 2013. Selling Students Short: Declining Teaching Loads at Colleges and Universities. Washington, DC: Education Sector.

Gilmour, John B., and David E. Lewis. 2006. Does Performance Budgeting Work? An Examination of the Offi ce of Management and Budget’s PART Scores. Public Administration Review 66(5): 742–52.

Herbst, Marcel. 2007. Financing Public Universities: Th e Case of Performance Funding. Dordrecht, Netherlands: Springer.

Hicklin, Alisa K., and Kenneth J. Meier. 2008. Race, Structure, and State Governments: Th e Politics of Higher Education Diversity. Journal of Politics 70(03): 851–60.

Hou, Yilin, Robin S. Lunsford, Katy C. Sides, and Kelsey A. Jones. 2011. State Performance-Based Budgeting in Boom and Bust Years: An Analytical Framework and Survey of the States. Public Administration Review 71(3): 370–88.

Immerwahr, John, Jean Johnson, and Paul Gasbarra. 2008. Th e Iron Triangle: College Presidents Talk about Costs, Access, and Quality. Report no. 08-2, National Center for Public Policy and Higher Education. http://www.highereducation.org/reports/iron_triangle/IronTriangle.pdf [accessed August 6, 2014].

Joyce, Phillip G., and Susan S. Th ompkins. 2002. Managing for Results in State Government: Evaluating a Decade of Reform. In Meeting the Challenges of Performance-Oriented Government, edited by Kathryn Newcomer, Cheryl Boom

Page 13: Support for Performance-Based Funding: The Role of Political Ideology, Performance, and Dysfunctional Information Environments

Support for Performance-Based Funding: The Role of Political Ideology, Performance, and Dysfunctional Information Environments 773

Piotrowski, Suzanne J., and David H. Rosenbloom. 2002. Nonmission-Based Values in Results-Oriented Public Management: Th e Case of Freedom of Information. Public Administration Review 62(6): 643–57.

Poister, Th eodore H., Obed Q. Pasha, and Lauren Hamilton Edwards. 2013. Does Performance Management Lead to Better Outcomes? Evidence from the U.S. Public Transit Industry. Public Administration Review 73(4): 625–36.

Rabovsky, Th omas M. 2012. Accountability in Higher Education: Exploring Impacts on State Budgets and Institutional Spending Patterns. Journal of Public Administration Research and Th eory 22(4): 675–700.

———. 2014. Using Data to Manage for Performance at Public Universities. Public Administration Review 74(2): 260–72.

Radin, Beryl A. 2006. Challenging the Performance Movement: Accountability, Complexity, and Democratic Values. Washington, DC: Georgetown University Press.

Ravitch, Diane. 2010. Th e Death and Life of the Great American School System: How Testing and Choice Are Undermining Education. New York: Basic Books.

Roch, Christine H., and David W. Pitts. 2012. Diff ering Eff ects of Representative Bureaucracy in Charter Schools and Traditional Public Schools. American Review of Public Administration 42(3): 282–302.

Rodriguez, Awilda, and Andrew P. Kelly. 2014. Access, Aff ordability, and Success: How Do America’s Colleges Fare and What Could It Mean for the President’s Ratings Plan? Washington, DC: American Enterprise Institute.

Sabatier, Paul A., and Hank C. Jenkins-Smith. 1993. Policy Change and Learning: An Advocacy Coalition Approach. Boulder, CO: Westview Press.

Sanford, Th omas, and James M. Hunter. 2010. Impact of Performance Funding on Retention and Graduation Rates. Paper Presented at the Association for the Study of Higher Education Conference, Indianapolis, IN.

Sanger, Mary Bryna. 2008. Getting to the Roots of Change: Performance Management and Organizational Culture. Public Performance and Management Review 31(4): 621–53.

Selden, Sally Coleman. 1997. Th e Promise of Representative Bureaucracy: Diversity and Responsiveness in a Government Agency. Armonk, NY: M. E. Sharpe.

Shin, Jung-cheol. 2010. Impacts of Performance-Based Accountability on Institutional Performance in the U.S. Higher Education 60(1): 47–68.

Smith, Dennis C., and William J. Bratton. 2001. Performance Management in New York City: CompStat and the Revolution in Police Management. In Quicker, Better, Cheaper? Managing Performance in American Government, edited by Dan Forsythe, 453–82. Albany, NY: Rockefeller Institute Press.

Sowa, Jessica E., and Sally Coleman Selden. 2003. Administrative Discretion and Active Representation: An Expansion of the Th eory of Representative Bureaucracy. Public Administration Review 63(6): 700–710.

Tandberg, David. 2009. Interest Groups and Governmental Institutions: Th e Politics of State Funding of Public Higher Education. Educational Policy 24(5): 735–78.

Th omas, Virginia L. 2001. Restoring Government Integrity through Performance, Results, and Accountability. In Quicker, Better, Cheaper? Managing Performance in American Government, edited by Dan Forsythe, 113–42. Albany, NY: Rockefeller Institute Press.

Th urmaier, Kurt M., and Katherine G. Willoughby. 2001. Policy and Politics in State Budgeting. Armonk, NY: M. E. Sharpe.

Titus, Marvin A. 2006. No College Student Left Behind: Th e Infl uence of Financial Aspects of a State’s Higher Education Policy on College Completion. Review of Higher Education 29(3): 293–317.

U.S. General Accounting Offi ce (GAO). 2005a. Performance Budgeting: Eff orts to Restructure Budgets to Better Align Resources with Performance. Washington, DC: U.S. Government Printing Offi ce. GAO-05-117SP.

———. 2005b. Performance Budgeting: State Experiences and Implications for the Federal Government. Washington, DC: U.S. Government Printing Offi ce. AFMD-93-41.

Jennings, and Allen Lomax, 61–96. Washington, DC: American Society for Public Administration.

Julnes, Patria De Lancer. 2008. Performance Measurement: Beyond Instrumental Use. In Performance Information in the Public Sector: How It Is Used, edited by Wouter Van Dooren and Steven Van de Walle, 58–71. Basingstoke, UK: Palgrave Macmillan.

Keiser, Lael R., Vicky M. Wilkins, Kenneth J. Meier, and Catherine A. Holland. 2002. Lipstick and Logarithms: Gender, Institutional Context, and Representative Bureaucracy. American Political Science Review 96(3): 553–64.

Kelly, Andrew P., and Mark Schneider, eds. 2012. Getting to Graduation: Th e Completion Agenda in Higher Education. Baltimore: Johns Hopkins University Press.

Klarner, Carl. 2012. Measures of Partisan Balances of State Government. http://www.indstate.edu/polisci/klarnerpolitics.htm [accessed August 6, 2014].

Koppell, Jonathan G. S. 2005. Pathologies of Accountability: ICANN and the Challenge of “Multiple Accountabilities Disorder.” Public Administration Review 65(1): 94–108.

Kroll, Alexander. 2013. Th e Other Type of Performance Information: Nonroutine Feedback, Its Relevance and Use. Public Administration Review 73(2): 265–76.

Lederman, Doug, Michael Stratford, and Scott Jaschik. 2014. Colleges and Analysts Respond to Obama Ratings Proposal. Inside Higher Education, February 7. http://www.insidehighered.com/news/2014/02/07/colleges-and-analysts-respond-obama-ratings-proposal [accessed August 6, 2014].

Lewin, Tamar. 2013. Obama’s Plan Aims to Lower Cost of College. New York Times, August 22.

McLendon, Michael K., James C. Hearn, and Steven B. Deaton. 2006. Called to Account: Analyzing the Origins and Spread of State Performance-Accountability Policies for Higher Education. Educational Evaluation and Policy Analysis 28(1): 1–24.

Meier, Kenneth J. 2009. Policy Th eory, Policy Th eory Everywhere: Ravings of a Deranged Policy Scholar. Policy Studies Journal 37(1): 5–11.

Meier, Kenneth J., and Laurence J. O’Toole, Jr. 2006. Bureaucracy in a Democratic State: A Governance Perspective. Baltimore: Johns Hopkins University Press.

———. 2013. Subjective Organizational Performance and Measurement Error: Common Source Bias and Spurious Relationships. Journal of Public Administration Research and Th eory 23(2): 429–56.

Meier, Kenneth J., and Joseph Stewart. 1991. Th e Politics of Hispanic Education: Un paso pa’lante y dos pa’tras. Albany: State University of New York Press.

———. 1992. Th e Impact of Representative Bureaucracies: Educational Systems and Public Policies. American Review of Public Administration 22(3): 157–71.

Moynihan, Donald P. 2008. Th e Dynamics of Performance Management: Constructing Information and Reform. Washington, DC: Georgetown University Press.

———. 2010. Performance-Based Bureaucracy. In Th e Oxford Handbook of American Bureaucracy, edited by Robert F. Durant, 278–302. New York: Oxford University Press.

Moynihan, Donald P., and Stéphane Lavertu. 2012. Does Involvement in Performance Management Routines Encourage Performance Information Use? Evaluating GPRA and PART. Public Administration Review 72(4): 592–602.

Moynihan, Donald P., and Sanjay K. Pandey. 2010. Th e Big Question for Performance Management: Why Do Managers Use Performance Information? Journal of Public Administration Research and Th eory 20(4): 849–66.

Nicholson-Crotty, Jill, Jason A. Grissom, and Sean Nicholson-Crotty. 2011. Bureaucratic Representation, Distributional Equity, and Democratic Values in the Administration of Public Programs. Journal of Politics 73(02): 582–96.

Niskanen, William A., Jr. 1971. Bureaucracy and Representative Government. Chicago: Aldine, Atherton.

Perry, James L., and Lois Recascino Wise. 1990. Th e Motivational Bases of Public Service. Public Administration Review 50(3): 367–73.

Page 14: Support for Performance-Based Funding: The Role of Political Ideology, Performance, and Dysfunctional Information Environments

774 Public Administration Review • November | December 2014

Weisbrod, Burton A., Jeffrey P. Ballou, and Evelyn D. Asch. 2008. Mission and Money: Understanding the University. New York: Cambridge University Press.

Wilson, James Q. 1989. Bureaucracy: What Government Agencies Do and Why Th ey Do It. New York: Basic Books.

Zhang, Liang. 2009. Does State Funding Aff ect Graduation Rates at Public Four-Year Colleges and Universities? Educational Policy 23(5): 714–31.

Zumeta, William. 2001. Public Policy and Accountability in Higher Education: Lessons from the Past and Present for the New Millennium. In Th e States and Public Higher Education Policy: Aff ordability, Access, and Accountability, edited by Donald E. Heller, 155–97. Baltimore: Johns Hopkins University Press.

Zumeta, William, David W. Breneman, Patrick M. Callan, and Joni E. Finney. 2012. Financing American Higher Education in the Era of Globalization. Cambridge, MA: Harvard Education Press.

Van de Walle, Steven, and Tony Bovaird. 2007. Making Better Use of Information to Drive Improvement in Local Public Services: A Report for the Audit Commission. Birmingham, UK: School of Public Policy, University of Birmingham.

Volkwein, J. Fredericks, and David Tandberg. 2008. Measuring Up: Examining the Connections among State Structural Characteristics, Regulatory Practices, and Performance. Research in Higher Education 49(2): 180–97.

Walker, Richard M., Chan Su Jung, and George A. Boyne. 2013. Marching to Diff erent Drummers? Th e Performance Eff ects of Alignment between Political and Managerial Perceptions of Performance Management. Public Administration Review 73(6): 833–44.

Watanabe, Maika. 2007. Displaced Teacher and State Priorities in a High-Stakes Accountability Context. Educational Policy 21(2): 311–68.

Weerts, David J., and Justin M. Ronca. 2012. Understanding Diff erences in State Support for Higher Education across States, Sectors, and Institutions: A Longitudinal Study. Journal of Higher Education 83(2): 155–85.

Table A1 Comparison of Survey Respondents and Nonrespondents

Respondents

Respondents after Listwise Deletion Nonrespondents

Average enrollment 11,957 12,091 13,499Average freshmen SAT/ACT

scores1034 1030 1035

Median total revenues (in millions)

$108 $167.01 $183.66

Average % minority students 20.34% 20.35% 25.06%% research universities 28.99% 26.55% 29.77%% master’s universities 47.1% 53% 45.1%% bachelor’s universities 23.9% 20.4% 25.1%Average % funding from state

appropriations25.55% 25.66% 25.58%

% in states with performance funding

26.09% 22.12% 24.42%

Share of Institutions in Se-lected Regions of Country

Far West (AK, CA, HI, NV, OR, WA)

9 (6.5%) 8 (7.1%) 47 (10.9%)

Great Lakes (IL, IN, MI, OH, WI) 21 (15.2%) 18 (15.9%) 60 (14%)Mid-Atlantic (DC, DE, MD, NJ,

NY, PA)18 (13.%) 11 (9.7%) 89 (20.7%)

New England (CT, MA, ME, NH, RI, VT)

7 (5.1%) 6 (5.3%) 29 (6.7%)

Plains (IA, KS, MN, MO, ND, NE, SD)

18 (13.0%) 15 (13.2%) 33 (7.7%)

Rocky Mountains (CO, ID, MT, UT, WY)

8 (5.8%) 5 (4.4%) 22 (5.1%)

Southeast (AL, AR, FL, GA, KY, LA, MS, NC, SC, TN, VA, WV)

41 (29.7%) 35 (31%) 109 (25.4%)

Southwest (AZ, NM, OK, TX) 16 (11.6%) 15 (13.3%) 41 (9.5%)Total Number of Universities 138 113 430

Table A2 Dysfunctional Use of Performance Information

Now please think about the role that performance information plays in higher education policy making in your state, and indicate whether you agree or disagree with the following statements.

Item (1 = strongly disagree, 7 = strongly agree) Mean Value

If people want to, they can manipulate performance data to make it say whatever they want.

4.63

Performance data is used more for political posturing than it is for objectively assessing institutional productivity.

4.60

I worry that performance data will be used to unfairly punish my institution.

4.31

Appendix