The Sustainable Campus Observatory: A Comprehensive Framework for Benchmarking University Performance Toward Sustainability

Mathias Bouckaert

Abstract

This paper aims to address the lack of global assessment approaches for benchmarking the sustainable performance of higher education institutions. Such approaches are critical for universities wishing to analyse and improve their performance through the identification of best practices among their peers. Over the past several years, comparative assessment has become a central concern in the academe, especially with the introduction of global rankings. Besides rankings, two other broad categories of assessment can be differentiated: specific research projects and sustainability assessment frameworks. In a first step, a comparison of the three identified categories is undertaken on the basis of four criteria for success: relevance, measurability, holism and digestibility. We observe that while the three types of methods differ in their strengths, none of them manages to meet all the criteria. In a second step, we introduce a new assessment tool: the Sustainable Campus Observatory. Built on three pillars, i.e. institutions' missions, operations and external outreach, it proposes a benchmarking solution to university stakeholders wishing to identify areas for improvement. Lying at the intersection of existing evaluation practices, the Observatory integrates the multiple dimensions of sustainability in HEIs through a comprehensive framework that ensures intelligibility and comparability of results.

Keywords

Assessment · Benchmarking · Sustainable development · University · Performance

M. Bouckaert (✉)
Sustainable Campus Chair, REEDS Laboratory, University of Versailles Saint-Quentin-en-Yvelines, 5-7 Bd. D'Alembert, 78280 Guyancourt, France
e-mail: [email protected]

© Springer International Publishing Switzerland 2015
W. Leal Filho et al. (eds.), Integrative Approaches to Sustainable Development at University Level, World Sustainability Series, DOI 10.1007/978-3-319-10690-8_26


1 Introduction

The imperative to establish sustainability as a guiding principle in the management of our societies is now widely acknowledged (IPCC 2013). In this regard, higher education institutions (HEIs) play a leading role due to their collective nature along with their position at the forefront of knowledge. Their missions of education and research are driving forces of tomorrow's society, and their operations must meet the highest requirements of exemplarity.

In this context, evaluation is key for any institution seeking to understand its past performance and to correct possible deficiencies. Along with clear communication of results, it also enables institutions to compare their performance with that of their peers and to identify ways to improve.

The assessment of university performance towards sustainable development has given rise to a wide range of research. Yet the literature remains weak because it lacks both baselines and a solid common methodological framework (Wright 2002; Corcoran et al. 2004; Swearingen White 2009; Vaughter et al. 2013).

Developing a universal and concrete action plan for HEIs to reach sustainability is a complex task. Most proposals emphasize the need to be multidimensional and trans-disciplinary, to adopt long-term perspectives, and to gather all stakeholders to co-construct adapted solutions (Yarime and Tanaka 2012). In addition, due to its integrated nature, sustainability must be in line with HEIs' core missions of education and research as well as with their own imperatives of excellence and financial health (Hua 2013).

Existing tools developed to measure university performance in sustainability often fail to fully address these issues, for several reasons. Firstly, they fail to capture the multidimensional nature of a sustainable HEI by focusing too narrowly on its environmental externalities (Vaughter et al. 2013; Yarime and Tanaka 2012). Secondly, they hardly provide mechanisms for cross-institutional comparison because they rely on a large number of indicators, often qualitative (Shriberg 2002; Karatzoglou 2012). Finally, current assessment tools do not sufficiently address the systemic essence of sustainability (Shriberg 2002), nor do they take into account the interests and core missions of universities, which are nevertheless at the heart of any sustainable institution.

There is therefore a need to improve assessment methods, and this paper seeks to contribute to that process. It is structured around two main sections. The first analyses the different types of tools developed to assess university performance; these are ultimately reviewed against four criteria: relevance, measurability, holism and digestibility. The second introduces a new framework, the Sustainable Campus Observatory: a benchmarking tool designed to provide comparative assessments of sustainable institutions from a comprehensive and multidimensional perspective.


2 Towards a Typology of HEIs' External Assessment Methods

Evaluation is needed by every organization implementing actions in order to reach a goal. This reflexivity allows past operations to be analysed and corrected so as to reach better future performance. When combined with effective communication, external assessment also provides valuable information on the performance of similar institutions.

This section reviews the various methods that have been developed to measure university performance. Given the large variety of approaches, we focus on external assessments, thereby excluding internal management and monitoring systems as well as political evaluations. The methods considered are grouped into three main categories according to their respective purposes and methodologies: global rankings, specific research assessments and sustainability assessment tools. Each of these categories is briefly described below.

2.1 General Rankings

General academic rankings appeared in the early 2000s. The most famous are the Shanghai Academic Ranking of World Universities (ARWU), the Times Higher Education World University Ranking (THE) and the QS World University Ranking (QS). Every year, ranking organizations produce a classification of the "world's best universities" based on a small set of indicators, never more than the 13 used by the THE ranking. Each set is then compiled into a single composite indicator forming the final score assigned to each institution.
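To make this aggregation concrete, the following minimal sketch computes such a composite score. The weights follow the THE column of Table 1; the institution's per-dimension scores are invented for illustration and do not reproduce any real ranking data.

```python
# Minimal sketch of the weighted aggregation behind global rankings.
# Weights follow the THE column of Table 1 (as fractions of the final
# score); the per-dimension scores of the institution are hypothetical.

THE_WEIGHTS = {
    "reputation": 0.33,
    "teaching": 0.15,
    "research": 0.42,
    "internationalisation": 0.075,
    "knowledge_transfer": 0.025,
}

def composite_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Collapse per-dimension scores (0-100 scale) into one final score."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[dim] * scores[dim] for dim in weights)

# Hypothetical institution with per-dimension scores out of 100.
example = {
    "reputation": 80.0,
    "teaching": 65.0,
    "research": 90.0,
    "internationalisation": 70.0,
    "knowledge_transfer": 55.0,
}

print(f"Final score: {composite_score(example, THE_WEIGHTS):.1f}")  # 80.6
```

Whatever normalization each ranking applies beforehand, this single-number collapse is what gives rankings their digestibility, and at the same time what hides the trade-offs between dimensions.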

The ARWU ranking is often regarded as the most consistent of the three (Altbach 2010). Methodologically, it mainly measures research performance through quantitative indicators. The other two rankings (THE and QS) are regarded as more subjective because of their reliance on reputation indicators. Nevertheless, they look beyond research performance by incorporating other criteria such as internationalization or teaching quality. Table 1 shows the relative weights assigned to the different criteria in each of the three rankings.

The introduction of international university rankings gave rise to considerable controversy, with critics questioning their validity (Billaut et al. 2010; Enserink 2007; Florian 2007).

Regarding assessment methodology, rankings are accused of favouring research over other university missions. As shown in Table 1, research represents on average nearly half of an institution's score, while teaching never accounts for more than 20 % of final scores in the three tools considered.

Because of the indicators used to measure research performance, global rankings are also charged with favouring English-language publications as well as the natural sciences at the expense of the humanities and social sciences (Rauhvargers 2013).


The indicators selected to assess teaching quality are also limited (Altbach 2010). The ARWU ranking measures teaching through the number of Nobel Prize or Fields Medal winners among alumni. As for THE, more than half of its teaching score is based on doctoral education. One might question the relevance of using such indicators to measure teaching performance, since they seem more closely related to research activities.

Alongside methodological weaknesses, a second point of controversy concerning global rankings regards their impact. Their introduction was followed by extensive media coverage and increased consideration from academics and government officials. In universities, this has led various stakeholders to change their actions and objectives in order to reach better positions in the rankings. New phenomena have appeared, such as cross-citation between researchers or the multiple publication of a single scientific contribution with only marginal changes. Such practices harm the evolution of science by stripping the citation index of its value as an indicator of quality and by flooding the literature with worthless repetitions, a problem we could refer to as "infobesity".

Several recent initiatives have emerged in response to these problems. Among them, the U-Multirank project stands out (Van Vught and Ziegele 2012). Its originality is to be much more comprehensive than existing rankings, taking into account a larger number of assessment criteria. It does not provide any aggregated final score but instead produces a multidimensional profile for each institution.

The main contribution of the U-Multirank project lies in its exhaustiveness and its appropriate response to various stakeholders, especially students. However, the absence of a single final score may limit its media outreach compared to existing rankings, which have the advantage of being highly readable.

Table 1 Relative weights of specific dimensions in global university rankings (in % of final score)

Dimension              ARWU^a (%)   THE ranking (%)   QS ranking (%)   Average (%)
Reputation^b               0             33                50              27
Teaching                  15             15                20              17
Research                  85             42                20              49
Internationalisation       0             7.5               10               6
Knowledge transfer         0             2.5                0               1

Remarks:
^a The ARWU ranking includes an indicator of institutional effectiveness, the "Performance per Capita", accounting for 10 % of the final score. It amounts to the overall score divided by the number of full-time-equivalent academic staff. Since this indicator covers both research and teaching, its weight has been shared equally between the two dimensions.
^b Indicators of reputation focus on research and education. In this table they are considered separately to emphasize their specific nature. Reputation is measured by surveys among academics (for the THE and QS rankings) and/or private-sector employers (QS ranking).


2.2 Specific Research Assessments

A second category of initiatives aiming to assess HEIs' performance can be identified in the field of highly specialized research projects. These are usually focused on specific topics and fall within many scientific disciplines, ranging from experimental psychology to public management and socio-economic analysis.

This section is not intended to provide a comprehensive state of scientific knowledge on HEI assessment but rather seeks to give an overview of the wide diversity of research conducted on this subject. Although projects may differ in both their object and their discipline, they all share two characteristics: a high degree of specialization and scientific quality.

A first field of analysis worth noting is the study of teaching and learning performance. The scientific disciplines covering this topic are often grouped under the name of educational sciences and include approaches such as psychology, sociology and neuroscience.

Educational sciences have led to the development of assessment methods for understanding how well universities are teaching their students and, therefore, how they could improve their activities. The PISA and AHELO studies undertaken by the OECD provide well-known illustrations of such initiatives.

The Programme for International Student Assessment (PISA) aims to assess the quality and efficiency of school systems worldwide (OECD 2010a). Although it is not targeted at higher education, the study provides a good example of the measurement of teaching outcomes.

The equivalent of PISA for higher education is rather to be sought in the AHELO (Assessment of Higher Education Learning Outcomes) Programme (OECD 2010b). The initiative is still new and has not yet resulted in an actual assessment; the OECD has just completed its first feasibility studies. Unlike PISA, AHELO does not lead to any classification or ranking. Its aim is to measure what students know and can do once they graduate by testing their generic, and sometimes specific, skills in a comprehensive and comparable manner.

The tool will be of direct benefit to the universities that take part in the programme. Once the assessment is completed, institutions will be provided with anonymous data enabling them to compare their performance against that of their peers and find new ways to improve. One of AHELO's main original features lies in its design around the concept of the added value of educational processes. Thereby, a university welcoming B+ students and turning them into A+ graduates will be judged differently from another attracting A+ students that finally produces A+ graduates (OECD 2010b).
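The value-added idea can be illustrated with a toy calculation. The grade scale below is hypothetical and serves only to make the B+/A+ example from the text computable; AHELO itself measures skills, not letter grades.

```python
# Toy illustration of the "added value" concept: credit the institution
# for the gain between entry and graduation, not for the absolute level
# of its graduates. The GPA mapping is a hypothetical 4.3-point scale.

GPA = {"B": 3.0, "B+": 3.3, "A": 4.0, "A+": 4.3}

def added_value(entry: str, graduation: str) -> float:
    """Learning gain attributable to the educational process."""
    return round(GPA[graduation] - GPA[entry], 2)

# University X turns B+ entrants into A+ graduates;
# University Y admits A+ students and graduates them at the same level.
print(added_value("B+", "A+"))  # 1.0 -> substantial measured contribution
print(added_value("A+", "A+"))  # 0.0 -> no measured gain
```

Under such a metric, University X would be credited for a larger educational contribution even though both institutions produce graduates of the same final level.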

In addition to the measurement of teaching outcomes, universities are also evaluated on their other activities. This is of course the case for research performance, assessed through bibliometric studies with indicators of publication and citation (Moed et al. 1985). Economic analysis has also been used to measure HEIs' efficiency in their teaching and research activities with regard to their level of funding or staff (Kuah and Wong 2011; Kempkes and Pohl 2006).


Besides those core missions, increasing attention is being given to the issue of HEIs' external outreach. Research and assessments are conducted on topics such as the regional economic impact of universities (Uyarra 2008; Kelly et al. 2004) or their social engagement with local communities (Hart et al. 2007).

In the same vein, the economic and managerial literature has been particularly interested in the study of university involvement in innovation and technological development. Arguing that knowledge transfer has become the third mission of HEIs besides teaching and research (Etzkowitz et al. 2000; Wissema 2009), new kinds of assessments have emerged. They focus on issues such as business incubator management (Mian 1997) or university-industry collaboration (Monjon and Waelbroeck 2003).

A final category of HEI performance assessment lies in research projects focusing on sustainable development. Those initiatives are discussed in the following section.

2.3 Sustainability Assessment Tools

Sustainability assessment and management frameworks for HEIs appeared in the early 1990s but really took off in the 2000s. The main purpose of these tools is to support sustainable development initiatives in universities and on campuses. To this end, they provide assessment methods based on indicators that allow institutions to detect new lines of action by analysing their performance over time and against that of their peers.

In a recent review of 16 HEI sustainability assessment tools, Yarime and Tanaka (2012) classified the whole set of indicators into five broad categories: governance, education, research, campus operations and outreach. They found that two categories, governance and campus operations, gather more than 80 % of all indicators (Yarime and Tanaka 2012). Shriberg (2002) reached the same conclusion, noting that assessment methods mainly focus on operational eco-efficiency whereas sustainability needs to be considered across all aspects of the university.

Several key dimensions of universities therefore appear to be under-represented in sustainability assessments. This is particularly the case for education and research. While there is a clear consensus on the need to take them into account, the heart of the debate lies in the question of what is to be measured.

Considering education, the main difficulty lies in defining the expected outcomes of education for sustainability. While existing tools measure the number of courses oriented towards sustainability, many authors call for a greater focus on the actual "skills for sustainability" acquired through education (Vaughter et al. 2013; Davis et al. 2003; Wiek et al. 2011; Rieckmann 2012).

Similar challenges are encountered in the assessment of research for sustainability. Existing assessments mostly consider input indicators, i.e. faculty participation rates or the number of grants awarded to research projects addressing sustainable development issues. Yet even if output indicators were applied, the lack of a common definition of sustainability research would remain a significant obstacle. We cannot confine ourselves to assessing "sustainability-labelled" research projects, knowing that this amounts to rejecting other initiatives addressing future challenges that are still unknown today.

Besides the question of what is to be measured, the design of assessment tools and processes is a second important issue. Existing tools often include more than 150 indicators and rely on institutions to collect their own data through self-reporting. This entails a range of difficulties: complexity for the university staff in charge of the assessment, incompleteness of reports, inability to verify the accuracy of statements, incomparability of results between institutions and, consequently, a potential loss of credibility for the assessments' final outputs. Some authors also note that the complexity and time-consuming nature of assessments may make institutions reluctant to engage in such efforts (Beringer 2006; Glover et al. 2011).

As noted above, the incomparability of findings reduces their ability to provide a real addition to knowledge (Karatzoglou 2012). With regard to assessment tools, comparability is not just a means to rank institutions. It is also a necessary condition for understanding their true performance and finding areas for improvement, since it facilitates the identification and communication of best practices.

However, comparability is of value only if the results compared are themselves relevant. Rauch and Newman (2009) point out that while we need specific metrics and a clear idea of what is to be measured, assessment frameworks must be kept flexible to allow for institutional differences. On the other hand, too much flexibility can result in complex assessments that lack comparability and have limited informative value (Fonseca et al. 2011). These findings suggest a tension between comparability and relevance whenever the latter requires a certain level of flexibility.

2.4 Comparison of Assessment Frameworks in the Light of Four Criteria of Success

Three assessment methods were presented above: global rankings, scientific studies and sustainability assessment frameworks. Each has its own strengths and vulnerabilities. In a study on sustainability assessment frameworks, Shriberg (2002) emphasizes five conditions the ideal evaluation tool should meet. It must: (1) identify important issues while being as specific as possible, (2) be calculable and comparable, (3) move beyond eco-efficiency to consider true sustainability, (4) measure processes and motivations, and (5) stress comprehensibility. In this paper, we build on Shriberg's conditions to highlight four criteria for evaluating the appropriateness of an assessment tool: relevance, measurability, holism and digestibility.

Relevance implies that a tool focuses on important, current issues so as to capture the different dimensions of a sustainable institution in the most appropriate manner. At the same time, this should be done wisely so as to meet the second condition, measurability, which requires a clear and viable methodology ensuring the feasibility of assessments and the comparability of results.


The third criterion, holism, reflects the tool's ability to capture the comprehensive and systemic nature of a sustainable university. The holism criterion implies that sustainability is more than correcting environmental externalities: it takes into account every dimension of HEIs, including finances, efficiency and teaching performance, as well as the links that bind them together.

The last criterion of success is the digestibility of an assessment and its output. It aims to ensure the tool's visibility and impact. Even with the most complex modelling framework, the final result must remain simple and comprehensible to attract people's interest.

Table 2 gives a reading of the three types of assessment previously identified against the four success criteria. The analysis focuses on the categories as a whole; it does not provide information on the individual characteristics of specific tools, which obviously differ within groups.

As can be seen in Table 2, none of the tool categories meets the four criteria for success at once. The strengths of global rankings lie in their clarity, concreteness and high comprehensibility. Nevertheless, owing to poor methodologies, their weak performance on relevance and holism makes them inadequate for properly assessing which are the "best universities".

For their part, academic research projects are most of the time relevant in their specific fields of analysis. This degree of specialization, however, makes it difficult to include the multi-dimensional nature of HEIs, hindering the fulfilment of the holism criterion.

Table 2 Compliance of HEIs' assessment tools with the criteria of relevance, measurability, holism and digestibility

Global rankings (ARWU, THE & QS)
- Relevance: No. Significant biases, weak methodology.
- Measurability: Yes. Definite and objective indicators, except for the reputation surveys (THE & QS).
- Holism: No. Research-oriented; favour English publications and the natural sciences; absence of basic dimensions.
- Digestibility: Yes. Highly intelligible, single final result.

Specific research assessments
- Relevance: Yes. Constant efforts to reduce biases; high consideration for rigor and relevance.
- Measurability: Yes. High complexity, but compliance with scientific protocols and concern for reproducibility.
- Holism: No. Often one-dimensional, highly specialized.
- Digestibility: No. High complexity; no incentives for dissemination outside the academe.

Sustainability assessment frameworks
- Relevance: No. Biased indicators, input-oriented, incompleteness of reports.
- Measurability: No. Self-reporting, incomparability, high complexity for non-specialists.
- Holism: Yes. Multi-dimensional, albeit with undue weight on governance and eco-efficiency.
- Digestibility: No. High complexity, limited transparency.


Furthermore, even with a high level of transparency, specific assessments lack visibility due to their great complexity and the absence of incentives to spread knowledge beyond the academic sphere.

Lastly, sustainability assessment frameworks represent the most advanced category in terms of holism, but they do not fully satisfy this criterion due to an excessive focus on governance and eco-efficiency. They often lack the minimum levels of relevance and comprehensiveness required to measure the true sustainability of institutions. Their deficiencies in measurability and visibility stem from the large number of indicators included and their reliance on universities' self-reporting. Yet such initiatives are still young, and we can expect them to evolve over time.

3 The Sustainable Campus Observatory

University assessment initiatives, whether branded as "sustainable" or not, show several deficiencies. There is thus a need to build new tools or improve existing ones. The Sustainable Campus Observatory seeks to contribute to this process.

3.1 Context and Conceptual Background

The Observatory was developed at the Sustainable Campus Chair of the University of Versailles. The ultimate purpose of the project was to design a benchmarking tool enabling a relevant assessment and effective communication of university performance toward sustainability.

Originally developed to improve industrial management processes, benchmarking is understood as "the search for those best practices that will lead to superior performance for a company" (Camp 1989). When applied to HEIs, benchmarking can be defined as a method of comparison that enables institutions to evaluate their performance against that of their peers in order to identify and learn from best practices.

The methodology adopted for the development of the Observatory was based on two complementary approaches: an analysis of existing assessment tools and a definition of the sustainable university. The resulting model is presented hereafter.

3.2 Methodology

Three main areas were identified for the conceptualization of institutions: university missions, operations and external outreach. Furthermore, the concept of sustainability is divided into two general principles: the responsibility of institutions towards society and the imperative of development. Each of the broad areas is divided into two sub-themes, from which key objectives are derived in relation to the assumed principles of sustainability. The global structure of the Observatory is outlined in Fig. 1. It aims to provide an exhaustive and coherent representation of the various dimensions of the sustainable university.

The first main area, "Missions", covers the core activities of the university, namely teaching and research. Delivering quality education and conducting cutting-edge research is the primary purpose of any institution. In addition to being a key requirement for HEIs' own long-term survival, these activities are central to the sustainability of our societies. Research is at the heart of future innovations, while education is probably the most powerful tool for ensuring intergenerational equity and long-term development.

The second focus of the Observatory is on "Operations" and covers all the functions supporting the core missions of institutions. Allegorically, it can be seen as the fertile ground on which research and teaching can be carried out optimally. The operations dimension therefore brings together issues of governance, campus management, student life and working conditions.

The third area considered in the Observatory is "External Outreach". It includes all the activities of the university explicitly intended to contribute to the creation of social, environmental and economic value for what lies beyond its walls.

The sub-themes identified in the three main areas are broken down into key objectives. In addition to being derived from the various dimensions of universities, each objective is bound to one of the two principles of sustainability, namely responsibility or development. Taking the example of "Teaching", the goal of delivering quality education to a large number of students (the "Quantity/Quality" objective) is in line with the development component of sustainability. The second objective, on the other hand, focuses on accessibility and aims to ensure the right of everybody to receive a quality education regardless of his or her social and financial conditions. Accessibility is thus the application of the responsibility principle to universities' teaching mission.

Fig. 1 General structure of the Sustainable Campus Observatory


Each key objective is then broken down into a set of indicators designed to reflect the actual performance of institutions, keeping in mind the requirements of relevance and measurability. Thereby, performance on the teaching Quantity/Quality objective is determined by a range of information collected on issues such as teaching evaluation programmes, graduate employment rates, the use of digital technology, or the success rate of students adjusted for selection practices.
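As an illustration of this hierarchy (areas broken down into sub-themes, key objectives and indicators), the sketch below encodes a fragment of the framework as a simple data structure. The indicator names and scores are hypothetical and do not reproduce the Observatory's actual indicator set.

```python
# Hypothetical encoding of the Observatory's hierarchy:
# area -> sub-theme -> key objective (tied to one sustainability
# principle) -> indicators. Scores are illustrative values in [0, 1].
from dataclasses import dataclass

@dataclass
class Objective:
    name: str
    principle: str                    # "development" or "responsibility"
    indicators: dict[str, float]

    def score(self) -> float:
        """Unweighted mean of the objective's indicator scores."""
        return sum(self.indicators.values()) / len(self.indicators)

@dataclass
class SubTheme:
    name: str
    objectives: list[Objective]

@dataclass
class Area:
    name: str
    sub_themes: list[SubTheme]

teaching = SubTheme("Teaching", [
    Objective("Quantity/Quality", "development",
              {"graduate_employment_rate": 0.78,
               "adjusted_student_success_rate": 0.64}),
    Objective("Accessibility", "responsibility",
              {"scholarship_coverage": 0.55}),
])
missions = Area("Missions", [teaching])

for sub in missions.sub_themes:
    for obj in sub.objectives:
        print(f"{missions.name} / {sub.name} / {obj.name} "
              f"[{obj.principle}]: {obj.score():.2f}")
```

Keeping the roll-up at the objective level, rather than collapsing everything into a single number, is what allows the multidimensional presentation of results discussed below.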

3.3 Perspective and Expected Impact

The Sustainable Campus Observatory is intended to be useful to all university stakeholders, be they students, faculty, external partners or public administrations. It also seeks to meet the four success criteria identified above. Relevance and holism are addressed through a global and systemic design, while indicators are selected on the basis of their realism and contextualized according to local conditions. Moreover, a particular focus has been put on feasibility by limiting the number of indicators and making data collection as easy as possible. Finally, a clear and simple presentation of results from a multidimensional perspective is intended to meet the condition of digestibility.

The question of whether it is beneficial to provide a ranking of universities based on a single composite indicator is still pending. A priori, despite the significant benefit they bring in terms of comprehensibility, such practices remain highly subjective. Nevertheless, the visibility of rankings confers on them the biggest impact, and they could be very valuable in promoting sustainable development at universities (Fadeeva and Mochizuki 2010). The main obstacle to overcome would then be ensuring the legitimacy of the method. This could be achieved through continuous improvement based on external feedback, or through a validation of the tool by a collaborative process gathering representatives from the entire university community.

4 Conclusion

The need to support our societies and universities in moving towards more sustainable development is now widely recognized. Sustainability is not limited to environmental responsibility but incorporates social and economic dimensions on an equal footing. In order to address the challenges of tomorrow, we need new methods of management and assessment based on multidimensional and transdisciplinary perspectives.

With respect to higher education, external assessment tools were divided into three broad categories: global rankings, specific research projects and sustainability assessment frameworks.


An evaluation of the three classes of tools was conducted on the basis of four success criteria: relevance, measurability, holism and digestibility. It followed that while each category of tools has its own advantages, none is able to meet all four conditions.

In response to these findings, we introduced a new comprehensive benchmarking tool for universities: the Sustainable Campus Observatory. It is designed to facilitate the identification of best practices by university stakeholders through the assessment and comparison of HEIs' performance toward sustainability.

Its methodology relies on the principles of holism and complementarity. This has two main implications: first, sustainability can only be implemented and assessed through the integration of all dimensions of the institution; and second, a university can only be sustainable for society if it is already sustainable for itself.

The main contribution of this approach is its ability to capture the interconnectedness of all the components of the sustainable university. Consequently, it enables the identification of synergies reconciling the interests of society with those of the institution.

Ultimately, this new scheme was designed to meet the four success criteria. It therefore lies at the intersection of the three categories of assessment tools identified, bringing together the relevance of academic projects, the comprehensiveness of sustainability assessment frameworks and the digestibility of global rankings.

References

Altbach PG (2010) The state of the rankings. Inside Higher Ed, 11 Nov 2010
Beringer A (2006) Campus sustainability audit research in Atlantic Canada: pioneering the campus sustainability assessment framework. Int J Sustain High Educ 7(4):437–455
Billaut J-C, Bouyssou D, Vincke P (2010) Should you believe in the Shanghai ranking? Scientometrics 84(1):237–263
Camp RC (1989) Benchmarking: the search for industry best practices that lead to superior performance. American Society for Quality Control (ASQC), Milwaukee, 299 p
Corcoran PB, Walker KE, Wals AEJ (2004) Case studies, make-your-case studies, and case stories: a critique of case-study methodology in sustainability in higher education. Environ Educ Res 10:7–21
Davis SA, Edmister JH, Sullivan K, West CK (2003) Educating sustainable societies for the twenty-first century. Int J Sustain High Educ 4(2):169–179
Enserink M (2007) Education. Who ranks the university rankers? Science 317(5841):1026–1028
Etzkowitz H, Webster A, Gebhardt C, Terra BRC (2000) The future of the university and the university of the future: evolution of ivory tower to entrepreneurial paradigm. Res Policy 29(2):313–330
Fadeeva Z, Mochizuki Y (2010) Higher education for today and tomorrow: university appraisal for diversity, innovation and change towards sustainable development. Sustain Sci 5:249–256
Florian RV (2007) Irreproducibility of the results of the Shanghai academic ranking of world universities. Scientometrics 72(1):25–32
Fonseca A, Macdonald A, Dandy E, Valenti P (2011) The state of sustainability reporting at Canadian universities. Int J Sustain High Educ 12(1):22–40
Glover A, Peters C, Haslett SK (2011) Education for sustainable development and global citizenship: an evaluation of the validity of the STAUNCH auditing tool. Int J Sustain High Educ 12(2):125–144
Hart A, Maddison E, Wolff D (2007) Community-university partnerships in practice. National Institute of Adult Continuing Education (NIACE), Leicester, 224 p
Hua Y (2013) Sustainable campus as a living laboratory for climate change mitigation and adaptation: the role of design thinking processes. In: König A (ed) Regenerative sustainable development of universities and cities. Edward Elgar Publishing Limited, Cheltenham
IPCC (2013) Climate change: the physical science basis. WG I contribution to the IPCC fifth assessment report, summary for policymakers, Twelfth Session of Working Group I, Stockholm, 27 September 2013, 36 p
Karatzoglou B (2012) An in-depth literature review of the evolving roles and contributions of universities to education for sustainable development. J Clean Prod 49:44–53
Kelly U, McLellan D, McNicoll I (2004) The economic impact of UK higher education institutions. Universities UK, University of Strathclyde, UK
Kempkes G, Pohl C (2006) The efficiency of German universities: some evidence from non-parametric and parametric methods. Appl Econ 42(16):2063–2079
Kuah CT, Wong KY (2011) Efficiency assessment of universities through data envelopment analysis. Procedia Comput Sci 3:499–506
Mian SA (1997) Assessing and managing the university technology business incubator: an integrative framework. J Bus Ventur 12(4):251–285
Moed HF, Burger WJM, Frankfort JG, Van Raan AF (1985) The use of bibliometric data for the measurement of university research performance. Res Policy 14(3):131–149
Monjon S, Waelbroeck P (2003) Assessing spillovers from universities to firms: evidence from French firm-level data. Int J Ind Organ 21(9):1255–1270
OECD (2010a) PISA 2009 at a glance. OECD Publishing, Paris, 99 p
OECD (2010b) AHELO assessment design. Group of national experts on the AHELO feasibility study, Directorate for Education, Institutional Management in Higher Education Governing Board, Paris
Rauch JN, Newman J (2009) Defining sustainability metric targets in an institutional setting. Int J Sustain High Educ 10:107–117
Rauhvargers A (2013) Global university rankings and their impact: report II. European University Association, Brussels, 86 p
Rieckmann M (2012) Future-oriented higher education: which key competencies should be fostered through university teaching and learning? Futures 44(2):127–135
Shriberg M (2002) Institutional assessment tools for sustainability in higher education: strengths, weaknesses and implications for practice and theory. Int J Sustain High Educ 3(3):254–270
Swearingen White S (2009) Early participation in the American college and university presidents' climate commitment. Int J Sustain High Educ 10:215–227
Uyarra E (2008) The impact of universities on regional innovation: a critique and policy implications. Manchester Business School Working Paper 564
Van Vught F, Ziegele F (2012) Multidimensional ranking: the design and development of U-Multirank, vol 37. Springer, Dordrecht, 194 p
Vaughter P, Wright T, McKenzie M, Lidstone L (2013) Greening the ivory tower: a review of educational research on sustainability in post-secondary education. Sustainability 5:2252–2271
Wiek A, Withycombe L, Redman CL (2011) Key competencies in sustainability: a reference framework for academic program development. Sustain Sci 6(2):203–218
Wissema JG (2009) Towards the third generation university: managing the university in transition. Edward Elgar Publishing, Cheltenham, 272 p
Wright TS (2002) Definitions and frameworks for environmental sustainability in higher education. High Educ Policy 15:105–120
Yarime M, Tanaka Y (2012) The issues and methodologies in sustainability assessment tools for higher education institutions: a review of recent trends and future challenges. J Educ Sustain Dev 6(1):63–77

Author Biography

Mathias Bouckaert has been a Ph.D. candidate at the Sustainable Campus Chair of the University of Versailles since October 2011. His research covers methods for applying the precepts of sustainable development on university campuses. More precisely, his thesis aims to develop specifications for the assessment of sustainable campuses in France: a typology of campus models that adapt to the changing uses of the future, serving academic and scientific excellence.
