Knowledge That Counts: Points Systems and the Governance of Danish
Universities
Susan Wright
Introduction
The term ‘governance’ as applied to universities has more than one meaning. It was once widely
used from the fourteenth to sixteenth centuries in England to mean the way an institution like a
university was run, how a landed estate or even a whole country was kept in good order, and how
an individual conducted business by maintaining ‘wise self-command’ (Oxford English
Dictionary 1989 VI:710). In almost all contexts – except universities – these meanings had fallen
into desuetude by the eighteenth century, only suddenly to burst back into use in the 1990s. Their
decline coincided with governing becoming the specialized role of a ‘government’ which,
through the machinery of a centralized bureaucracy, managed the population and economy of a
nation state. The resurgence of ‘governance’ in the 1990s heralded a change in the political
order, when
‘government’ … becomes less identified with ‘the’ government – national
government – and more wide ranging. ‘Governance’ becomes a more relevant
concept to refer to some forms of administrative or regulatory capacities.
(Giddens 1998: 32–33)
There were three main characteristics of this shift from government to governance in the 1990s.
First, instead of the bureaucratic management of a society, governments increasingly
accomplished the maintenance of order and the delivery of services through networks of
agencies and actors operating on global, national and local scales and including trans-national
agencies, international corporations, state and public institutions, arms-length agencies, and civil
society organizations (Rhodes 1997). Governments were to encourage enterprise and
competition by contracting out service delivery to such networks of partners (known in Canada
as alternate service delivery [ASD]) (Osborne and Gaebler 1992). Second, what had to be
governed were no longer clear organisational structures but this network of often obscure
linkages. Contracting organisations were free to manage their own production processes or enter
subcontracts with others. Government tried to maintain control through technocratic measures
such as setting performance targets and key performance indicators, conducting audits, checking
contract compliance, and basing payment on the number and quality of outputs (Dean 1999).
Often these technocratic measures acted, in Foucault’s terms, as ‘political technologies’ (Dreyfus
and Rabinow 1982: 196) in that the political and ideological aims of government were not made
explicit but were embedded in the detailed operations of these apparently politically neutral and
purely administrative systems. Third, this system of governing relied on individuals’ freely
exercising their own agency, but, often learning from the pedagogies embedded in political
technologies, they were to exercise their freedom in ways that achieved the government’s vision
of order and contributed to the international success of the competition state (Rose 1989,
Pedersen 2011).
This new meaning of governance echoed the old in that it spanned the three scales of the self-
management of individuals, the running of institutions, and the ordering of a country, now part of
a reconceptualised space of global competition. But between the old and the new meanings of
governance there was an important shift in who had the power to define ‘good governance’. It
was no longer up to people or institutions to maintain their own ‘wise self-command’ in a
bottom-up fashion. Now ‘good governance’ was defined ‘top-down’ and was achieved when the
government’s ideas of the proper order of the country were enacted in the management of
organizations and the conduct of individuals. The apotheosis of this art of government was to
find a single technical measure that would operate on all three scales at once and that would
simultaneously order the competitive state, the enterprising organization, and the
‘responsibilized’ individual according to the government’s ideological and political vision.
This chapter will focus on universities, one of the few institutions that kept alive the
original idea of governance when it otherwise fell into disuse.1 In that original sense, governance
refers to the array of ways that a university orders its own affairs by managing its relations with
the state, maintaining its own internal organization, and instilling certain values and expectations
of individual conduct. Now this meaning of governance is overlain by the resurgent meaning, in
which it is government that defines the contribution of universities to the competitive state, the
ways that the institution should be organized and managed, and the appropriate behaviour for
‘responsible’ academics and students to adopt. As will be discussed in this chapter, the Danish
government’s reforms of universities are a good example of the introduction of this top-down
form of governance. In particular, the Danish government’s system for allocating a scale of
points for different kinds of research publications was a political technology that aimed to bring
the ordering of the sector as a whole, individual institutions, and academic staff into alignment.
The government used the points system to establish competition for funding between
universities, which was considered a necessary pre-requisite for them to perform well on the
world stage; it made clear to newly appointed strategic leaders what priorities to set for their
organization; and every individual quickly learnt what is expected of them to maximize ‘what
counts.’ In short, the points system was an attempt, through a single mechanism, to set up an
institutional circuit that took governance from the world stage to the self-management of the
individual on the front line and back.
Systems of governance do not always work as designed. The chapter will start by setting out
the two strands of thinking that informed the university reforms in Denmark. One strand was the
reform of the public sector to create a competition state, and the other strand refocused the work
of universities on what the government deemed necessary for Denmark to succeed in a global
knowledge economy and maintain its position as one of the richest countries in the world. In
both strands of the reforms, performance indicators, such as the points system, became an
important mechanism of university governance. The second section summarizes the long process
of designing the points system for the government to use in funding algorithms for the sector,
and for university leaders to use as a tool of management. The third section is based on fieldwork
in a faculty which had long used such points systems. Academics had internalized the system’s
priorities, but had also internalized conflicts between their own motivation and the system’s
incentives, with resultant high levels of stress. The fourth section, based on fieldwork in another
faculty where the points system was a new phenomenon, explores the ways that academics used
different combinations of pragmatic accommodation and principled resistance to the system’s
imperatives, until finally it was withdrawn.2
Governance and the Global Knowledge Economy
A major reform of university governance in Denmark started with a University Law in 2003.
This law was in keeping with the wider reform of the public sector that the finance ministry had
been developing since the 1980s (Wright and Ørberg 2008). Called ‘Aim and Frame Steering’
(mål- og rammestyring), ministers were no longer to run the bureaucratic delivery of services.
Instead, they were to focus on formulating the political goals for their sector and the legal and
budget framework through which they were to be realised. The delivery of these services and the
achievement of the political goals were then contracted out to agencies. In a process Pollitt et al.
(2001) call ‘agentification,’ parts of the bureaucracy and other state-run organizations, like
universities, were turned into such agencies, with the legal status of a person and the power to
engage in contracts with the ministry. The ministry steered these agencies by writing clear
performance goals into the contracts along with numerical and quality measures for their
achievement. For example, the ministry’s contracts with universities contain long lists of the
numbers and percentage rise in outputs of graduates and PhDs, publications, externally funded
projects, and so on to be achieved within a defined period. The state auditor checks annually the
universities’ reports about the fulfilment of these contracted targets. Output and performance
measures have also become more important in the allocation of state funding, on which the
universities are predominantly reliant. Since 1994, payments for teaching had been based entirely
on the number of students who passed their exams each year. Following the 2003 law, the
ministry worked on defining and weighting the criteria for increasingly basing the rest of their
funding on outputs and for allocating this funding competitively between the universities. As will
be shown below, a points system based on the number of publications and proxies for their
‘quality’ became a key mechanism for shifting towards output and performance payments in the
government’s new way of steering the university as one of its public sector ‘service providers.’
While these changes to the steering of universities were clearly part of a reform of the whole
public sector, the minister for research also tied them closely into a strategy for Denmark’s future
economic success. Denmark had been an avid participant in the work of the Organisation for
Economic Co-operation and Development (OECD), which through the 1990s promoted the idea
that the future lay in a global economy operating on a new resource – ‘knowledge.’ This idea
was taken up by other transnational organizations like the European Union (EU), the World
Economic Forum (WEF), and the World Bank (WB). They argued that a future global
knowledge economy was both inevitable and fast approaching. Each country’s economic
survival, they maintained, lay in its ability to generate a highly skilled workforce capable of
developing new knowledge and transferring it quickly into innovative products and new ways of
organising production. The OECD in particular developed policy guidance for its members (the
thirty richest countries in the world) to make the reforms deemed necessary to survive this global
competition. It measured and ranked their performance and galvanized national ministers into an
emotionally charged competition for success and avoidance of the ignominy of failure.
Universities were thrust centre stage in this vision of the future. They were to ‘drive’ their
country’s efforts to succeed in the global knowledge economy. As well as aiming to attract the
‘brightest brains’ through the fast growing and lucrative international trade in students, many
governments set a target for 50 per cent of school leavers to gain higher education, and sought to
reform education so that students not only acquired high-level cognitive skills, but also the
‘transferable’ skills thought necessary for employment in a global knowledge economy. Policy
makers widely adopted the idea that university research should shift from Mode 1 (motivated by
disciplinary agendas) to Mode 2 (motivated by social need) (Gibbons et al. 1994). In a
bowdlerized version of this argument, the Danish government’s catchword for their university
reform was ‘From idea to invoice,’ arguing that academics should develop closer relations with
industry and focus on results that would lead to innovations. The OECD developed checklists
and tool kits, guidance and best practice to help governments reform universities. These included
changing the management of universities to make them capable of entering into partnerships
with industry and the state and of delivering the performance these partners expected.
The Danish University Law in 2003 brought the agendas for both the competition state and
the global knowledge economy to bear on university management. Whereas previously
academic, administrative and technical staff and students had elected the leaders and decision
making bodies at every level of the organization, all these were abolished, apart from elected
study boards, which continued to be responsible for the design, running, and quality of education
programmes. Now a governing board, with a majority of members appointed from outside the
university, appointed the rector, like a CEO of a company. He or she appointed deans, who
appointed heads of department. In what was called ‘unified management’ (enstrenget ledelse),
each leader was accountable to, and had an obligation of loyalty towards, the superior who had
appointed him or her, and was no longer, as in the previous structure, primarily accountable to
the people he or she led. Although a later amendment required the ‘unified management’ to
involve employees in decisions, the faculty and departmental boards and their rights and powers
which had involved members of the university in decision making had been abolished. For the
first time, the rector now spoke ‘on behalf of’ or even ‘as’ the university, as a coherent and
centrally managed organization (Ørberg 2007). This was a clear break with the idea of the
university as a community of academics, administrators, and students.
By changing the legal status, state steering, financing, and management of universities, the
minister claimed he was ‘setting universities free;’ he was both making them into agencies with
the power to enter contracts with the state, industry, and other organizations and he was giving
the new leaders ‘freedom to manage’ – it was up to them how they ran ‘their’ organization as
long as they delivered on contracts. With the rector as the head of a strongly line-managed and
coherent organization, empowered to decide on the strategic use of the university’s funding and
acting as an interlocutor with the ministry, politicians, and industry, the minister claimed that
government could restore its trust in universities. When, shortly afterwards, the minister initiated
mergers between universities and with government research institutes, he felt at least three
Danish universities were now capable of appearing within the top ten in Europe measured by one
of the world ranking tables (Kofoed and Larsen 2010). In his view, universities now had the kind
of organization needed to drive Denmark’s efforts to succeed in the global knowledge economy
and could be trusted with increased government funding to that end. A Globalization Council
was established by the prime minister and produced a strategy that argued that Denmark’s
continuing status as one of the world’s wealthiest countries largely depended on the performance
of its universities (Government of Denmark 2006). To achieve this, a ‘Globalization Pool’ during
the years 2010–12 substantially increased university budgets. In the government’s view, to
incentivize Danish universities to become ‘Global Top Level Universities’, this funding had to
be allocated competitively and on the basis of ‘quality indicators’ (Government of Denmark
2006: 22). Right from the start, academics were worried that the indicators would not just be
used to establish competition within the sector, but as tools for internal management, to allocate
funding between faculties and departments and to incentivize the behaviour and even hire and
fire individual staff (Emmeche 2009b). The ministry’s steering group stated explicitly that the
‘quality indicators’ were expected to have an effect on the behaviour of individual researchers,
motivating them to publish their research in the most prestigious ‘publication channels’ that can
be used to compare research quality internationally (FI 2007; FI 2009b). In the ministry’s task of
devising the output indicators and the formula for the competitive funding system, the agendas of
the public sector reforms and the preparation for the global knowledge economy came together.
By choosing indicators that counted in the world rankings, restructured the sector competitively,
and made clear to each individual what counts, it seemed they had found a mechanism which
brought these three elements of governance into alignment.
Devising a System for Competitive Allocation of Funding
The process of devising indicators that would mobilise the whole university sector, the internal
organisation of each institution and each individual academic and would improve Denmark’s
standing in the global university rankings is presented diagrammatically in Figure 1.
[Figure 1. Institutional circuitry: the Danish research points system from individual
performance to world rankings]
In autumn 2006, the ministry started to look for ‘quality’ indicators for teaching, knowledge
transfer (videnspredning) and research on which to allocate funding competitively between
universities. In negotiation with Danish Universities, it was decided that, for teaching, the
existing calculation of outputs – the number of students who passed their year’s exams – could
also be used as a measure of ‘quality’. This was doubted by some academics who had argued
repeatedly that a system which rewarded faster throughput of students with fewer dropouts and
fewer failures might improve ‘value for money’ but might also, perversely, incentivise the
lowering of standards. The government rejected this argument, claiming it could rely on
academics’ professionalism to maintain standards.3 Paradoxically, the government designed
indicators to change academics’ behaviour, yet also depended on academics resisting those
incentives. The ministry set up working groups to devise new quality indicators for outputs in
knowledge transfer (videnspredning) and research. The knowledge transfer working party
produced a report that was criticized for poorly defining activities, which ranged from industrial
innovation to enhancing public debate and democracy. Eventually, knowledge transfer was
dropped as an indicator.
The working party charged with devising an indicator for research quality began reviewing
available European models. They rejected the U.K.’s Research Assessment Exercise, based on
peer review panels, as too costly in staff time. The Leuven model combined a number of
indicators – PhD completions, external funding, and citation rates for publications. Research
commissioned by the humanities faculties of Danish universities showed that measures based on
commercially produced citation indexes were inappropriate for the humanities, as humanities
faculty published very little in the international journals covered by those firms (Faurbæk 2007).4
It was agreed that there should be one measure for all disciplines. Therefore, the working party
adapted the Norwegian model (Schneider 2009), which allocated differential points to journal
articles, chapters in edited volumes, and monographs depending on whether they were ‘top level’
or not and peer reviewed or not. In this model, ‘quality’ is not assessed directly but relies on the
journal’s or publisher’s peer-reviewing and ‘international’ status (defined as in an international
language and with under two-thirds of contributors from the same country). The Australian
system of auditing and ranking universities called Excellence Research for Australia (ERA)
entailed similar ranked lists of journals until the minister cancelled them at the last minute. He
said this was because university managers were using the lists in an ‘ill-informed and undesirable
way’ to set academics targets for publications in top ranked journals (Carr 2011). In contrast, the
Danish government’s aim was for managers and academics to treat measures as targets.
The Danish model required all academics to enter their publications into their university’s
database each year. These would be put together as a national database and points allocated to
each publication according to an authorized list of which journals and publishers were ‘level 1’
or ‘level 2’. Level 2 journals were defined as the leading international journals that published the
top 20 per cent of the ‘world production’ of articles in a field. To create this authorized list, in
late 2007 the ministry, with the agreement of Danish Universities, set up 68 disciplinary groups
involving 360 academics. They delivered their lists to the ministry in March 2009. The ministry
found that the same journal could appear on two lists at different levels – presumably because it
was central to one discipline but more peripheral to another. When the ministry published its
consolidated list on its web site, immediately 58 of the 68 chairs signed a petition saying it was
not an appropriate tool for distributing funding and asking the ministry to remove the list from its
web site (Forskerforum 2009a; Richter and Villesen 2009). One disciplinary group found 89 of
the journals they had put in the ‘lower level’ had been upgraded to ‘top level’ whilst 30 of their
most important journals had been downgraded (Richter and Villesen 2009). In another
disciplinary group, seven coffee table magazines suddenly appeared in the ‘top level.’ No Danish
journals or Danish publishers appeared as ‘top level’ at all, disadvantaging subjects such as
Danish language, literature, history, and law (Larsen, Mai, Ruus, Svendsen and Togeby 2009).
Overall, one per cent of all the journals academics had selected as important had disappeared
(Larsen et al. 2009). The press confronted the minister, who admitted, ‘It’s not as easy as one
may think to make a ranking list of 20,000 journals,’ and the list disappeared from the ministry’s
web site (Richter 2009; ForskerForum 2009b). The discipline groups were asked to re-work their
lists, but this time each journal was allocated to a specific discipline to avoid overlaps. They
delivered their lists again in September 2009, but 32 of the disciplinary group chairs signed a
statement that they could not vouch for this indicator and their advice was not to use it for
funding allocation (Emmeche 2009b: 2). The disciplinary groups had worked for two years and
still only listed journals; there were no lists of all the publishing houses for monographs and
edited volumes relevant to each discipline, let alone decisions about which of them were ‘level 1’
and ‘level 2.’ The ministry therefore published the ideal version of the points system alongside a
‘temporary’ one. By default, the temporary list seems to have become permanent. It notably
downgraded the points for monographs and edited volumes, which are the publication outlets
used predominantly by the humanities (see Table 1).
Table 1. Danish publications points system
Form of publication | Low level | Top level | ‘Temporary’
Scientific monograph | 5 points | 8 points | All: 6 points
Article in scientific journal | 1 point | 3 points | Unchanged
Article in edited volume with ISSN number | 1 point | 3 points | All: 0.75 points
Article in edited volume | 0.5 point | 2 points | All: 0.75 points
In addition, a PhD thesis initially earned two points, a ‘habilitation’ or professorial thesis five points, and a patent one point. Source: FI 2009b.
(Later, PhD theses were removed from the points system to avoid their counting twice, as ‘completed PhDs’ was already a category used for the distribution of the block grant; see Table 2.)
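Read as an allocation rule, the ‘temporary’ scheme in Table 1 can be sketched in a few lines of Python. The point values follow the table; the category names and the function itself are illustrative, not the ministry’s implementation:

```python
# Illustrative sketch of the 'temporary' Danish points scheme (Table 1).
# Point values are taken from the table; the function is hypothetical.
TEMPORARY_POINTS = {
    "monograph": 6.0,                # all monographs: 6 points
    "journal_article_level_1": 1.0,  # journal levels left unchanged
    "journal_article_level_2": 3.0,
    "edited_volume_article": 0.75,   # all edited-volume articles: 0.75
}

def publication_points(publications):
    """Sum the points for a list of publication-type strings."""
    return sum(TEMPORARY_POINTS[p] for p in publications)

# A humanities output profile weighted towards monographs and chapters...
humanities = ["monograph", "edited_volume_article", "edited_volume_article"]
# ...versus a science profile of 'level 2' journal articles.
science = ["journal_article_level_2", "journal_article_level_2",
           "journal_article_level_2"]

print(publication_points(humanities))  # 7.5
print(publication_points(science))     # 9.0
```

The sketch makes the downgrading visible: three ‘top level’ journal articles now outscore a monograph plus two chapters, which is the pattern the humanities faculties objected to.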
Now that the ministry had its lists and could calculate the research points for each university
each year, it had to decide what weight to give these points in the funding allocation model. An
allocation model had already been developed in the late 1990s, based on 50 per cent for teaching,
40 per cent for external funding, and 10 per cent for PhD completions, but this was only used to distribute marginal
amounts in an ad hoc fashion (Schneider & Aagaard 2012: 195). Now the ministry proposed that
research points should be given a 50 per cent weighting, teaching 30 per cent, and knowledge
transfer 20 per cent, but Danish Universities rejected this. In 2009 Danish Universities finally
suggested (echoing the Leuven model) that the indicators should be teaching (45 per cent), PhD
completions (ten per cent), and research (45 per cent). But they argued that research should be
divided into 35 per cent for funding from external sources (e.g., contracts with industry or grants
from the research council) and the research publication points should only be given a ten per cent
weighting, although this would increase gradually to 25 per cent. The government agreed to this
proposal (FI 2009a).
Table 2. Weighting of indicators in the formula for competitive allocation of basic grant
Year | Teaching | Externally funded research | Research publication points | Completed PhDs
2010 | 45 | 35 | 10 | 10
2011 | 45 | 30 | 15 | 10
2012 | 45 | 20 | 25 | 10
Source: FI 2009a
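On a natural reading of this model, each indicator’s share of the competitive pool is divided among the universities in proportion to their share of that indicator’s national total. A minimal sketch, using the 2012 weights from Table 2 and invented university figures (the allocation function is an assumption about how such a formula works, not the ministry’s algorithm):

```python
# Sketch of a weighted competitive allocation (weights from Table 2, 2012).
# The university indicator values are invented for illustration.
WEIGHTS_2012 = {"teaching": 0.45, "external_funding": 0.20,
                "points": 0.25, "phds": 0.10}

def allocate(pool, indicators, weights):
    """Split `pool` so that each indicator's slice is divided in proportion
    to each university's share of that indicator's national total."""
    totals = {k: sum(u[k] for u in indicators.values()) for k in weights}
    return {
        name: sum(weights[k] * pool * u[k] / totals[k] for k in weights)
        for name, u in indicators.items()
    }

universities = {
    "A": {"teaching": 600, "external_funding": 300, "points": 900, "phds": 120},
    "B": {"teaching": 400, "external_funding": 200, "points": 600, "phds": 80},
}
shares = allocate(720, universities, WEIGHTS_2012)  # 720 million kroner pool
print(round(shares["A"], 1))  # A holds 60% of every indicator -> 432.0
```

Because the formula is share-based, a university only gains if it grows an indicator faster than the sector as a whole, a property that matters for the treadmill dynamic discussed below.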
The final stage in setting up this system depended on gaining the agreement of enough
political parties to give the proposal a majority in parliament. Danish Universities finally backed
the minister’s ‘authorized list’ and competitive funding formula even though there was still
disquiet among members of the disciplinary groups. The spokesperson of the Radical Liberals,
who had been holding out against this process, took Danish Universities’ approval to mean that
‘the universities’ had approved, seriously misunderstanding that Danish Universities was the voice of the
rectors and that, under the 2003 University Law, the university no longer had mechanisms for
speaking collegially. She finally acceded on 5 November 2009, just in time for the system to be
implemented in the Finance Law from January 2010 (FI 2009a).
The new competitive funding formula would not be applied to the universities’ existing block
grants, but only to additional funding, called ‘the globalisation pool’. The political parties agreed
a document (Ministry of Science, Technology and Development 2009), which explained that
they would increase funding for research and development by 10,000 million kroner over three
years, so that public funding of research would meet the Bologna Process target of 1 per cent of
GNP.5 Of this extra funding, 67 per cent was allocated to special initiatives like upgrading
laboratories (1,000 million kroner each year), Danish participation in international innovation
partnerships (30–90 million kroner each year) or collaboration with the private sector (130–190
million kroner each year) and around 200 million kroner per year was used to increase the
teaching output payment per student passing exams in the humanities and social sciences. Thirty-
two per cent of the extra funding was allocated to research. But of this, about a third was
allocated to ‘strategic research’, and further earmarked for the government’s priority research
areas (e.g., bio-products and food research received 50–70 million kroner each year). A further
third was allocated to special programmes in ‘free research’ (Research Council competitive
grants which are responsive to researchers’ initiatives but for which demand far outstrips
supply). This meant, as shown in Table 3, that the globalisation pool increased the universities’
annual basic grant by very little – an increase of 7.8% from 2009 to 2010 and by much smaller
amounts in the following years. Initially only 3.9% of the basic grant was allocated between the
universities on the basis of the points system to which so much administrative and academic
effort had been devoted over the previous three years, although that had risen to 8.9% by 2012.
Even more importantly, an evaluation of the points system in 2012 revealed that the
redistribution effect of the points system, compared to the previous method of allocating the
basic grant, was only 1.6%. That is, it was responsible for about 11.5 million kroner out of 720
million kroner in 2012, the year when it was most significant (Sivertsen and Schneider 2012:
23).
Table 3. Universities’ Block Grant (Basisbevilling) 2006–12 (in billion kroner)
 | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012
Total block grant for research | 6.2 | 6.5 | 6.9 | 7.5 | 7.7 | 8.0 | 8.1
Increase on previous year | – | – | – | – | 7.8% | 3.7% | 1.25%
Of which, competitive allocation based on bibliometric points | – | – | – | – | 0.300 (3.9%) | 0.570 (7.1%) | 0.720 (8.9%)
Sources: For 2006–9, 2012 budget law; for 2010–12, Sivertsen and Schneider 2012: 23, Table 2.5.
It clearly takes a very small financial incentive to establish a competitive ethos between
universities. For some universities, which historically received comparatively little basic funding
from the government, this new source of funding from research publications could be an
important additional income. But as other universities followed suit, and all increased their
research output, they would find themselves competing over a finite pool in a zero sum game. As
each university increased their research points, the value of each point would decline, yet they
would have to keep up the pace of the treadmill, ever increasing their research output and their
points score, so as to maintain their position relative to the other universities, and their share of
the competitive funding.
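The treadmill logic reduces to simple arithmetic: with a fixed pool, the kroner value of one point is the pool divided by the sector’s total points, so uniform increases in output cancel out. A minimal sketch (the pool figure echoes Table 3’s 2012 competitive allocation; the point totals are invented):

```python
# Sketch of the zero-sum dynamic: a fixed pool divided by total sector points
# means the kroner value of one point falls as everyone publishes more.
def point_value(pool, total_points):
    return pool / total_points

pool = 720_000_000  # fixed competitive pool in kroner (Table 3, 2012 figure)
print(point_value(pool, 60_000))   # 12000.0 kroner per point
# If every university doubles its output, each point is worth half as much,
# and each university's share of the pool is unchanged.
print(point_value(pool, 120_000))  # 6000.0 kroner per point
```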
True to the new system of ‘Aim and Frame’ steering, the minister used his contracts with
university leaders to commit them to use output and ‘quality’ indicators to create a competitive
ethos throughout their organization. In its contract with the Minister for the period 2006–8, the
university on which this chapter is focused committed itself to developing internal systems for
allocating research funding according to ‘international quality criteria’ in 2007 and to distribute
up to ten per cent of its budget between faculties on the basis of these criteria in 2008. The
rector’s contracts with faculty deans further outsourced this commitment to allocate funding
competitively between departments. For example, the humanities faculty contract obliged the
dean to allocate ten per cent of funding between departments on ‘quality’ criteria in 2008 and the
faculty’s research committee learnt that if they did not develop a method for allocating funding
based on research quality by spring 2007, the rector would withhold 6.3 million kroner from the
faculty’s budget (Humanities Faculty Secretariat 2006).
The research points system exemplifies the art of devising a mechanism which not only
renders a whole complex of activities down to one measure, but also aims to work across several
scales at once. First, the research points system aims to organize the whole sector into a
competition for ‘world class’ status by emphasizing publication in the ‘top’ journals. But the
Danish ‘level 2’ journals are not coincident with those counted in world rankings. The Times
Higher Education ‘World University Rankings’ only uses the 12,000 academic journals indexed
by Thomson Reuters' Web of Science database and only gives a total weighting of 36% to
publication performance anyway.6 The Shanghai Jiao Tong ‘Academic Ranking of World
Universities’ gives a 90% weighting to publications but only counts those in science subjects as
the citation indexes for the humanities and social sciences are so inaccurate (see Figure 1).
Second, the points system induces managers to change their university’s internal organization
and funding allocations to incentivise faculties and departments to prioritize this kind of
research. The points system is emerging, above all, as an instrument for management control.
Third, it makes every individual aware of ‘what counts’ and how to adjust their behaviour if they
are not only to promote their own career, but also do their best for their department’s economy
(and hence their own conditions of work). In the process, the points system imposes an external
definition of value, undermining (or trying to undermine) academics’ ability to exercise their
own judgement about how and on what range of activities to devote their time, energy, and
commitment – which arguably is the basis of their training and the source of their
professionalism.
While such systems are often coercive (Shore and Wright 2000), they are not deterministic.
What was the response within universities? I will explore in turn how leaders and academics
reacted in the faculties of life sciences and humanities. These faculties were located very
differently in the knowledge economy and in the quest for ‘world class’ status. The research
points system, as part of the new Aim and Frame form of governance, outsourced responsibility
for achieving the government’s political aims for the sector to managers and academics and tried
to direct their agency within tight boundaries. The next sections will explore how academics
contested and resisted, supported and endorsed, or ‘misrecognized’ the meaning of mechanisms
like the research points system and the ways this system worked together with other elements of
Aim and Frame steering (Wright 2005; Wright and Ørberg 2009). In other words, through their
own strategies and search for ‘room for manoeuvre,’ each of our interviewees is seen as being
actively involved – not all with equal access to institutional power – in the continual process of
influencing their situation and the system of which they were a part.
Life Sciences: Performance and Expansion
The life sciences faculty covered subjects which the Danish government deemed central to the
global knowledge economy and to its efforts to reposition university research in a globally
competitive, commercialized field. Part of what Shorett, Rabinow and Billings (2003: 123) call
‘a new ecology’ of science, public interest, and market, the life sciences have been restructured
to form international complexes of venture capital, biotechnology firms, commercial research
labs, leading university departments, and, to a lesser extent, social action and user groups (Latour
1998).7 The life sciences had long been one of the government’s priority areas and had benefitted
from special funding for at least a decade. In 1999 and 2009, the Strategic Research Council
reserved substantial funds for research in this field and, as mentioned above, the Globalization
Pool contained a special allocation for food sciences (Ministry of Science, Technology and
Development 2009).
The government’s points system accorded well with the life sciences’ existing pattern of
publication in high prestige journals, and indeed the faculty had operated a similar research
points system for many years. In the late 1990s, the faculty had introduced a system of
performance indicators to steer its biennial budget. These indicators included the number of peer-
reviewed publications, how much they were cited, and the impact factor of the journal they were
published in, measured over a six-year period. They also included the amount of external
funding, and for the faculty as a whole this had risen from 200 million kroner in 1999 to 386
million kroner in 2007. These established systems for assembling ‘performance’ data were very
close to the government’s points system and its formula for the competitive allocation of funding
between universities. In the life sciences faculty, the government’s points system slid into place
in the steering arrangement with barely a murmur.
The life sciences faculty had welcomed the 2003 University Law, and especially the
appointment of strategic leaders. The life sciences department we studied had a strategy of
continual expansion and, as we shall see, did not have the same wariness about political hostility
or foreboding about cuts that marked the humanities faculty. It was at the forefront of
developments which mirrored the government’s image of the future university. But this
department was also marked by very high levels of stress. Among many accounts of stress that
we heard in interviews, members of the department were especially shocked by three cases
(concerning both academic staff [VIPs] and technical and administrative personnel [TAPs]) in
recent years. One had collapsed lifeless in the corridor at work, an episode resembling descriptions of
karoshi in Japan. Two had experienced the same kind of collapse at home and described how
they suddenly could not function at all – they could not read, mark exam papers, write reports, or
do anything at all. They had been on sick leave for several months and their experiences
prompted colleagues to reflect over the consequences of high levels of pressure at work. The
department had participated in a trial analysis of their work environment (APV) to identify the
causes of stress, had developed an action plan, and made stress a subject of open discussion. But
still cases were occurring and causing alarm.
In this department, with its mature experience of the kind of ‘performance culture’ that the
government was trying to instil in universities, there were three main themes that ran through
interviewees’ accounts of their attempts to fathom why the steering arrangement was causing so
much stress, and what to do about it. The first theme was continual expansion. Even though the
department had always scored best on the performance indicators and gained a good share of the
faculty’s basic grant (basisbevilling), this source of funding only amounted to about 30 per cent
of the departmental income. Nor (in sharp contrast to the humanities department) was income
from teaching very substantial or important. Most of the department’s income came from
external funding. Their leader described how he was continually rung up and asked to join
research collaborations with Danish and international industry, and large EU projects. The
department had recently successfully applied for a multi-million kroner grant from a Danish
foundation. In this faculty, departmental leaders retained the overheads on these externally
funded projects and received their basic funding as a block grant. Departments were empowered
to decide how to use their combined income to cover salaries, running costs, and their own,
locally determined, developments (e.g., new posts or new courses). In addition to employing the
TAPs and PhD students necessary to carry out the research, this department used its external
project funding to establish further professorships and lectureships, on the grounds that the
people appointed would soon bring in more than enough external funding to cover their own
salaries. There was a ‘gung-ho’ attitude to continual expansion, putting the space, the research
equipment, and the 50 TAPs crucial to the conduct of experiments and trials under considerable
strain. Laboratory staff had computer charts with elaborate plans to fit every test for every
project into ten minute slots in the laboratories over the next six months. Two TAPs had broken
down from stress when VIPs were cross that their projects did not have enough time or space.
The TAPs, and their facilities, were running at full capacity all the time, and they described these
signs of lack of appreciation or respect as ‘the final straw’ that caused a breakdown.
Both academic staff (VIPs) and administrative staff (TAPs) felt insecure in their
employment.8 The department’s basic funding (basisbevilling) was used to cover the whole
salary of the department leaders and main administrators, and the leaders of the five research
groups, into which all the 90 or so VIPs were organized. A small number of assistant and
associate professors and some TAPs were also funded from the basic funding. But the salaries of
most VIPs and TAPs were either entirely covered by externally funded projects or made up of a
mix of basic and external funding. Even those who were on permanent contracts explained that
they felt very insecure because they did not know if they were on what they called ‘finance
ministry’ money (basisbevilling) or external project funding, or ‘if they [the leaders] will be
successful with the next application’ and even so, ‘if there will be work in it for me.’ The
department’s strategy of continuous expansion made VIPs and TAPs dependent on the leaders,
with little power to control their own futures. Equally, it put leading VIPs under pressure to keep
up a continual flow of externally funded projects if their research group and TAPs were to stay in
employment. Each time a major new grant produced a step rise in their income and resulted in the
appointment of yet further staff, it created pressure to keep future income at this expanded
level.
This unending expansion was not a tenable strategy, especially as the government’s reforms
reduced the leaders’ room for managing the department’s economy. In the previous system, the
faculty’s well-established budget planning gave departments a two-year horizon so that they
could determine their own priorities and developments. The government’s new steering system
was based on annual budgets, which were announced just before the start of the financial year in
question. Not only had the government instituted a system of short-term funding decisions, but
much of each university’s ‘block grant’ was ring fenced for particular government-determined
expenditures. Through these mechanisms, government micro-managed the universities and left
them reacting to sudden government-induced threats to their liquidity and solvency (Wright and
Ørberg 2009). When the government used the steering arrangement in this way in 2009,9 it
exposed the vulnerability of strategies of continual expansion. There were firings of academic
staff in the Life Science faculty, among others, and intensified pressures and insecurities.
The second theme was ‘performance.’ Departmental meetings we observed focused, among
other things, on the number of publications and the amount of money ‘brought home’ since the
last meeting. At these meetings, several people made amusing asides that they could not
remember what the article was about but they could remember the name of the prestigious
journal where it was published. We did not hear discussions of what the publications contributed
to the research field. Performance was not only what counted, but in this department, it was what
mattered. Everyone we spoke to knew that creating outputs within a specified time was crucial if
they hoped to build a career in the department.
There had been such pressure to perform that one of the actions taken to reduce stress by the
local employer–union consultative committee (LSU) was the production of a paper, setting out
the average annual performance outputs expected of each permanently employed VIP. Under
‘research’ this lists: make funding applications to maintain your own research; be the first author
on one publication; contribute to other publications as co-author; give one national and one
international research paper; and hold one event disseminating science to the public. This had
apparently done much to reduce the pressure to perform to more realistic levels. But still there
was a frenetic pace of ‘performance.’ We observed a PhD student’s public presentation of her
work in progress, for example, where she was advised to write an article on her next trans-
Atlantic plane journey so as to up her rate of output and meet her completion deadline. Another
PhD student expressed frustration that all that counted were these narrow measures of
performance, whereas students developed themselves across many dimensions in the course of a
PhD. We heard several accounts of PhD students unable or unwilling to perform like this, who
had either left because of stress or, as one told us, had taken a job in the private sector because it
was a more caring environment.
Performance was also unremitting, and both VIPs and TAPs revealed in their accounts of their
working life that they had been constantly working up to the hilt over many years. Sometimes
cases of stress were attributed to a family crisis, but it seemed that these people were already
working flat out and they had no spare capacity when a family illness or problem demanded their
attention. When university mergers took place in 2007, and the formerly well-functioning
administrative systems were replaced by ones that did not work, the university had not set aside
funds to help with the transition. The departmental administrator who was already over-working
found herself working late at night and, unable to sleep, coming in again in the early hours of the
morning to try and make the finance, personnel, and other systems work so as to keep abreast of
the department’s administration. This departmental administrator, under the stress action plan
(mentioned above), was the contact point for anyone exhibiting stress symptoms, or for anyone
who saw a colleague with such symptoms. She was empowered to intervene immediately and,
following discussion with the person concerned, change his or her work commitments so as to
relieve the pressure and give a chance of recovery. Yet in trying to carry the additional
administrative burdens arising from the merger, she herself collapsed from stress and was on sick
leave for six months.
Several VIPs tried to take collective initiatives to solve problems with the work environment.
One research group leader with two senior colleagues had made a concerted effort to create an
anxiety-free environment in their research group. That research group was located at distance
from the rest of the department. They all met in their ‘homely’ kitchen at lunchtimes and the
relations between lecturers and PhD students were fairly ‘equal,’ judged by the way they all
initiated topics of conversation – although TAPs were more silent. They had a group meeting
every week, and every third week the academics met in the kitchen for a work-in-progress
seminar, clearly held in an established atmosphere of constructive comment. They had created
opportunities to try out preliminary ideas, discuss drafts, and get supportive, critical responses,
with, as they said, no finger pointing. The research group leader acted as a buffer between the
group and the wider institutional environment, but the group was integrated financially and in
terms of decision making into the department and could not insulate itself in the same way as, it
will be seen, the research centre in the humanities could.
The third theme which occurred frequently in interviews was time. There were two
conceptions of time. ‘Performance time’ was that spent on funded projects and creating outputs
that count. Time spent working with colleagues or students to discuss an idea or develop a skill
that did not count directly towards an output was ‘invisible time’, or, according to the dominant
logic, ‘wasted time.’ For example, we saw a senior PhD student work with a new PhD student to
refine the design of the latter’s questionnaire. Similarly, a lecturer explained how hard she had
worked to develop a student’s writing skills and a little later in the interview said that a major
source of stress was not being able to work out where her time had gone. Some people put a
considerable amount of ‘invisible time’ and effort into developing colleagues’ skills and abilities
that they needed for performance, or facilitating groups and making sure projects worked well.
Others did not. One interviewee told us that when one of his PhD students was about to go down
with stress, a new supervisor was found who worked with the student to solve the problems with
the thesis and submit on time. His focus was so strongly on ‘performance time’ that this second
kind of activity (that some people take on voluntarily or that is offloaded onto them by others) was
not even associated with time – it was ‘invisible time.’
Several interviewees made the point very strongly that individuals are responsible for their
own time. The department leader explained:
If you are a lecturer, you have total research freedom … You have to do some
teaching and you are supposed to do some research. So you will apply for some
grants … for different projects. And of course if you are successful, and have
grants for three different projects, you will be quite busy. And you maybe need to
spend all your time to execute these different projects. But, you know, that’s your
own problem. You have been too successful. And you didn’t include in the budget
funding for a PhD student and a Post Doc that you could hire to do most of the
work for you.
Other VIPs held strongly to the idea that they were responsible for their own time, but they spoke
from the position of the person in the above account who had brought the problem on himself or
herself by taking on too much. This idea, that academics had the right to control their own time,
used to be a central component of the concept of academic freedom in a previous steering
system. VIPs seemed to attribute this ‘old’ meaning to a concept which, in this department, had
shifted to responsibility for their own time. In the context of the way the steering arrangement
worked in this department, being responsible for managing the pressure for ever-increasing
performance had become a form of self-exploitation and source of stress. This department lacked
a system for allocating hours (such as that found in the humanities department), which VIPs could
use to protect themselves and negotiate with leaders; that is, a system that would record not just
time allocated to funded projects but also a tariff for teaching, for departmental services, and for
the currently invisible work of facilitating research processes and staff development – and maybe
even for free research.
As Bovbjerg (2011) has argued cogently, the modern worker is meant to exhibit the ability
and will to expand their capacities endlessly and to take on ever more challenges at the same
time as they are made responsible for deciding when and how to say ‘no’ to the pressure from
their leaders for ever-increasing performance. Interviewees reported that the department leader
and department administrator responded immediately and positively if they ever said that they
had too many tasks; but they also conveyed the enormous strength and courage needed to say
‘no’ in this can-do environment of continuously expanding performance. This was especially the
case where VIPs had not just too many projects demanding performance time, but also engaged
in staff development and group facilitation and other tasks consuming invisible time. They met
all their performance commitments and wrote up current projects, published in international
journals and earned their research points, but what they especially missed out on was time to
write pieces reflecting on how a number of their projects over years had contributed to the field,
or papers that identified gaps and shaped the future of the field, or text books to influence a
future generation. I asked one interviewee whether she had ‘free research’ time, not tied to work
on projects, in which to do such writing, but she did not recognize what I meant.
One of the department’s initiatives to handle stress was to encourage employees to exercise
tight control over their working hours and keep evenings and weekends work-free. Two VIPs
described how they used such strict control of time to manage the pressure to perform, and were
very efficient and productive within working hours as a result. But both also said they had ‘learnt
to limit their ambition.’ The members of this department knew that what counted were externally
funded projects and articles in ‘level 2’ international journals. As modern kinds of self-managing
workers, they were highly effective in producing these outputs. Yet, when the work for which
people ‘burned’ was continually displaced by more pressing project work and outputs,
sometimes for months or years, the displacement of passion by performance was clearly causing
stress.
Humanities: Uproar over Points Systems
Whereas the creative industries are seen as key to the knowledge economy in many countries, in
Denmark, with its focus on life sciences, pharmaceuticals, information technology, and
engineering, the government deemed the humanities largely irrelevant. Humanities faculties felt
threatened by a history of government hostility, even though successive pieces of research
demonstrated their students’ employability and success in using their education in the knowledge
economy (FI 2004; Arbejdsgruppen vedr. oplevelsesøkonomi 2005; Hesseldahl, Nørregård-
Nielsen, Lauridsen, Skov, Kyndrup, Holm and Øhrgaard 2005; Ministry of Science, Technology
and Development 2005; Copenhagen University, Aalborg University, Aarhus Business School,
Copenhagen Business School, Roskilde University Center and University of Southern Denmark
2008). The points system was important for the humanities faculty as, although they had many
links with industry and civil society, these did not yield substantial external funding and their
income relied heavily on the government’s output payments for teaching and ‘basic’ funding for
research.10 ‘Aim and Frame’ steering through output indicators and competitive funding was
completely new for the humanities faculty. Although the newly appointed faculty leadership
caught the government’s impetus towards the future knowledge economy and decided that it and
the points system were inevitable, many academics saw it as another threat to their academic
work and values, and some sought to resist it on principle.
When the government first announced that it would develop a research points system, the
faculty leadership took a pragmatic approach. They joined with all the other university
humanities faculties in Denmark to document current publication patterns (Faurbæk 2007) and to
lobby for monographs and edited volumes to be given points comparable to journal articles (see
above). In a public presentation, the prodean explained how she had spent over a year lobbying the
ministry to include monographs in the points system, as otherwise the results would have been
‘very worrisome’ for the humanities. The faculty leaders’ contract with the rector committed them to
develop a competitive system for allocating funding internally, and they took the opportunity to
develop a system suited to the humanities and used it to influence the university leadership’s and
the ministry’s criteria and funding formula (Humanities Faculty Secretariat 2007a). The
humanities faculty leaders set up two committees to develop points systems for the quality and
output of research and knowledge spreading, mirroring those initially established by the ministry.
Even when the ministry’s committee for knowledge spreading collapsed in disarray, the faculty
continued with theirs on the grounds that knowledge dissemination is such an important task for
the humanities. These faculty committees produced two points systems that inter-locked to form
a continuous scale. At the top end, 90 points were given for a habilitation thesis, and the scale
descended through peer-reviewed monographs, journal articles, non-peer-reviewed anthologies,
school text books, dictionaries, translations, computer games, theatre productions, the holding of
conferences, museum exhibitions, newspaper feature articles, consultancies, courses for firms,
public lectures and debates, to interviews with journalists, which earned 1/3 point at the other
end. Listing 59 items in 17 categories, this points scale carefully tried to capture and calibrate the
range of academic activity in the humanities. The faculty leaders worked hard for two years to
try and shape the government’s, the university’s, and the faculty’s steering arrangements in ways
that protected, or at least did not further damage, the political standing and funding of the
humanities.
When the faculty leaders unveiled their points system for research quality and circulated it
for consultation within the faculty, the uproar that ensued revealed the appointed leaders’ lack of
communication channels with academics. They were accountable to the leaders above them, and
as one said, were motivated by the desire to ‘perform.’ According to committee minutes, the
prodean relied on the appointed department leaders communicating with their ‘hinterlands’ about
the points system (Humanities Faculty Secretariat 2006). But department leaders were appointed
to manage academics, not represent them, and, with the abolition of department boards, had few
means to do so. Many academics had heard nothing about their leaders’ lobbying work in the
university and with the ministry. Many were surprised, according to the minutes of an open
faculty meeting, when the prodean explained that the points system came from a political
demand: that the government’s Globalization Council had determined to allocate extra funding to
universities competitively, on the basis of research quality; how this had been translated into the
ministry’s contract with the university, committing the latter to allocate ten per cent of its basic
grant internally on the basis of a quality measurement system; and how the rector’s contract with
the dean stated that, if the faculty did not develop a system for measuring research quality, the
rector would withhold 6.3 million kroner from the faculty’s budget (Humanities Faculty
Secretariat 2006, 2007c).
This explanation assuaged some, but the uproar still did not die down. There was a
widespread feeling among academics that the faculty leadership should have resisted this form of
steering. They rejected the leadership’s view that the points system was inevitable and argued
that they should have stood out against the government’s demand on principle – as one of them
said, ‘The ministry is not God. And even God was negotiated with by Abraham’ (author’s
translation). In January 2007, 128 people from the humanities faculty signed a petition against
the introduction of the points system on the grounds that it reduced a complex and diverse
research area to arbitrary measures of the number of outputs, not their quality, and it rewarded
speculation in publication strategies rather than more, let alone better, research: ‘It doesn’t
measure anything, but just generates numbers that look like measurements’ (Petition 2007,
author’s translation). The question of where to send the petition brought into sharp relief how the
new, appointed leaders owed commitment, loyalty, and accountability to those above them, not
those below them (Forskningsfrihed? 2007). In the end, the petition was addressed to the
university’s governing board, which under the 2003 University Law was the university’s highest
authority and therefore presumably was the body legally responsible for ‘safeguarding the
university’s research freedom and ethics’ (Folketinget 2003: Clause 2, part 2, author’s
translation).
The introduction of the research points system was one of many changes happening at that
time. A month after the uproar over the research points system, the faculty’s trade union
representatives collectively sent an open letter to the Dean, and there followed other articles and
open letters in what the press called ‘Humanities’ cry for help’. Too many reforms
were happening at once. In addition to the new leadership system and the research quality points
system, the leadership was trying to make the faculty into what they called a modern knowledge
organization with a new committee culture that could respond quickly to changes in the external
environment. For example, study boards (studienævn), the elected bodies which involved staff
and students in the design, running, and quality of each education programme, were consolidated
into one per department, reducing academics’ direct involvement in running their own courses in
the name of saving their time on ‘administration’. In addition, education programmes had to be
quickly reorganized from 44 to 18, there was a new admissions system and a new national
marking scale, and the use of space in the campus was being reorganized. There was a freeze in
appointments until economic management was devolved to departments. Many of these reforms
were criticized for being too hasty, with guidelines that were imprecise and deadlines for
comments that were too short. Sometimes the demands were suddenly withdrawn as
‘miscommunication,’ after people had put in considerable effort (Ammitzbøll 2007;
Baggersgaard 2007; Richter 2007). The trade union representatives’ letter said that many of the
changes felt ‘like a diktat from above.’ There was an ‘unending stream’ of meetings, plans,
schemes, reforms, and requests for comments, which meant less time for teaching and research; a
freeze on filling posts had increased teaching workloads and people were looking for positions
elsewhere. The pressure was breaking academics’ loyalty to the university, and the union
representatives’ letter to the Dean concluded:
All in all, we consider the situation grave. The mood is depressed, insecure and stressed. It is our
responsibility to inform the leaders and request local and central leaders to make a serious effort,
by involving the employer–union committee, to rectify things and re-create a good and
constructive work environment (author’s translation).

At about the same time, the faculty
leadership launched a programme called ‘Humanities in the 21st Century’ aimed at creating a
dialogue and collective consciousness within the faculty that could be projected publicly to make
the humanities more visible to relevant interest groups and to demonstrate that the humanities
were good value for taxpayers’ money. A trade union representative reacted to the dean’s
presentation by saying that it was the university’s top-down steering that created work conditions
which militated against a collective identity in the humanities. He said workers’ influence has
gone, everything is detail-steered, they are bossed around, and many have lost their pleasure in
being at work – something that previously characterized the place (quoted in Villesen 2008b,
author’s translation). In what was described as a ‘shouting match’ (Villesen 2008a), the dean
rejected this, reportedly saying that the employees were whining whereas they should see
themselves as privileged: their slack work hours meant it was unknown whether they were
drinking beer at Nyhavn (a popular harbour front sun-spot crowded with bars) or having good
ideas. The dean later apologized in the press for her unfortunate comment and restored an orderly
environment, but the initiative that aimed to create dialogue had instead revealed the gulf
between the leaders and those they now managed.
Academics increasingly used the press to communicate with their leaders. One remarked on
the ‘Nyhavn incident:’
It was as if [the dean] wasn’t the employees’ representative. She appeared like a
politically appointed person that had to tighten humanities up. (quoted in Villesen
2008b, author’s translation)
Another asked for more involvement and trust:
Humanities education is in the middle of an important change process ... But at
the moment, administrative changes are rushed through from above and so
quickly that our own ideas about how the disciplines should look don’t really play
a role. We would like to be co-actors in the change process. (Baggersgaard 2007,
author’s translation)
Others reflected on their alienation: ‘The ownership that academics felt for the department has
now slipped out of our hands’ (Villesen 2008a, author’s translation), and:
When we had self-steering there was a feeling that it’s our department for good or
ill – it’s something we should try to make work well. Now that the ministry and
parliament have made these changes, it’s harder to take such a big responsibility.
Now we’re more like wage workers. (quoted in Villesen 2008a, author’s
translation)
When the dean sent the faculty’s ‘knowledge spreading’ points system out for consultation, it
generated uproar, but this time it was conducted in the national media and on blog sites. One of
the dean’s critics pointed out in a newspaper article that this created the hilarious situation of his
earning a point every time he criticized his dean in the press. The points system, he argued, made
it much easier to write a great number of blogs, of transitory significance, at one point each, than
to write a school text book on literary history for use for years to come, at 36 points. This, he
said, was an invitation to inappropriate behaviour (Villesen 2008b). A newspaper editorial
picked this up, rang four leading scholars from one department in the humanities faculty, and
quoted nonsense from them, like ‘Tjalala bum’ or ‘Beware of the bogeyman.’ Each thereby
earned a point towards their department’s income next year. The editorial concluded that
journalists had a responsibility to spread their points around, and should not just ring one
department in future (Villesen 2008a).
All concerned in these public debates saw the points system as a mechanism aimed at changing
academics' behaviour. There were three main points of view. The first was principled opposition
to a system that aimed to make people respond to an instrumental rationality and change their
professional values and conduct. A debate book explored the logic of the system: chase points
not knowledge; be a good citizen and earn income for your department by producing a large
quantity of lower quality publications out of your existing research; and do not start a new
research area as it takes too long to begin publishing – a path-breaking researcher is a loss-maker
(Auken 2010: 55–56, author's translation). The second viewpoint, espoused by faculty leaders, was
principled support for the intended effects of the points system on academics’ behaviour. The
dean referred to the model’s ‘effect on upbringing (opdragelse)’, a word usually applied to
children; and members of the faculty’s research committee talked of the system ‘regulating
behaviour’ (adfærdsregulerende) (Humanities Faculty Secretariat 2007a: 3, 2007b: 5, 2007c: 4).
To the prodean, the incentives were in keeping with academic professionalism and enhanced
quality: competition to get published in ‘top’ journals would improve quality and ‘promote a
publication behaviour that will strengthen humanities in the long run’ (author’s translation). The
third viewpoint recognized a distance between academic professional values and political
pragmatism, a distance that verged on the cynical. For example, the rector, in his response to the
petition, referred to (and implicitly endorsed) the university newspaper’s report of an open
faculty meeting, in which a professor, who is a leading figure in protecting academic freedom,
argued:
It [the points system] has nothing to do with measuring research quality. It’s
because the humanities has need for a system to convince the world, the rector,
and the minister that they get something for their money. So stop calling it a
quality measure; it’s not, it’s a cover [i.e., a screen under which humanities can
hide and survive] … It’s just a way to hold the ferocious wolves at bay (author’s
translation).
The dean responded to the public debate over the ‘knowledge-spreading’ points system in a
similar tone to the rector, almost chiding the protesters for being naïve: she did not believe that
highly trained and intelligent academics would respond to the system’s incentives by writing a
lot of newspaper articles instead of writing a book (Information 2008; Villesen 2008c). There is a
contradiction in this third view. On the one hand, leaders expected academics’ professional
standards, forged in a previous era of governance, to persist. On the other hand, the points
system, as part of a new system of steering and governance, was intended to re-shape the sector,
the institution, and the individual, and get them to behave according to its incentives.11 Overall,
this third view says: the new system of governance is intended to re-shape academics' conduct,
but academics should adopt it only as a cynical protective cover, resisting it underneath by
sustaining their older academic values.
Humanities Academics’ Responses: Pragmatism, New Opportunities, and Principled
Opposition
At faculty meetings, in the national press, in blogs and in our interviews, academics discussed
how the points system, in the context of the new leadership and steering systems, acted as a
mechanism aiming to change their mode of thinking about themselves and their work. People’s
responses were influenced, first, by the extent to which they saw the points system as something
acting on them, as against something they could use to their advantage, and, second, by the extent
to which they were prepared or able to adopt the leaders' distance and cynicism in the third
viewpoint above, or took the first viewpoint of principled opposition and felt there should be
professional integrity in their academic persona.
A woman adjunct, who was seeking a lectureship, took a thoroughly pragmatic approach by
avowedly adopting the points system and the government funding model as guides to her
behaviour and use of time. She welcomed them as the long-overdue establishment of a
transparent way of evaluating people on a level playing field on which women would be able to
compete, at last, on more equal terms with men for appointments and promotions. The system
now told her exactly what she should devote her time and energy to: publishing articles in 'top'
journals to earn maximum points and getting a large EU project to raise the department’s
external funding. Nothing else counted, and she would not put time or energy into anything else.
She regretted the changes to the study boards and the loss of involvement and ownership over the
course on which she taught. She thought the ending of meetings in which the teaching group got
together meant she lacked knowledge of how her input fitted into the whole course, but if this
was what the leaders wanted, she would reduce her 'departmental citizenship' and just do exactly
what was required. She showed us the department’s system of allocating hours to different tasks,
and this would be her guide. She would let a pragmatic approach pervade her academic activities
so that the department leader would have no excuse not to advertise a lectureship for which she
could apply, and so that her CV would stand every chance of succeeding. It would now be clearly
apparent if less qualified men were promoted over her.
A senior professor took a more nuanced but still pragmatic approach: he would adapt his
activities to ‘what counts’ in order to be able to continue doing ‘what matters.’ He was involved
in several faculty committees and was aware of the political background to the faculty’s points
system. He objected to this kind of system because it ‘is a screw without end’ – it ‘wants more
and more, spreads its logic and runs by itself. Such a system...becomes progressively more fine-
meshed and more and more complicated, and we have to use more and more time to do it
correctly’ (author’s translation). But he accepted the points system as an inevitability in the
prevailing political climate. He praised the dean for designing a points system for ‘knowledge
spreading’ that was suited to the humanities: ‘Before someone pushes a system down over our
heads, let’s make our own, and see if we can sell it [to the ministry]’ (author’s translation). He
felt the faculty leadership had negotiated to get the best result possible for the humanities. Now
he was assessing whether and how he should adjust his own activities. He pointed out that even
though the 2003 University Law required universities to develop closer relations with the
surrounding society, the points system privileged publications in English for an international and
academic-only audience. He could publish a book in English with a ‘top’ press, but only five
people would ever read it and, anyway, humanities had an obligation to write for a Danish
audience. Nor was there a clear division between ‘research’ and ‘knowledge dissemination’ in
humanities: his latest book was based on new knowledge and research funded by the Research
Council, but the publisher also marketed it to education institutions and the general public. How
should this be ‘counted?’ He would continue publishing for a public readership in order to
generate social debate about cultural institutions, but he had become an advisor to the publisher
so as to persuade them to set up peer-reviewing and ensure that his next book would count for
more. Maybe he would just do enough ‘top’ publishing to earn points for his department, and
protect the rest of his time for working on local cultural and communication activities and doing
what he felt a senior academic should do to contribute to society. He pointed out that such a
strategy was perhaps only available to an established professor, who, as he said, could feel fairly
safe.
What concerned him most was the ‘management power’ that went into fulfilling the demands
from above to register and deliver the right numbers. He felt that before the 2003 University
Law, deans and department leaders had not had enough ‘management power’ to go in and sort
out the minority of malfunctioning departments and academics who did not work or behave
properly. Now deans and department leaders had considerable management power, but it was
used primarily to fulfil demands from the ministry and the university leadership. The department
leader came round to find out how many international conferences he had held that year, as he
had to deliver X-number to the dean. The department was too big, and the top-down demands for
performance by numbers so great, that the department leader could not use his management power
to sort out the interpersonal problems which needed solving to create an enthusiastic and exciting
work environment. It fell to the leaders of the research groups in the department to do the kind
of management that shapes the work environment, but they were allocated only 20 per cent of
their time both for leading their research group and for sitting on the department's research
committee. Meanwhile, with leaders who were 'not our own,' and no forums through which to
exercise more than a corrective influence on actions emanating from above, as he expressed it,
‘we have been put outside the door of our own house’ (author’s translation). He no longer felt a
sense of ownership of his workplace and was less collective minded. Whereas the government
and the leadership clearly wanted a university of academic entrepreneurs, he found himself
acting more and more like a wage labourer. Increasingly cantankerous and scrupulous, he found
himself demanding that if a task had to be done, he needed an allocation of hours for it, and he
would not sit through activities, like the university leadership’s events to foster entrepreneurship,
if he could not see the ‘point’ in them.
Another woman, a fairly newly appointed professor, saw the competitive system as opening
up opportunities for her to pursue her professional dream of the kinds of scholarship that had
been closed off to her before. She had entered into the various initiatives for competitive funding
of research. She had won one of the minister’s prizes as a ‘star researcher’ and had won
competitive funding to establish a research centre and other substantial research grants. This
centre’s annual performance was judged against the publication output of other centres, mainly
from the sciences. The centre had therefore adopted a publication strategy similar to that of their
competitors. They focused on getting research quickly into the public domain and produced a
high number of often multi-authored articles aimed at ‘top’ journals. This strategy mirrored the
ministry’s points system and would score well on the faculty’s points system. They found this
level of output extremely hard for a humanities subject, but they explained that they paid the
price of this competitive strategy in order to create a space for pursuing research topics that had
never gained approval in departments under the old elected leaders and their male cronyism. As
one of her colleagues said:
When there was an elected leader, it was always a man, and he would protect his
own, both in terms of gender and in terms of research agendas … Only a small
elite ever had research freedom; those with permanent positions. A clique set the
agenda for everyone else and if you didn’t like it, then you had to move
somewhere else. This was called solidarity. People with permanent positions
talked about freedom, and half the teaching was done by external [casual]
lecturers, who had no right to do research, and in the name of solidarity they [the
casualized staff] did the teaching.
Within this new system of appointed leaders, upward accountability, and the abolition of
organs for dialogue and influence at departmental level, they had established their centre as a
self-contained oasis with the kind of collaborative, mutually supportive, and ‘flat’ organization
that was not possible under the previous form of governance and before such pockets of
competitive funding became available. The members of the research centre, gathered for a group
interview over lunch, explained how they were able to develop international networks, explore
new research approaches, co-manage projects, and create a supportive environment for each
other and the PhD students, and share in discussion and decision making with an openness and
enthusiasm never possible before. Their strategy was similar to that of the research group in the
life sciences, which also tried to create an open and flat structure, but the difference was that this centre
had external funding which made it largely independent of the department. The external funders
did not expect the centre to have this form of management. They expected to interact with the
‘star’ professor as an all-powerful leader atop a pyramid of top-down management; and projects
now had to be large, headed by a single person, with colleagues and PhD students represented as
subordinate and managed. The ‘star’ professor, without whom the centre could not exist, acted as
a buffer between these external demands for power concentration, and the flat, open and co-
operative way that the senior staff and PhD students ran the centre. She also acted as an
ambassador to the rest of the university. She had worked hard to get centre staff appointed to
long-term positions to give them a secure future when the centre’s funding ended. There were
still instances of 'mysterious' decisions elsewhere in the faculty, announced by male colleagues,
which they took as evidence of continuing buddying and cronyism, but they had a good relationship with their head of
department, and the way that international networking, ‘top’ publications, and external funding
now ‘counted’ meant that they had a new-found recognition and respect that gave them
negotiating power. She used this not only to secure their own research area, but also to play a
role in shaping the new steering arrangement in her own university and, through talks at training
events for research leaders, also nationwide. However, the members of the centre had to be alert
to shifts in the direction the wind was blowing and ‘move faster than the wind,’ which was a
continual pressure. This was a high-octane strategy to use the new steering arrangement as an
opportunity for positive change – a strategy that was extremely demanding, and not open to
everyone.
In sharp contrast, a male lecturer who was establishing a very successful career for himself,
not through a secluded centre, but through research, teaching, and teaching administration in a
mainstream department, took an approach of principled opposition. For him, the research points
system made a travesty of the university. He argued, like the established male professor above,
that it rewarded only high-prestige academic publishing, and contradicted the government’s own
requirement, written into the 2003 University Law, that universities should relate more strongly
with ‘the surrounding society.’ For him it was central for the humanities to write both for an
academic and a popular audience, to give talks to local groups all over the country on topics of
Danish literature, history, and culture, to engage in public debate on the media, and to use their
own judgment about which topics to pursue, regardless of their immediate value to Danish
industry. But he was not prepared to shroud himself in a cloak of performativity under which he
could continue to work on ‘what matters.’ He wanted an integral approach to the pursuit of
academic knowledge running coherently through all his activities. If academics’ work was to be
articulated through the points system, it meant that, to be good citizens in their department, they
must put all their energy into academic publications that scored points and withdraw from the
other activities. Even worse than this, the points system attacked the scholarship of exploring
ideas deeply and over a period of time. As the points system rewarded quantity not quality, it
invited cynicism and academic gamesmanship, like ‘salami-slicing’ research into as many
articles as possible to earn maximum points. He said that the energy and enthusiasm had gone
out of a previously very vibrant faculty; many colleagues expressed feelings of thorough
tiredness and a loss of thirst for scholarship; and the talk was of searching for jobs elsewhere.
Colleagues were not inactive: they raised local and national petitions and argued their principled
position strongly in faculty meetings and the media. But their arguments, based on their
professional knowledge and experience, were being made irrelevant in the face of the
government’s and the leadership’s espousal of a need to act urgently to meet a fast-approaching
and inevitable future. This lecturer expressed a feeling of dispossession even more strongly than
the male professor who was put outside the door of his own house. He wanted to inhabit a figure
of academic integrity – one of passionately pursuing knowledge and sharing critical
understanding with students and the general public – which stood in sharp contrast to
instrumental and pragmatic responses to the points system.
A union representative who was on the university’s governing board pointed to the split
between academics’ motivation and the incentives built into the points system:
Incentives only work if academics understand them as a positive support for their
inner motivation. This system does not chime with their inner motivation at all – a
need for recognition and a love of knowledge – to put it a bit pompously. That’s
why people choose to work at a university. It does not make sense to score points
if the points don’t measure what one thinks one should be doing. (quoted in
Villesen 2008c, author's translation)
One of the petition organizers put it more dramatically: ‘The points system impinges on the
individual researcher’s actual work in a completely destructive way’ (author’s translation). A
faculty member who had a senior position in the Royal Danish Academy of Sciences and Letters
gave a label to this feeling of an attack on the professional persona:
The research quality model has a potentially damaging effect: that is, the
individual researcher has felt her or himself hit existentially. As the process has
gone on, it has at the same time become clear that it isn’t about the individual
doing their best in relation to the model, and that means the question of self-value
has been separated from the question of points. (Humanities Faculty Secretariat
2007a, author's translation, emphasis added)
This ‘existential stress’ had two dimensions: a threat to their sense of professional identity and
self-worth through a changed relationship to their work; and the change to top-down leadership,
through which they lost responsibility for decision-making and for making their department and
faculty a success. For these academics to adopt a pragmatic cloak in order to protect their ‘real’
academic values was an existential step too far. There were no half measures, and no conception
of the possibility of cynical game-playing, in their reaction to the points system. If they were to
follow the incentives in the points system wholeheartedly, be good departmental citizens, and do
what it took to earn maximum points, this would undermine the quality of their academic work.
Conclusion
The introduction of a points system to count, value and rank research output and use the results
in a formula for the competitive allocation of funding was core to a new form of university
governance in Denmark. It has been argued above that such new forms of governance rely on
one mechanism to try to re-order three scales of activity at once: the organisation of a whole
sector; the management of constituent organisations; and the 'wise self-conduct' of individuals.
In terms of reorganising the university sector, the government argued that Denmark
should aim to have at least one university in the world’s ‘top 100’ and that the best way to
achieve this was to make all eight Danish universities compete with each other on the same
criteria. There was little public discussion of the implications of this strategy. First, the focus on
publishing in ‘top’ journals in all disciplines did not accord with the methodologies used in
world rankings. Second, why make all universities compete for funding over the number of
international journal publications when this only matched the profile of some universities? Why
was it in Denmark’s best interests to reorganise the whole sector around this narrow measure of a
‘world top class’ university when the sector was characterised by a diversity of universities, with
some focusing on traditions of radical education and on working for their region? Third, the
standardised allocation of points privileged those disciplines deemed central to Denmark’s
survival in the global knowledge economy and was not equitable in its effects across the sector.
The implications of implementing a standardised points system in different faculties and
disciplines were made very clear by the Danish humanities faculties. Although the leaders of
humanities faculties lobbied hard, the eventual system, as shown above, easily fitted life sciences
and rewarded their English-language, journal-based publication pattern, but it devalued key
elements of the humanities publication pattern. Fourth, the competitive points system created a
‘treadmill’. Universities do not receive a fixed payment per research point; rather, a funding pot
is divided between them according to their share of the year’s total points. Departments, faculties
and universities are pitched against each other in a continual quest to increase their output in
order to sustain their relative share of the funding. As they collectively increase the speed of the
treadmill, they raise Denmark’s total publications output and the total points score, but the value
of each point, or the return on this increased effort, declines. The effects in terms of stress and
collapse were seen in the life sciences department above, but from the government’s standpoint it
is a very cost-effective way of getting ever-increasing output from the sector.
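The treadmill arithmetic can be sketched in a few lines with hypothetical figures (the actual pot size and point totals are not given here): because the pot is fixed and each institution's grant is its share of the year's total points, a collective increase in output leaves every grant unchanged while the value of each point falls.

```python
# Hypothetical sketch of the fixed-pot, share-based allocation described above.
# The pot size, institution names, and point totals are invented for illustration.

def allocate(pot, points):
    """Divide a fixed funding pot between units by their share of total points."""
    total = sum(points.values())
    return {unit: pot * p / total for unit, p in points.items()}

pot = 100_000_000  # fixed annual research pot (hypothetical)

year1 = {"Uni A": 1000, "Uni B": 1000}
year2 = {"Uni A": 1500, "Uni B": 1500}  # both raise output by 50 per cent

grants1 = allocate(pot, year1)
grants2 = allocate(pot, year2)

# Grants are unchanged, because relative shares are unchanged...
assert grants1["Uni A"] == grants2["Uni A"] == 50_000_000
# ...but the return per point has fallen: the treadmill speeds up for no gain.
assert pot / sum(year1.values()) > pot / sum(year2.values())
```

On this logic, only a unit that outpaces the sector's average growth gains funding; merely matching it sustains the current share at ever greater effort.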
In terms of transforming the internal organisation of universities, the points system
became an important management tool. The setting up of a competitive system for internally
allocating funding throughout the organisation was symbolically and practically a means of
demonstrating the existence of a new ‘unified’ leadership. As seen above, the contract between
the rector and the dean of humanities required the faculty to establish a competitive system for
allocating funding between the departments of the faculty. The faculty leadership invested heavily in
developing such a points system for both research and knowledge spreading and treated this as a
sign of their ability to perform their new role and to demonstrate their loyalty and accountability
to the top-down leadership structure.
This new leadership system was predicated on two expectations, which were not
always aligned, about the effects of the points system on the third scale – academics' own 'wise
self-management'. First, it was expected that academics would quickly learn 'what counts' and
would adjust their behaviour. But leaders were not consistent about how they expected
academics to respond. Sometimes they expected academics to adopt new values and behaviours;
at other times they expected them both to change their conduct and to sustain the values
associated with the previous system of governance. Two examples were noted above. First, the
teaching payment based on students’ passing exams is meant to encourage academics to increase
student throughput, yet government does not expect output-based funding to act as an incentive
for lowering standards. Instead, the government looks to academics’ old (and now
disincentivised) sense of professionalism to maintain educational quality. Second, whereas the
research points system rewarded the number of publications not their quality, one pro-dean felt
that the premium for publishing in peer reviewed international journals would act as an incentive
to change the publication pattern of humanities’ academics and would thereby improve quality.
Other leaders seemed to admit that the incentives built into the research points system ran
counter to academics’ professional values and advocated a pragmatic approach, verging on the
cynical. They expected academics to respond by doing enough of ‘what counts’ to create a
protective carapace under which they could continue to do ‘what matters’. When a similar
situation faced universities in the UK, Shore and Wright (1999) referred to the schizophrenic
academic, who complied with the demands of audit culture, and yet tried to continue researching
and teaching in keeping with their own academic values as well – until they became too
exhausted to sustain this double workload.
The second expectation, integral to the new form of governance, was that the
system’s incentives would accord with academics’ own professional motivation. In one case
above, it did. The woman professor in the humanities engaged successfully in the new
competitive ethos and expended great effort on writing funding applications and engaging in new
publishing practices. As a result she was able to open up a new space, not available previously,
both to develop a new research field and to establish a ‘flat’ collegial environment. Hansen
(2011) shows that such super-successful academics (also known as project barons) are successful
precisely because they keep focused on their core research agenda throughout their careers. They
find ways to bend the changing funding systems and organisational conditions to their research
needs (rather than continually adapting their research to external incentives, as government
expects). In this case the woman professor used new sources of competitive and external funding
to create a research centre at arms-length from the new ‘unified’ leadership. Just at the time
when universities were becoming coherent, centrally steered organisations, such super-successful
academics acted as buffers between the mainstream and their centres and introduced new
loosely-coupled spaces into the organisation.
Most academics could not find this kind of buffered space in which they could
maintain their academic integrity. Even where a research group in the life sciences tried to create
a mutually supportive environment, they were still integrated into the financial incentives and
decision making systems of the department and the leader could not buffer them as effectively as
the super-successful leader with her externally funded centre, above. In the life sciences, the
system of governance did work across all three scales and the academics knew that they had to
focus their energy on ‘what counted’ (journal publications and external funding) if they were to
have a career in the department or even maintain their employment. The associated ideas of
‘performance’, ‘time’ and ‘competitive expansion’ were so normalised, even hegemonic, that
they were rarely discussed in public. Where people learnt to curb their ambition in order to
perform according to the incentives, and had lost sight of using research freedom to advance
their discipline, the pressures were internalised and exhibited as stress. In the humanities, one
interviewee adapted herself to the new conditions in order to test whether the supposedly
transparent performance criteria would generate more gender-equal appointment decisions in
practice. Another considered how to adopt the carapace of performance whilst continuing
underneath with the full range of work that mattered. But he was very conscious of, and worried
about, changes he perceived in his own behaviour and attitude that smacked of the niggling wage
labourer who no longer had control of his own work environment. In the uproar in the faculty,
academics took a principled stand against the measurement of their work in ways that did not
accord with their professional ambitions and values. As one put it, this was an existential threat.
The British professor of anthropology Marilyn Strathern commented (personal communication)
about a similar problem when audit culture was being introduced in the United Kingdom:
academics are trained to seek the truth, and believe that this truth should integrally inform all
aspects of their persona, so it seems like dissembling when a surface compliance is required that
should be split off from an integral core.
Partly the conflict and confusion over the research points system seems to derive
from the double meaning of governance. Two meanings of governance were in play throughout
these events: the ‘old’ meaning of an individual and organisation keeping themselves in good
order through their own wise self-command; and the 'new' meaning, where the government
decides the mission and aims for organisations and individuals, sets up incentive systems like the
competitive allocation of funding based inter alia on research points, and expects organisations
and individuals to ‘voluntarily’ order themselves in response. Much of the stress in the life
sciences and the uproar in the humanities can be attributed to this shift from a 'bottom up' to a
‘top down’ form of governance. Expressed in terms of having to limit their own ambition, or
being ‘set outside the door’ of their own house, the existential threat came from government
trying to assume the power to narrowly redefine the purpose of universities and usurp academics’
responsibility for their own work. The uproar in the humanities shows how, where passion and
points were at loggerheads, motivation and incentive came asunder. If the individual's wise
self-conduct, the third scale of governance, does not fall into line with the leaders' organisational
incentives and the government's league-table mission for the sector – what Hazelkorn calls 'the
academic arms race’ (2008: 209) – the supposedly invisible workings of this top-down system of
governance become exposed and can be contested.
Acknowledgements
Very many thanks to Dorothy Smith and Alison Griffith for inviting me to participate in the
invitational workshop 'Governance on the Frontline', 15–18 October 2009, at York University,
Toronto, Canada. It was an honour and an inspiration to work with Dorothy Smith and the
members of the Institutional Ethnography network, and to get their feedback on an early draft of
this chapter. I am very grateful to the interviewees in the two departments studied and to the life
sciences department for inviting me to present this analysis. Many thanks to Claus Emmeche, not
just for maintaining the blog and web site Forskningsfrihed?, which is an invaluable site for
discussing university reforms, but also for giving me detailed comments on this and earlier texts
about the research points system. Finally, thanks to Kirsten Marie Bovbjerg and Jakob Krause
Jensen for giving me inspiration throughout the whole ‘stress project’ and extremely helpful
feedback on this chapter, and I thank them, Jakob Williams Ørberg and Rebecca Boden for their
comments.
References
Ammitzbøll, Lisbeth. 2007. ‘Nødråb fra humaniora.’ Magisterbladet, 6–7, June.
Arbejdsgruppen vedr. Oplevelsesøkonomi. 2005. ‘Danmark skal vinde på kreativitet:
Perspektiver for dansk uddannelse og forskning i oplevelsesøkonomien.’ Copenhagen:
Ministry for Science, Technology and Development.
Auken, Sune. 2010. Hjernedød. Til forsvar for det borgerlige universitet. Copenhagen:
Informations Forlag.
Baggersgaard, Claus. 2007. ‘Nødråb til dekan.’ Universitetsavisen, 29 March.
Bovbjerg, Kirsten Marie (ed.). 2011. Motivation og mismod. Effektivisering og stress på
offentlige arbejdspladser. Aarhus: Arhus Universitetsforlag.
Carr, Kim. 2011. ‘Improvements to Excellence in Research for Australia.’ Australian Government
media release, 30 May.
http://archive.innovation.gov.au/ministersarchive2011/Carr/MediaReleases/Pages/
IMPROVEMENTSTOEXCELLENCEINRESEARCHFORAUSTRALIA.html
Ciancanelli, Penny. 2007. ‘(Re)producing Universities: Knowledge Dissemination, Market
Power and the Global Knowledge Commons.’ In Deborah Epstein, Rebecca Boden,
Rosemary Deem, Fazal Rizvi and Susan Wright (eds), Geographies of Knowledge,
Geometries of Power: Framing the Future of Higher Education. World Yearbook of
Education 2008, 67–84. London: Routledge.
Copenhagen University. 2009. ‘Rektor’s Briefing on the Financial Situation,’ 20 November.
Accessed on 3 October 2010 at
http://www.humanities.ku.dk/about/management/rector.pdf
Copenhagen University, Aalborg University, Aarhus Business School, Copenhagen Business
School, Roskilde University Center and University of Southern Denmark. 2008.
‘Humanistundersøgelsen 2007. Humanisternes veje fra uddannelse til job.’ Copenhagen:
Copenhagen University.
Dean, Mitchell. 1999. Governmentality: Power and Rule in Modern Society. London: Sage.
Dreyfus, Hubert and Rabinow, Paul. 1982. Michel Foucault: Beyond Structuralism and
Hermeneutics. Brighton: Harvester Press.
Düwel, Lene. 2009. ‘Væksten der blev væk.’ Kureren, 30 November. Accessed on 12 March
2010 at http://kureren.ku.dk/artikler/november_2009/vaeksten_der_blev_vaek/.
Emmeche, Claus. 2009a. ‘Hvem er danske universiteter?’ Universitetsavisen, 10 September.
Accessed on 5 June 2012 at http://universitetsavisen.dk/debat/synspunkt-hvem-er-danske-
universiteter
Emmeche, Claus. 2009b. ‘Mareridt, damage control eller forskningsrelevante kvalitetskriterier?
Notat om faggruppernes forbehold overfor den bibliometriske forskningsindikator
efter niveaudelingsprocessen og indtastning af tidskriftlisterne pr. 15/9-2009.’
Humanistisk Forums Blog. http://humanioraforum.wordpress.com
Faurbæk, Lotte. 2007. ‘Humanistisk Forskningskvalitet. Rapport om det humanistiske
kommunikationsmønster og internationale forskningsmodeller.’ Copenhagen: Humanities
Faculty and Copenhagen University Library.
FI (Forsknings- og Innovationsstyrelsen). 2004. ‘Humanistisk viden i et vidensamfund.’
Copenhagen: Ministry for Science, Technology and Development. Accessed on 8 May
2012 at http://www.fi.dk/publikationer/2004/humanistisk-viden-i-et-vidensamfund
FI (Forsknings- og Innovationsstyrelsen). 2007. ‘Kommissorium for styregruppen til udvikling af
dansk kvalitetsindikator for forskning,’ 27 February.
FI (Forsknings- og Innovationsstyrelsen). 2009a. ‘Aftale mellem regeringen (Venstre og Det
Konservative Folkeparti), Socialdemokraterne, Dansk Folkeparti og Det Radikale Venstre
om ny model for fordeling af basismidler til universiteterne,’ 30 June. Accessed on 8 May
2012 at http://www.fi.dk/forskning/den-bibliometriske-forskningsindikator/aftale-om-
basismidler-efter-resultat.pdf
FI (Forsknings- og Innovationsstyrelsen). 2009b. ‘Samlet notat om den bibliometriske
forskningsindikator.’ 22 October. Copenhagen: Research and Innovation Agency.
Accessed on 8 May 2012 at http://static.sdu.dk/mediafiles//A/0/7/%7BA0719ADA-
D762-418B-A97A-DB62C6630B95%7D22.%20oktober%202009-%20Samlet%20notat
%20om%20forskningsindikatorer.pdf
Folketinget (Parliament). 2003. Act on Universities, Act no 403 of 28 May 2003. Accessed 8
November 2005 at http://www.videnskabsministeriet.dk/cgi-bin/theme-list.cgi?
theme_id=138230.
ForskerForum. 2009a. ‘Fagligt oprør mod embedsmands-ranglist,’ 17 March. Accessed on 11
May 2012 at http://www.forskeren.dk/?p=198
ForskerForum. 2009b. ‘Embedsmands-ranglist trukket tilbage,’ 27 March. Accessed on 11 May
2012 at http://www.forskeren.dk/?p=218
Forskningsfrihed? 2007. ‘Debat: KBH: Målesystemet, der ikke kunne måle,’ blog, 28 February.
Accessed on 5 May 2012 at http://professorvaelde.blogspot.com/2007/02/debat-kbh-
mlesystemet-der-ikke-kunne.html
Gibbons, Michael, Limoges, Camille, Nowotny, Helga, Schwartzman, Simon, Scott, Peter and
Trow, Martin. 1994. The New Production of Knowledge. London: Sage.
Giddens, Anthony. 1998. The Third Way: The Renewal of Social Democracy. Cambridge: Polity.
Government of Denmark. 2006. Progress, Innovation and Cohesion. Strategy for Denmark in the
Global Economy – Summary. Copenhagen: Globalization Council. May. Accessed on 5
May 2012 at http://www.globalisering.dk/multimedia/Pixi_UK_web_endelig1.pdf
Hansen, Brigitte Gorm. 2011. ‘Adapting in the Knowledge Economy: Lateral Strategies for
Scientists and Those Who Study Them.’ PhD thesis. Copenhagen: Copenhagen Business
School.
Hazelkorn, Ellen. 2008. ‘Learning to Live with League Tables and Ranking: The Experience of
Institutional Leaders.’ Higher Education Policy 21: 193–215.
Henry, Miriam, Lingard, Bob, Rizvi, Fazal and Taylor, Sandra. 2001. The OECD, Globalisation
and Education Policy. Oxford: Pergamon.
Hesseldahl, Morten, Nørregård-Nielsen, Hans Edvard, Lauridsen, Karen M., Skov, Anne Marie,
Kyndrup, Morten, Holm, Isak Winkel and Øhrgaard, Per 2005. ‘Humanistiske kandidater
og arbejdsmarkedet. Rapport fra en uafhængig arbejdsgruppe.’ Copenhagen: Ministry of
Science, Technology and Development. Accessed on 8 May 2012 at
http://fivu.dk/publikationer/2005/humanistiske-kandidater-og-arbejdsmarkedet/
humanistiske-kandidater-og-arbejsmarkedet.pdf
Humanities Faculty Secretariat. 2006. Minutes of the meeting of the Research Committee, 6
October.
Humanities Faculty Secretariat. 2007a. ‘Referat af møde i Forskningsudvalget den 15. marts
2007.’ 1 April.
Humanities Faculty Secretariat. 2007b. ‘Sagfremstilling vedr Forskningskvalitetsmodel’, 30
March.
Humanities Faculty Secretariat. 2007c. ‘Forskningskvalitetsregistrering på Det Humanistiske
Fakultet,’ 20 April.
Information. 2008. ‘Universiteternes virkelighed og journalistens billige grin.’ 29 January.
Accessed on 14 January 2010 at www.information.dk/153867.
Kofoed, Kristian Lund and Larsen, Jonas Deleuran. 2010. ‘Universitetsrangliste er en prestige.’
Information, 17 January. Accessed on 11 May 2012 at http://www.information.dk/221702
Larsen, Peter Stein, Mai, Anne-Marie, Ruus, Hanne, Svendsen, Erik and Togeby, Ole. 2009.
‘Forvarsel. Er det slut med at forske på dansk?’ Politiken, 15 March. Accessed on 11
May 2012 at
http://politiken.dk/debat/analyse/ECE669351/forvarsel-er-det-slut-med-at-forske-paa-
dansk/
Latour, Bruno. 1998. ‘From the World of Science to the World of Research?’ Science 280: 208–
209.
Ministry of Science, Technology and Development. 2005. ‘Humanistiske uddannelser i tal.’
Copenhagen: Ministry of Science, Technology and Development.
Ministry of Science, Technology and Development. 2009. ‘Fordeling af globaliseringsreserven
til forskning og udvikling 2010-2012’, 5 November. Accessed on 13 February 2010 at
http://vtu.dk/lovstof/politiske-aftaler/fordeling-globaliseringsreserven-forskning-
udvikling-2010-2012/
Moutsios, Stavros. 2012. The European Particularity. Working Papers on University Reform no.
18. Copenhagen: Danish School of Education (DPU), Aarhus University, February.
http://edu.au.dk/forskning/omraader/epoke/publikationer/workingpapers/
Oddershede, Jens. 2009. ‘Synspunkt – Vi er ikke i lommen på nogen.’ Universitetsavisen, 11
September. Accessed on 11 May 2012 at http://universitetsavisen.dk/debat/synspunkt-vi-
er-ikke-i-lommen-paa-nogen
Osborne, David and Gaebler, Ted. 1992. Reinventing Government. How the Entrepreneurial
Spirit is Transforming the Public Sector. New York: Plume.
Oxford English Dictionary. 1989. Vol. VI. Oxford: Clarendon.
Pedersen, Ove Kaj. 2011. Konkurrencestaten. Copenhagen: Hans Reitzels Forlag.
Petition to the University’s Governing Board. 2007. ‘Concerning the introduction of a point
system to measure research in the humanities faculty,’ 26 January.
Pollitt, Christopher, Bathgate, Karen, Caulfield, Janice, Smullen, Amanda and Talbot, Colin.
2001. ‘Agency Fever? Analysis of an International Policy Fashion’, Journal of
Comparative Policy Analysis: Research and Practice 3: 271-90.
Rhodes, R. A. W. 1997. Understanding Governance. Buckingham, UK: Open University Press.
Richter, Lise. 2007. ‘Humanister: Reformer dræber arbejdsglæden.’ Information, 25 July.
Accessed on 11 May 2012 at http://www.information.dk/137756
Richter, Lise. 2009. ‘Sander: Ranglister over forskning er ikke så enkelt’ Information, 19 March.
Accessed on 11 May 2012 at http://www.information.dk/185787
Richter, Lise and Villesen, Kristian. 2009. ‘Ministeriets rangliste over forskning er fyldt med
fejl.’ Information, 18 March. Accessed on 11 March 2012 at
http://www.information.dk/185737
Rose, Nikolas. 1989. Governing the Soul. London: Free Association Books.
Schneider, Jesper. 2009. ‘An Outline of the Bibliometric Indicator Used for Performance-based
Funding of Research Institutions in Norway.’ European Political Science 8: 364–78.
http://ffarkiv.pbworks.com/f/Bibliometric-Indicator-Norway_JW.Schneider2009.pdf
Schneider, Jesper and Aagaard, Kaare. 2012. ‘“Stor ståhej for ingenting” – den danske
bibliometriske indikator.’ In Aagaard, Kaare and Mejlgaard, Niels (eds), Dansk
Forskningspolitik efter Årtusindskiftet. Aarhus: Aarhus Universitetsforlag.
Shore, Cris and Wright, Susan. 1999. ‘Audit Culture and Anthropology: Neoliberalism in British
Higher Education.’ Journal of the Royal Anthropological Institute 5: 557–575.
Shore, Cris and Wright, Susan. 2000. ‘Coercive Accountability: The Rise of Audit Culture in
Higher Education.’ In Marilyn Strathern (ed.), Audit Cultures. Anthropological Studies in
Accountability, Ethics and the Academy, 57–89. EASA Series. London: Routledge.
Shore, Cris and Wright, Susan. 2011. ‘Conceptualising Policy: Technologies of Governance and
the Politics of Visibility.’ In Cris Shore, Susan Wright, and Davide Pero (eds), Policy
Worlds: Anthropology and the Analysis of Contemporary Power, 1–25. Oxford:
Berghahn.
Shorett, Peter, Rabinow, Paul, and Billings, Paul R. 2003. ‘The Changing Norms of the Life
Sciences.’ Nature Biotechnology 21(2): 123–125.
Sivertsen, Gunnar and Schneider, Jesper. 2012. Evaluering av den bibliometriske
forskningsindikator. Rapport 17/2012. Oslo: NIFU.
Universitetsavisen. 2007. ‘Lad som ingenting.’ Universitetsavisen, 24 April.
Villesen, Kristian. 2008a. ‘Humaniora-konflikt skyldes universitetsloven.’ Information, 8
February.
Villesen, Kristian. 2008b. ‘Humaniora-dekan: De ansatte klynker.’ Information, 8 February.
Villesen, Kristian. 2008c. ‘Vrede over nyt pointsystem.’ Information, 25 January.
Weick, Karl. 1976. ‘Educational Organizations as Loosely Coupled Systems.’ Administrative
Science Quarterly 21(1): 1–19.
Wright, Susan. 2005. ‘Processes of Social Transformation: An Anthropology of English Higher
Education Policy.’ In John Krejsler, Niels Kryger, and Jon Milner (eds), Pædagogisk
Antropologi: Et fag i tilblivelse, 185–218. Copenhagen: Danmarks Pædagogiske
Universitets Forlag.
Wright, Susan. 2008. ‘Governance as a Regime of Discipline.’ In Noel Dyck (ed.), Exploring
Regimes of Discipline: the Dynamics of Restraint, 75–98. Oxford: Berghahn, EASA
Series.
Wright, Susan. 2009. ‘What Counts? The Skewing Effects of Research Assessment Systems.’
Nordisk Pedagogik/Journal of Nordic Educational Research (29): 18–33.
Wright, Susan and Ørberg, Jakob Williams. 2008. ‘Autonomy and Control: Danish University
Reform in the Context of Modern Governance.’ Learning and Teaching: International
Journal of Higher Education in the Social Sciences 1(1): 27–57.
Wright, Susan and Ørberg, Jakob Williams. 2009. ‘Prometheus (on the) Rebound? Freedom and
the Danish Steering System.’ In Jeroen Huisman (ed.), International Perspectives on the
Governance of Higher Education, 69–87. New York: Routledge.
Øllgaard, Jørgen 2003. ‘Et træt dødsleje,’ FORSKERforum 164: 1.
Ørberg, Jakob Williams. 2007. Who Speaks for the University? Legislative Frameworks for
Danish University Leadership 1970–2003. Working Papers on University Reform no. 5,
May. Copenhagen: Danish School of Education, Aarhus University.
http://www.dpu.dk/site.aspx?p=9165
Notes
1 The route of the concept of autonomous university governance from shared
European origins through the intervening centuries varies in different countries, especially with the
influence of Humboldtian ideas of university autonomy in Germany and Denmark (see Moutsios
2012).
2 The research involved analysing the evolution of the government’s points system through a trail of
government documents, newspaper articles and blogs over three years. It also involved 21
interviews with managers and academics at different stages in their careers, participant observation
at meetings and events, and the analysis of minutes, policies, and petitions generated within the
faculties.
3 This view was given by a former government official who had been instrumental in devising this
exam-based output funding for teaching in an interview conducted by Ørberg and Wright.
4 The Thomson ISI citation index is the most used. The way that the five major publishing firms are
benefitting financially from government policies to publish in ‘top’ journals and count citations and
journal impact, is documented by Ciancanelli (2007).
5 The globalisation pool included new money allocated to universities to lead Denmark’s
engagement in the global knowledge economy. It was also the vehicle for returning to universities
part of the 2% public sector budget cut applied to universities (and all public sector budgets) each
year. Money for upgrading laboratories came from profits from the state system of renting buildings
to the universities.
6 The latest THE methodology gives a 30% weighting to citation scores in Web of Science journals
and a 6% weighting for the number of articles per staff member published in those journals.
http://www.timeshighereducation.co.uk/world-university-rankings/2012-13/world-ranking/
methodology
7 Shorett, Rabinow and Billings (2003: 124) record that in the United States, corporations account
for over half of all national funding for biomedical research and development and supply 14 per
cent of funding for academic research in biotechnology areas. More than 25 per cent of life science
faculty participate in industry relationships, as do 39 per cent of genetic researchers in clinical
departments.
8 There is no academic tenure in Denmark. Lecturers and professors are on permanent
appointments; assistant professors and PhD students are on fixed-term appointments; ‘external
lecturers’ are casually employed.
9 In November 2009 the political parties agreed the university budget that was to come into effect in
January 2010. Thus, with less than two months’ notice, Copenhagen University learnt that, although
its basic grant would go up by 63 million kroner, so much of it had been earmarked for specific
purposes that the university had a shortfall for funding staff salaries. The university leadership
reduced the basic funding of faculties by 60–70 million kroner, and this translated into cutting 130
posts, mainly in the departments of biology, life sciences and geography (Düwel 2009;
Copenhagen University 2009).
10 The humanities faculty’s economy was dependent on both the government’s basic funding for
research and the government’s funding for teaching output. The humanities faculty had fewer than
300 externally funded research projects, whereas the life sciences faculty had 1,500, half funded by
Danish public funds, a third by Danish private funds and 250 by the EU or other international
sources. Life sciences had a third more staff than humanities, employed predominantly on research,
with very little teaching. Of the humanities staff, three-fifths worked on research and half on
teaching, but a sixth were on casual teaching contracts.
11 As mentioned above, the government’s system of payment for university teaching operates on a
similar confusion: it only makes a payment to the university when a student passes an exam, to act
as an incentive for throughput, but it relies on academics’ (previous and now disincentivized)
professional standards to uphold quality.