
Comprehensive performance assessment in English local government

Chris Game
Institute of Local Government Studies (INLOGOV), The University of Birmingham, Birmingham, UK

Abstract

Purpose – The purpose of this paper is to provide, at a particularly significant point in its short history, an overview of a unique system of performance management to which all principal local authorities in England have been subject for the past three years.

Design/methodology/approach – Comprehensive performance assessment (CPA) is the controversial centrepiece of a system of performance measurement and improvement management that has involved the external classification of each individual local authority as Excellent, Good, Fair, Weak or Poor. It is a system that, as comparative data on the scale of local government demonstrate, could only be attempted in the UK. The article is written as a non-technical and evaluative narrative of the introduction, early operation and impact of this system, concluding with the changes in methodology introduced to counter the phenomenon of too many of the nation’s local authorities becoming officially too good for the existing measurement framework.

Findings – Key points that the article brings out concern the exceptional circumstances of UK local government that make such a performance management system even contemplatable, the improvement and recovery part of the regime, and the inherent implications of a system geared to providing regular statistical evidence of continuous performance improvement.

Originality/value – The originality lies in the CPA system itself, aspects of which at least will be of interest both to specialists in performance measurement and management and to those with an interest in decentralized government and intergovernmental relations.

Keywords Performance measurement (quality), Performance management, Decentralised government, Local government, England

Paper type Viewpoint

The problem with English local government? It’s just too good!

It’s official! Two-thirds of major councils are “Good” or “Excellent”

52 promotions, 2 relegations – only one “Poor” council left

Islington transformed from “Poor” to “Good” in just two years

Coventry’s social service scores propel them out of bottom division

Warrington’s corporate assessment holds them back again

The current issue and full text archive of this journal is available at www.emeraldinsight.com/1741-0401.htm

This article is a heavily edited version of a paper “Comprehensive performance assessment in English local government: has life on Animal Farm really improved under Napoleon?”, delivered at the Conference of the European Group of Public Administration (EGPA) in Bern, Switzerland, September 2005.

International Journal of Productivity and Performance Management
Vol. 55 No. 6, 2006, pp. 466-479
© Emerald Group Publishing Limited, 1741-0401
DOI 10.1108/17410400610682497


Since December 2002, headlines like these have become standard fare in British local government magazines and journals in what are often their last pre-Christmas issues. UK readers have become used to them, yet to anyone unfamiliar with the uniquenesses of our governmental system, they must seem rather extraordinary. Indeed, but for the odd references to “councils”, “services”, and “corporate assessment” – and the winners outnumbering losers by a ratio of 26 to 1 – the whole thing might seem to have escaped from the sports pages.

In fact, of course, these headlines have nothing to do with sport, at least in the conventional, organised sense (although, as in all performance measurement exercises, “gaming” skills are at a premium – see Pollitt, 1989). Rather, they are the public manifestation of our comprehensive performance assessment (CPA) system – the means introduced by the Labour Government in 2002, by which all local authorities would have their overall performance regularly externally evaluated and scored, thereby, it was argued, prompting a continuous improvement in the quality of their service provision.

This article presents an overview of the CPA regime – the way it works, its origins, its methodology, its results, and its impact – at a particularly significant point in its life cycle. For the problem with a system that has to justify itself by producing regular evidence of continuous improvement is that sooner or later – in this case sooner – almost everyone arrives at or near the top of the scale, with apparently nowhere else to go. When the great majority become good or excellent, the terms lose any meaning they once may have had and almost no one is left in need of improvement. This is the dilemma that, after just three years of the CPA’s operation, confronted the Government in 2005: England’s local government, as judged by its own appointed assessors, was just too good!

Snakes and ladders – but mainly ladders

The idea of CPA is that all principal local authorities (councils) in England[1], regardless of size or range of responsibilities, can and should be comprehensively assessed – by the independent (though ministerially appointed) Audit Commission – and then assigned to one of just five performance categories: initially Excellent, Good, Fair, Weak, Poor; now 4 stars to 0 stars. These overall ratings, through a methodology detailed later in the article, are derived from weighted assessments of a council’s core services – education, social care, housing, the environment, libraries and leisure facilities, welfare benefits – a general assessment of its use of resources, and a grading of its corporate capacity to improve.
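The weighted aggregation just described can be sketched in code. This is purely illustrative: the weights and example scores below are invented for the sketch and are not the Audit Commission’s actual values.

```python
# Hypothetical CPA-style weighted aggregation of per-service scores.
# Weights and service names below are illustrative assumptions only,
# not the Audit Commission's published weightings.
SERVICE_WEIGHTS = {
    "education": 4,
    "social care": 4,
    "housing": 2,
    "environment": 2,
    "libraries and leisure": 1,
    "welfare benefits": 1,
    "use of resources": 2,
}

def weighted_service_score(scores: dict) -> float:
    """Combine per-service scores (each 1-4) into one weighted score out of 4."""
    total_weight = sum(SERVICE_WEIGHTS[s] for s in scores)
    weighted_sum = sum(scores[s] * SERVICE_WEIGHTS[s] for s in scores)
    return weighted_sum / total_weight

# Example: a council strong on education, weaker on social care.
print(weighted_service_score({"education": 3, "social care": 2, "housing": 3}))  # -> 2.6
```

The point the sketch makes is simply that heavily weighted services (education, social care) dominate the overall score, which is why, as the article notes later, a single service such as social care could propel a council out of the bottom division.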

CPA results for England’s 150 major local authorities – London, metropolitan, unitary and county councils – are published annually in mid-December. Excellent councils – just 22 in 2002, 41 by 2004 – thus receive, as it were, an early Christmas present. For, as an integral part of the assessment and improvement system, their rating qualifies them for various Government goodies in the form of “freedoms and flexibilities”. They are excused, for example, from producing certain statutory service plans for ministerial approval, less of their grant funding is “ring-fenced”, and they are subjected to a “lighter touch” inspection regime. So, as if they were well-behaved children, these Excellent authorities are trusted a little more, given a little more discretion in spending their pocket money, and suffer a little less constant pestering



from their ministerial parents. Good councils too – 54 initially, 60 by 2004 – are rewarded in a similar, but scaled down, fashion.

Even in the first CPA round in December 2002, Excellent and Good councils totalled 76 or – conveniently for a government keen not to alienate any further a sceptical local government world – 51 per cent of all 150 councils assessed. You would scarcely have noticed it, though, from the national media headlines, as opposed to those of the local government press. For few were the stories that led by noting that a majority of the country’s biggest and most important councils had been independently judged either Good or Excellent. As ever, the far commoner practice was to express shock and horror at the unsurprising finding that a small minority – 13 or 9 per cent – had been judged Poor and a further 22 (15 per cent) Weak.

Some of these latter councils accepted their ratings with a mixture of resignation and the resolution hoped for by the Audit Commission itself, whose metaphor had been that these assessments should be viewed less as a league table than as a game of snakes and ladders:

Some councils are climbing ladders, others are slipping down snakes. Our job is to kill the snakes and build the ladders.

CPAs, in other words, were a tool for diagnosing which councils needed to improve their performance and in which key service areas. In contrast to the good and excellent, the Christmas present for these “poorly performing” authorities would consist not of more freedoms and flexibilities, but quite the reverse. With varying forms of ministerial and external oversight, they would be “assisted” in drawing up and implementing recovery plans.

For some disappointed councils, though, the first instinct has been to dispute and challenge the Audit Commission’s ratings. There have been injunctions, court cases, complaints about the CPA’s flawed methodology, scoring systems, and irrationally simplistic judgements, and accusations of attempted interference from the Prime Minister’s office. There has also emerged stronger evidence than the Audit Commission would initially admit of the impact of “external constraints” on service performance: a clear relationship between CPA scores and the characteristics – such as economic deprivation and ethnic diversity – of a local authority area (e.g. Andrews, 2004; Andrews et al., 2005). Large, prosperous and ethnically homogeneous councils (like many counties) were relatively more likely to be scored excellent, and the most deprived and ethnically diverse councils (like metropolitan and Inner London boroughs) more likely to be weak or poor.

Subsequently, both the Government and the Audit Commission have acknowledged the need for the CPA exercise to be refined. Some adjustments were made, and, as described below, more substantial changes were introduced for CPA 2005, making the overall assessment more demanding. Abandonment of the underlying principle, however, or of the essential methodology has been ruled out – at least, following Labour’s third election victory in May 2005, for the foreseeable future. Indeed, the head of the civil service, Sir Gus O’Donnell, has begun applying a version of CPA to central government departments, in the form of Departmental Capability Assessments – significantly, though, without the external validation element and with no mention either of the sanction of intervention, both of which are considered necessary for local government.



By contrast, the main opposition party, the Conservatives, favour abolishing the whole “bloated CPA regime” (Conservative Party, 2004), as do some far from Conservative commentators (e.g. Stewart, 2003, pp. 252-3). But the dominant view, even early on, was that the Audit Commission had completed a contentious and complicated job in a fearfully short time period about as satisfactorily as could reasonably have been expected (e.g. Travers, 2005, p. 78). There was a strong feeling across the local government world that the process should be less centralist and more reflective of local priorities and circumstances (Wilson, 2004, p. 65). But the public position of the Local Government Association (LGA) was that it had set a useful baseline of performance, had re-energised local government’s commitment to improvement, and that overall “the results provide a reasonably accurate picture of the distribution of council performance” (LGA, 2002).

That the LGA, under both Labour and, since 2004, Conservative leadership, has been prepared to incorporate its reservations in a continuing contribution “to the development of the proposals for CPA” (LGA, 2005, p. 1) undoubtedly owes much to the apparent evidence of local government’s efficiency and steady improvement that the CPA classifications have annually produced. In December 2003 and 2004 the Audit Commission published updated assessments for all 150 single-tier and county councils, showing significant overall improvement in their performance (Audit Commission, 2003, 2004). As indicated in the opening headlines, 52 of these councils in 2003-2004 moved up at least one CPA category, with just two moving down. As a result, 41 councils were officially categorised as Excellent and over two-thirds (101) as either Excellent or Good. December 2004 also marked the completion of the first CPAs for English lower-tier authorities – the 238 non-metropolitan/shire district councils – and their results proved noticeably similar to the first assessments of the single-tier and county councils, with 49 per cent in the top two categories: 14 per cent Excellent and 35 per cent Good; 36 per cent were judged Fair, leaving just 12 per cent Weak and 4 per cent (9) Poor.
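The headline proportions quoted above follow directly from the raw counts given in the text, and are easy to verify:

```python
# Arithmetic check of the proportions quoted above (counts taken from the article).
single_tier_total = 150
excellent_or_good_2004 = 101
# "over two-thirds" of the 150 single-tier and county councils
print(round(excellent_or_good_2004 / single_tier_total * 100))  # -> 67

district_total = 238
poor_districts = 9
# "4 per cent (9) Poor" of the 238 district councils
print(round(poor_districts / district_total * 100))  # -> 4
```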

More interventionist than Thatcher

Within a four-year Parliament, therefore, CPA had become a major element in the relationship between the Government and local councils – a remarkable achievement for something that had not been publicly mentioned even in Labour’s June 2001 election manifesto, let alone that of 1997; also for an exercise that, on logistical grounds alone, could simply not be contemplated in almost any other large Western European country.

As shown in Table I, UK local government is organised on a scale several times larger than that of its European neighbours, and it is the resulting modest number of councils – just 388 in England for a population of nearly 50 million – that provides the structural precondition for an intensive external monitoring exercise like CPA. The other preconditions, of course, are the presence of a political culture and a governmental will to make acceptable the idea that central government should concern itself in this degree of directive detail with the activities of democratically elected local councils. Which in turn raises the question of whether it should be seen primarily as a force for localism or as a further turn of the centralist screw in a relationship in which the centre has had the overwhelmingly dominant hand under governments of both major parties in recent years (see Wilson and Game, 2006, esp. ch. 9).



CPA has been presented by the Blair Government as a key contribution to its decentralization policy. The first part of the title of the White Paper in which it was conceived (DTLR, 2001) was “Strong Local Leadership”, and the introductory paragraphs were peppered with assertions about ministers’ commitment to “vibrant local democracy”, and to “removing unnecessary controls which stifle innovation”.

CPA is, though, an unambiguously centrally run, management-focused reform: to quote the Deputy Prime Minister, John Prescott, “one of the most ambitious exercises in performance management ever undertaken by central and local government”. It is thus a direct successor to previous “managerialist” programmes – Compulsory Competitive Tendering (CCT), introduced by the Thatcher/Major Conservative Governments, and its Labour replacement, Best Value (Wilson and Game, 2006, ch. 17). The central direction is the same, concessions to local priorities about as minimal, and the propensity to “take over” – or, in the favoured euphemism, “engage with” – poorly performing authorities enormously greater. Any relaxation of control is confined largely to those authorities judged, by appointed “outside experts”, to be the highest performers, while those at the other end of the scale are subject to a degree of central intervention that Conservative Governments would not have dared.

CPA’s antecedents – CCT and Best Value

Of all the local government “reforms” introduced by the 1979-1997 Conservative administrations, arguably the most far-reaching were those associated with Compulsory Competitive Tendering (CCT). CCT required councils to compare the costs of continuing to provide specified services “in-house” with those of any interested private contractors, and to award the service contract to the most competitive bidder. That meant the lowest bidder, and councils were prohibited from imposing conditions on such issues as trade union rights, employment protection, sickness benefit,

Country           Population    Number of principal             Average population
                  (millions)    local councils                  per council
France                60.7      36,782  Communes                         1,650
Austria                8.2       2,380  Gemeinden                        3,440
Spain                 40.3       8,108  Municipios                       4,970
Germany               82.4      12,434  Gemeinden                        6,630
Italy                 58.1       8,101  Comuni                           7,170
Greece                10.7       1,033  Dimoi, Kinotites                10,360
Norway                 4.6         435  Kommuner                        10,500
Finland                5.2         444  Kunta                           11,710
Belgium               10.4         589  Communes, Gemeenten             17,660
Denmark                5.4         271  Kommuner                        19,930
Sweden                 9.0         290  Kommuner                        31,000
Portugal              10.6         309  Municipios                      34,300
The Netherlands       16.4         467  Gemeenten                       35,120
Ireland                4.0          39  Counties, cities, boroughs     103,000
(England)             49.9         388  Counties, districts, etc.      129,000
UK                    60.4         468  Counties, districts, etc.      129,000

Sources: Wilson and Game, 2006, Exhibit 12.3 – based on The Local Channel, 2005, pp. 4, 7, and individual countries’ sources

Table I. Britain’s large-scale local government



pensions, training, and equal opportunities that might “have the effect of restricting, distorting or preventing competition” (Local Government Act 1988, s.7(7)). Cost was always the ultimate criterion, rather than quality, and this centralist rigidity and the privatising ideology underpinning it caused it, not surprisingly, to be abhorred by all Labour politicians, local and national alike.

The abolition of CCT became a central imperative in Labour’s 1997 manifesto, but the New Labour Government elected under “moderniser” Tony Blair was concerned, proverbially, not to throw the baby out with the bath water. CCT had to go, but ministers were insistent that it be replaced with a regime that, while more concerned with quality and performance improvement, would continue to emphasise economy and efficiency in service delivery. That regime was Best Value (BV) service provision, subject of the Government’s first major local government legislation, the Local Government Act 1999.

Councils were required to produce an annual BV Performance Plan based in part on regular service-specific and cross-cutting reviews, themselves driven by “the four Cs”: each review should challenge the purpose of the service, compare the authority’s performance with others, consult the community, and provide for competition where appropriate. With this information to hand, the council should then select the delivery method that would give “best value” to local people. This BV regime, monitored by external, independent audit and inspection arrangements, should secure continuous improvement in the way the council undertook all its service responsibilities. However, persistent performance failure would be referred to the Minister, with a view to “appropriate intervention” and ultimately the removal of responsibility for the “failing” service from the authority altogether.

Best Value, Mark II – Comprehensive Performance Assessment

Following the Labour Government’s re-election in June 2001, a new team of ministers took over responsibility for local government. Significantly, their first major publication – the December 2001 White Paper, Strong Local Leadership – Quality Public Services – conceded and sought to respond to the criticisms of excessive centralist bureaucracy, micro-management and interventionism that were felt to characterise government policy in general and Best Value in particular:

Over the course of this Parliament we will give councils more space to innovate, to respond in ways that are appropriate to local circumstances, and to provide more effective leadership. We will provide greater freedom for councils to borrow, invest, trade, charge and set spending priorities (paras. 4.6-4.7).

This “space”, however, and the new freedoms would have to be earned – through ministers’ latest managerial invention, Comprehensive Performance Assessment. “High performing” councils would indeed be “rewarded” – with fewer inspections, fewer policy plans to be submitted for ministerial approval, and some loosening of constraints over their current and capital spending. “Poor performers”, by contrast, should expect few such freedoms and a great deal of remedial attention, even if the earlier interventionist language had been subtly modified to that of “relational engagement” (Skelcher et al., 2004, esp. ch. 2).

The radical innovation of CPA was that every council – from Birmingham, with its £2.5 billion budget and responsibility for literally hundreds of different services, to the



smallest district council – would be assigned to one of just five categories. The assessment system would be similar in principle to that used in the Audit Commission’s “stand-alone” Best Value inspections, which scored each inspected service on two four-point scales:

(1) Quality of the service: excellent, good, fair, poor.

(2) Prospects for improvement: yes, probable, unlikely, no.

Best Value, however, operated on a service-by-service basis, whereas CPA would use the five-point scale already noted: Excellent, Good, Fair, Weak, Poor.

Initial misgivings were widespread, and easy to understand. Best Value inspections may have been hugely time-demanding, intrusive, sometimes even confrontational, but at least the inspectors were focussing on one service at a time about which they might be expected to have some specialist knowledge. CPA seemed to demand the impossible: a single snapshot judgement of the performance of the whole authority, on the apparent assumption that that performance will be uniform across its dozens of service areas and thousands of employees. Such an assumption flew in the face of empirical evidence, both from and independent of Best Value assessments, which indicated the very reverse:

There is very little evidence to suggest that levels of performance vary together across services . . . results imply that performance is not driven by the general characteristics of local councils, but by the circumstances, organisation or ethos of specific service departments. It is therefore inappropriate to categorise councils into “high performing” and “low performing” groups across all services (Boyne, 1997, p. 40, cited in Wilson, 2004, p. 66).

The CPA methodology – the performance measurement framework

The Audit Commission, however, was assigned the task of devising a measurement framework that would do precisely that (Audit Commission, 2002, p. 2). Like Best Value, it would measure two key elements of a council’s activities:

(1) Core service performance, which covers six principal service areas, differentially weighted: education, social care for children and adults, housing, the environment, libraries and leisure facilities, and welfare benefits, plus a general assessment of the council’s use of resources. Assessment evidence would include judgments from the Audit Commission’s own and other inspectorates’ performance indicators, and government assessments of council plans. Individual service assessments are combined to provide an overall service score out of 4.

(2) Ability to improve, defined as a council’s ability to lead its community and to improve services. It is a corporate assessment, comprising two elements: a self-assessment of what the council is trying to achieve, its success in delivering those priorities, and its plans for the future, followed by an external assessment carried out by a small team, including an auditor and inspector plus officers and members from “peer” councils. The outcome is a detailed report on the council’s strengths and weaknesses, and a grading, again on a 4-point scale, of its ability to improve.



The core service performance scores and ability judgements are then assembled into a matrix that produces the overall assessment of Excellent, Good, Fair, Weak or Poor.
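The matrix step can be illustrated with a small sketch. To be clear, the band thresholds below are invented for the illustration; the Audit Commission’s actual rules combined the two scores with additional minimum-standard gates not shown here.

```python
# Illustrative sketch of a CPA-style matrix: a service score (out of 4) and an
# ability-to-improve grade (1-4) map to one of five overall categories.
# The thresholds are hypothetical, not the Audit Commission's published rules.
def cpa_category(service_score: float, ability_to_improve: int) -> str:
    combined = service_score + ability_to_improve  # ranges from 2 to 8
    if combined >= 7.5:
        return "Excellent"
    if combined >= 6:
        return "Good"
    if combined >= 4.5:
        return "Fair"
    if combined >= 3:
        return "Weak"
    return "Poor"

print(cpa_category(3.5, 4))  # -> Excellent
print(cpa_category(2.0, 2))  # -> Weak
```

The design point the sketch captures is that corporate capacity counts alongside service delivery, which is how, as in the opening headlines, a weak corporate assessment alone could hold a council back.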

The claim for CPA is that, by pulling together for the first time in a single framework information held by councils themselves, government departments, auditors and inspectors, it provides the most complete picture yet of their current performance, their strengths and weaknesses, and their ability to improve. While emphasising that there is no such entity as a “typical” council, the Audit Commission does try to summarise what a council in each of the five CPA categories might look like, and the areas on which it might need to focus attention in its future “improvement planning” (Audit Commission, 2002, pp. 3-4).

Excellent councils, for example, will have shown overall that they deliver high quality services, especially in national priority areas such as education and social services. They have effective leadership and management arrangements, and are clear about their priorities, which are linked to local needs and aspirations. Their finances are well managed and are directed at key priorities. Excellent councils are good at achieving more for their communities through the delivery of cross-cutting projects, often in partnership with others.

Poor councils, in almost complete contrast, are likely to offer inadequate services and to lack, too, the leadership and managerial capacity to improve them. Performance management is ineffective and resources are not used to their best advantage. Most poor councils are trying to make service improvements, but lack the focus and clarity of priorities to do so effectively. Engagement with local people does not translate into positive changes or better services to the community. Without external support, the efforts that many poor councils are making to improve services for their citizens are unlikely to lead to lasting change.

Reactions to the early rounds of CPAs

If, following this process, your council is categorised as Excellent or Good, your natural instinct is to praise the Audit Commission’s perspicacity for overlooking or under-weighting the deficiencies that even you know exist in certain aspects of your service provision. If, however, you are judged Weak or Poor, you will probably see things rather differently, particularly if you happen to work in a service area previously assigned a Best Value rating of Good or Excellent.

This is the fault line running through the whole CPA process. Everyone who works in or has anything much to do with local government – not least we ourselves as service users – knows that the worst managed councils have their areas of strength and even excellence, just as the very best have weaknesses. Indeed, that is precisely what the last two years of Best Value inspections, prior to the introduction of CPA, had been reporting. Large authorities would have had up to a dozen service areas inspected in that time, with results typically ranging across at least three of the four BV ratings: excellent, good, fair, poor. CPAs focus specifically on “core services” and corporate performance, so no council should have expected its CPA rating to amount to an aggregate of its previous individual service BV scores. Even so, any comprehensive or overall rating of a large, multifunctional, multi-service organisation is bound to conceal almost as much as it reveals.

As already noted, though, much of the potential vehemence of criticism from within local government was undoubtedly lanced by the favourable distribution of the overall



CPA rankings and the largely positive comments that these drew from ministers. In the second round of CPAs for single-tier and county authorities, for example, Good and Excellent councils increased from 51 per cent of the total to nearly 55 per cent, the Poor and Weak fell from 23 per cent to under 19 per cent; 26 councils moved up at least one category, while just nine moved down. Any system that can produce so many more apparent winners than losers will not find itself short of defenders – especially when those winners are fulsomely praised by the Minister: “There have been real improvements in local public services over the past year”, Local Government Minister Nick Raynsford said today. “Today’s results show councils rising to the challenge of delivering better quality public services. We introduced CPA to drive improvement in local government and that is exactly what it is doing. But CPA is not about scoring points for the sake of it. Better public services make a difference to people’s lives. Councils which have raised their CPA performance are delivering real improvements on the ground” (ODPM, 2003).

No, the problem for a system geared to demonstrating continuous improvement and that therefore has to go on producing more winners than losers is that the supply of losers begins quite quickly to run out – which is what manifestly started to happen in the third round of CPAs in December 2004.

As we saw in our opening headlines, more than two-thirds of all England’s major councils were by 2004 either Good or Excellent, the latter having increased by 86 per cent in just two years. Over a third of councils had moved up at least one category and five had moved up two – Weak becoming Good, or Fair becoming Excellent – in the space of 12 months. Poor councils had fallen by 89 per cent in a year and in the whole of England there was now but one left. Such figures inevitably prompt incredulity, but also a number of serious questions:

. Are these high and ever-improving standards reflected in the perceptions of the general public and service users? Indeed, to what extent are local residents and electors even aware of their councils’ CPA ratings?

. What is the evidence of the performance improvement of, for example, the 13 councils rated Poor in December 2002?

. What is the future of a performance measurement exercise in which the overwhelming majority are good or excellent and there remains only a negligible amount of poor or weak performance?

The first of these questions can be quickly answered, at least in general terms. There has, over the three years of CPA’s operation, been a gradually increasing awareness on the part of local media and the public that some new form of council grading system has come into existence, but any correlation between councils’ CPA ratings and the levels of satisfaction in their performance that local residents express in opinion surveys is, at best, weak. There are councils with Fair, Weak or even Poor CPA ratings that are well regarded by their residents, while supposedly better performing councils are significantly less popular.

It would seem that people’s attitudes towards their own councils and towards the performance of local government as a whole are far more influenced by prevailing levels of council tax – the single, property-based, tax that is available to UK local authorities. As it happened – largely because of the impact on council budgets of various central government decisions and policies – council tax in England rose on average by 8.5 per cent in 2002/2003 and by 13.1 per cent in 2003/2004, compared to an average annual increase of 6.3 per cent in the preceding eight years. Over virtually the same time period (2001-2004), the average level of satisfaction with individual councils fell by 10 per cent – at the same time as the CPA exercise was telling those same respondents that local government performance was good and improving. Only in 2004/2005, when the average tax rise fell back to 6.5 per cent, did resident satisfaction return to approximately its 2001 level. Offered the choice of their council achieving a higher CPA ranking or leaving them with more of their own money to spend, there can be little doubt how most residents, if they took the trouble to turn out, would vote.

Recovery of poorly performing councils

Continuous performance improvement, particularly from previously poorly performing local authorities, has been a key Government objective at the heart of both Best Value and CPA. Other organisations have shared this concern – notably the Local Government Association and the Improvement and Development Agency (IDeA). But it has been the Office of the Deputy Prime Minister (ODPM) that has determined the rules of engagement with these authorities and that has commissioned a research programme to track and evaluate their organisational turnaround and recovery (Skelcher et al., 2004; Hughes et al., 2004; Fox, 2003a and 2003b; Boyne et al., 2004; Jas, 2004; Turner and Whiteman, 2005).

In total, 15 councils were classed as “poorly performing” following CPA 2002: the 13 assessed as Poor plus two in the Weak category with the lowest score for their “ability to improve”. By the nature of the CPA scoring system, the 15 were not a homogeneous group: some were particularly poor on either services or ability to improve, while some had been judged poor on both. There were, however, certain “commonalities” behind the different individual stories, which the research team identified as providing a key to understanding the underlying causes of unsatisfactory performance (Hughes et al., 2004, p. 15ff.). These common themes are:

. Ineffective political arrangements – particular political structures and/or behaviours that limit elected members’ capacity to exercise effective leadership and take collective action to address shortcomings.

. Ineffective managerial arrangements – the breakdown of effective managerial leadership as the result of either excessive change or inertia.

. Weaknesses in relationships with the external environment – a failure to engage effectively with local communities, or a disengagement from or resistance to mainstream changes taking place in local government.

. Weaknesses in the authority’s culture – an organisational culture/self-image significantly at odds with the views of external inspectors and other stakeholders.

The mechanisms that contribute to the recovery process of a poorly performing authority take two distinct, if not in practice entirely separate, forms: the regulatory and the developmental. Regulatory mechanisms are the structures and systems put in place by the ODPM, in association with the Audit Commission and other regulators, “to motivate, supervise, guide and assess the recovery of individual councils” (Hughes et al., 2004, p. 31ff.). They include:

. The lead official – usually a former senior local government manager, who has the key role in the day-to-day relationship between the ODPM and the council, assisting the council in formulating its recovery plan, identifying external sources of support, and co-ordinating the Government’s monitoring of the implementation of the plan.

. The Audit Commission relationship manager – has the role of co-ordinating inspection activity for any council (not just those subject to ODPM involvement).

. The Government monitoring board – comprises the above two officials, plus other stakeholders with an interest in the council’s recovery. The board provides the formal mechanism through which the Government assesses the council’s performance recovery, and makes recommendations to the Minister, other government bodies and inspectorates, and the council itself.

. The liaison board – will replace the monitoring board in councils where the Minister has decided that progress has been sufficient to introduce a lower level of oversight. As a reflection of this change in their status, local authorities are members of liaison boards, rather than merely attendees, as at monitoring boards.

Developmental mechanisms are the arrangements put in place by each authority to organise, implement and monitor their recovery. They include recovery plans, political mentors, improvement boards, interim managers, external support, outsourcing, and various other mechanisms (Hughes et al., 2004, c.6).

In terms of the impact on their CPA gradings, the effect of these recovery plans and processes was impressive. All but one of the 15 major councils assessed as “poorly performing” in December 2002 had, by December 2004, improved their grading by at least one category, and the first of these improvers had in fact been “released” from ODPM supervision within 14 months of CPA 2002. On the basis of the albeit early evidence, an ODPM-commissioned monitoring team gave the following cautious endorsement of the Government’s strategy (ODPM, 2004, p. 4):

The approaches being used to secure the recovery of poorly performing councils are producing results in terms of improved performance. The use of “graduated” pressure (related to the situation in each council) and the commitment to non-statutory engagement by ODPM has minimized resistance by authorities to ODPM involvement. This has enabled ODPM and other external stakeholders to develop an approach to engagement that is tailored to the situation in each council, rather than imposing a uniform strategy.

Raising the bar

To minimise resistance by “poorly performing” authorities keen to demonstrate their recovery is one thing. Containing the resistance of authorities who are told at one and the same time that their performance is improving but that their CPA grading has fallen is an altogether more difficult proposition. That, however, was the task the Government and the Audit Commission set for themselves in 2005. For, in order to restore some balance to the five-category system and avoid the derisible situation of almost all councils being officially assessed as Good or Excellent, the Audit Commission, following consultations, sought to turn the whole CPA process into what it termed “A Harder Test” (Audit Commission, 2005a).

The basic framework was retained, but each key element – the service block assessments, the corporate assessment, the categorisation rules – underwent significant, if mainly technical, changes (Game, 2005), with the explicit intention of producing a considerably “more stringent test, with more emphasis on outcomes for local people and on value for money” (Audit Commission, 2005b, p. 11). For outside observers, the most visible change was to the actual categories. There are still five, but the single-word adjectives have been disingenuously transformed into a scale of 4 stars to 0 stars. There was also a significant addition. Alongside their CPA score, all councils would have a direction of travel label – improving strongly, well, adequately, or inadequately – officially to indicate the state of their arrangements to secure continuous improvement, but possibly also, it was suggested, a small sop to those councils finding themselves downgraded in the new and harder test.

It is perhaps to the Government’s credit that the object of all these changes was made quite transparent, and the Audit Commission document referred directly to “raising the bar” by making the CPA test more demanding than it was (Audit Commission, 2005a, p. 12). In truth, though, a more accurate athletics metaphor would be the javelin event, rather than the high jump or pole vault. For this exercise in recalibration was equivalent to the redesigns of the javelin that were undertaken for safety reasons in 1986 for men and 1999 for women. The effects too, in several instances, were similar: top performers, on the basis of official records alone, appearing to have regressed.

Of the 41 top-tier and county councils rated Excellent in 2004, only 25 achieved 4-star ratings in December 2005 (Audit Commission, 2005b, pp. 12-13). The other 16 dropped out of the top category – a fate suffered by just one council over the preceding two years – ten of whom had the small, and possibly confusing, consolation of being able to point to a “strongly improving” or “well improving” direction of travel. Though not as extensive as had been anticipated, this cull of top performers was in itself hardly surprising, being a principal objective of the whole reform exercise. What was unexpected was that as many as 12 of the 16 would be replaced by formerly Good councils raising their performance in the harder test to the extent of attaining 4-star status.

A similar fluidity was seen throughout the rankings. Of the 66 3-star councils, for example, 41 had previously been Good, but, in addition to the 16 former Excellents, 7 had been Fair and 2 Weak. In total, therefore, there were still in 2005/2006, as in our opening headline, over two-thirds of major English councils (70 per cent) officially judged either good or excellent, and a similar, though not completely overlapping, number “improving strongly” or “well”. The early impact of the “Harder Test” would appear to have been to leave the overall profile of performance little changed, while wakening from a three-year somnolence the snakes in the Audit Commission’s snakes and ladders analogy: for the first time, as many authorities are slipping down snakes as are climbing ladders.


Note

1. It should be emphasised that this CPA process of performance management is confined to England – local government now being a devolved responsibility in the rest of the UK. England has a hybrid structure of 388 principal local authorities: 36 metropolitan district councils, 33 London borough councils, and 47 unitary councils – all of which are actually or effectively single-tier or unitary – and 34 county councils, which are the upper tier of a two-tier system, the lower tier of which consists of 238 non-metropolitan/shire district councils.

References

Andrews, R. (2004), “Analysing deprivation and local authority performance: the implications of CPA”, Public Money and Management, Vol. 24 No. 1, pp. 19-26.

Andrews, R., Boyne, G., Law, J. and Walker, R. (2005), “External constraints on local service standards: the case of comprehensive performance assessment in English local government”, Public Administration, Vol. 83 No. 3, pp. 639-56.

Audit Commission (2002), Comprehensive Performance Assessment: Scores and Analysis of Performance, Audit Commission, London.

Audit Commission (2003), Comprehensive Performance Assessment: Scores and Analysis of Performance for Single Tier and County Councils in England, Audit Commission, London.

Audit Commission (2004), Comprehensive Performance Assessment: Scores and Analysis of Performance for Single Tier and County Councils in England, Audit Commission, London.

Audit Commission (2005a), CPA – The Harder Test: Single Tier and County Councils’ Framework for 2005, Audit Commission, London.

Audit Commission (2005b), CPA – The Harder Test: Scores and Analysis of Performance for Single Tier and County Councils, Audit Commission, London.

Boyne, G. (1997), “Comparing the performance of local authorities: an evaluation of the Audit Commission indicators”, Local Government Studies, Vol. 17 No. 4, pp. 17-43.

Boyne, G., Martin, S. and Reid, S. (2004), Learning from the Experience of Recovery – Policy Paper 3: Strategies for Organizational Recovery in Local Government: Retrenchment, Repositioning and Reorganization, Centre for Local and Regional Government Research, Cardiff.

Conservative Party (2004), Local Conservatives Deliver Better Services and Lower Taxes: Manifesto for the English Local Elections, June 2004, Conservative Party, London.

Department for Transport, Local Government and the Regions (DTLR) (2001), Strong Local Leadership – Quality Public Services, DTLR, London.

Fox, P. (2003a), Learning from the Experience of Recovery – Policy Paper 1: Good Practice in Recovery Planning, University of Birmingham, School of Public Policy, Birmingham.

Fox, P. (2003b), Learning from the Experience of Recovery – Policy Paper 2: Recovery in Poorly Performing Councils, University of Birmingham, School of Public Policy, Birmingham.

Game, C. (2005), “Comprehensive performance assessment in English local government: has life on Animal Farm really improved under Napoleon?”, paper delivered at the Conference of the European Group of Public Administration (EGPA), Bern, Switzerland, September 2005, available at: http://soc.kuleuven.be/io/egpa/qual/bern/Game.pdf

Hughes, M., Skelcher, C., Jas, P., Whiteman, P. and Turner, D. (2004), Learning from the Experience of Recovery – Paths to Recovery: Second Annual Report, ODPM, London.


Jas, P. (2004), Learning from the Experience of Recovery – Policy Paper 4: The Role of Interim Management in Local Authorities Recovering from Poor Performance, University of Birmingham, School of Public Policy, Birmingham.

Local Channel (The) (2005), “ICT and e-government development for small first-tier councils in the EU 25”, available at: www.thelocalchannel.co.uk/i2010/Docs/IT2010.pdf

Local Government Association (LGA) (2002), Comprehensive Performance Assessment – FAQs, LGA, London.

Local Government Association (2005), Proposals for Comprehensive Performance Assessment from 2005: LGA Response to the Audit Commission Consultation Paper, LGA, London.

Office of the Deputy Prime Minister (ODPM) (2003), “Performance assessment driving up council standards”, news release 2003/0278, 18 December 2003, ODPM, London.

Office of the Deputy Prime Minister (ODPM) (2004), Learning from the Experience of Recovery – Paths to Recovery: Second Annual Report, Research Summary, ODPM, London.

Pollitt, C. (1989), “Performance indicators in the longer term”, Public Money and Management, Vol. 9 No. 3, pp. 51-5.

Skelcher, C., Hughes, M., Jas, P., Turner, D. and Whiteman, P. (2004), Learning from the Experience of Recovery – Foundations for Recovery: First Annual Report, ODPM, London.

Stewart, J. (2003), Modernising British Local Government: An Assessment of Labour’s Reform Programme, Palgrave Macmillan, Basingstoke.

Travers, T. (2005), “Local and central government”, in Seldon, A. and Kavanagh, D. (Eds), The Blair Effect 2001-5, CUP, Cambridge, pp. 68-93.

Turner, D. and Whiteman, P. (2005), “Learning from the experience of recovery: the turnaround of poorly performing local authorities”, Local Government Studies, Vol. 31 No. 5, pp. 627-54.

Wilson, D. and Game, C. (2006), Local Government in the United Kingdom, 4th ed., Palgrave Macmillan, Basingstoke.

Wilson, J. (2004), “Comprehensive performance assessment – springboard or dead-weight?”, Public Money and Management, Vol. 24 No. 1, pp. 63-8.

About the author

Chris Game is an Honorary Senior Lecturer at the University of Birmingham’s Institute of Local Government Studies (INLOGOV). His specialist interests are in the politics of sub-central government, and he has published extensively on the topics of elections and electoral reform, political parties, councillors and political leadership, inter-governmental relations, and political management. He is joint author of Local Government in the United Kingdom, the 4th edition of which is published in Summer 2006. He can be contacted at: [email protected]
