

Financial Accountability & Management, 11(2), May 1995, 0267-4424

AUDITING PERFORMANCE INDICATORS: THE ROLE OF THE AUDIT COMMISSION IN THE CITIZEN'S CHARTER INITIATIVE

MARY BOWERMAN*

INTRODUCTION

The Citizen's Charter Initiative ('The Citizen's Charter - Raising the Standard', 1991, Cmnd 1599, HMSO) aimed to improve the quality of public services generally and make them more responsive to users. It encompasses a wide range of public services including the Civil Service, nationalised industries, quangos, the NHS and local government. One commitment of the Charter is to use standards of service to consumers as performance indicators, as well as for measuring the effectiveness of the provider. The Local Government Act 1992 provided the statutory provisions under which the Charter Initiative would operate in local government. It charged the Audit Commission (AC) for England and Wales, and the Accounts Commission for Scotland,¹ with developing sets of performance indicators which would:

Facilitate the making of appropriate comparisons (by reference to the criteria of cost, economy, efficiency and effectiveness) between the standard of performance achieved by different authorities and the standards of performance achieved in different years.

The Act also required the AC to publish such information to facilitate comparisons between local authorities and between different financial years. The legislation has placed the AC in the ambiguous and unusual position of defining the information required to fulfil accountability, of overseeing the audit of the information provided and of publishing comparative information. In addition, the AC has chosen to fulfil its duties by providing considerable support and guidance to local authorities. Power (1994, p. 45) notes the increasing dominance of audit in the accountability process by asking: ‘Can we no longer think of accountability without elaborately detailed policing mechanisms?’; and he queries whether the ‘audit explosion’ rests ‘on firm intellectual and practical foundations or . . . [is it] . . . as much a symptom of problems as their cure’. The dominance of audit has, arguably, reached a peak in Britain under the local government Citizen’s Charter Initiative.

* The author is Lecturer in Accounting at the University of Hull. She gratefully acknowledges the helpful comments of participants at the EIASM Workshop on Accounting and Accountability in the New European Public Sector at Edinburgh University, December 1994, and also the comments of the anonymous referee.

Address for correspondence: Mary Bowerman, Lecturer in Accounting, Department of Accounting and Finance, University of Hull, Hull HU6 7RX, UK.



This paper examines new work for the AC and local authority auditors implied by the Citizen's Charter indicators, and assesses this against the literature on performance measurement. The paper is structured in two main sections, the first dealing with the theoretical and conceptual issues of the potential influence of accounting information and audit on political decision making and with the difficulties associated with performance measurement. The second section discusses how the AC and the auditors have interpreted their roles. This research does not criticise performance audit per se: the importance of performance audit should not be understated; it offers considerable benefits to decision makers and to the public (Bowerman, 1993). What is questioned is the appropriateness and the extent of audit involvement in this particular case.

ACCOUNTING INFORMATION AND AUDIT

Effect on Political Accountability and Democracy

The Citizen’s Charter Initiative has increased the role of audit and thereby its potential to control and influence the local government policy agenda; this is partly dependent on the way in which performance indicators may be used. This section explores the literature which suggests that dominance of accounting information can displace political accountability and democracy, and also considers the literature on performance measurement and interpretation.

During the last decade, accounting in the public sector has moved from a subordinate service role to a dominating agenda-setting role with increasing emphasis on the measurability of activities, and has fostered a belief that accounting can provide reliable, technical tools to assist public sector managers. Hopwood (1985), Day and Klein (1987), Miller and Power (1992) and Power (1994) have all argued that the reliance on technical accountability, which the use of accounting tools engenders, in turn leads to a gradual displacement of political accountability. Rose (1991) comments on the importance of 'numbers' and quantification in politics: 'Paradoxically, in the same process in which numbers achieve a privileged status in political decisions, they simultaneously promise a "de-politicisation" of politics by re-drawing the boundaries between politics and objectivity, purporting to act as automatic technical mechanisms for making judgements, prioritising problems and allocating scarce resources'. Similarly, Miller and Power (1992, p. 250) contend that: '. . . "effectiveness" issues, such as those involving health care and which involve "high" politics, become increasingly reassigned to new arenas - a "low" politics of technical regulation'.

While the connection between 'accounting numbers' and political control has been tentatively explored (Rose, 1991), the potential role of the auditor in legitimising the numbers, and thereby the use of those numbers, has been neglected. While some writers have discussed the auditor's role in examining policy (Geist and Mizrahi, 1991), it is often with the implicit assumption that the auditor is capable of remaining aloof from politics (this is also the perception of the auditors themselves, e.g. Pendlebury and Shreim, 1990; and Dewar, 1986). Hamburger (1989) argues that an important and influential strand of literature has presented performance audit as a neutral, rational discipline. An alternative view is that, like accounting, audit can be a force for change and control in organisations. Henkel (1991) has observed that the work of the AC has been part of this process and has led to an erosion of the distinction between policy and management. The tendency of accounting information, such as performance indicators, to lead to 'technical regulation' suggests that the Citizen's Charter performance indicators could have a detrimental effect on the democratic process. Power (1994) asks if, instead of the citizen's charters heralding a new era of popular governance driven by participation and dialogue, they might 'effectively become an "Auditors' Charter", a symbol of the failure of democracy and empowerment rather than its cure'.

The potential of audit to shape and control organisations is recognised (McEldowney, 1993; and Power, 1994). Also, Midwinter (1994) points out the paradox that auditors may not question policy and yet they have a statutory obligation to prepare sources of data for policy analysis. Indeed, McSweeney (1988) has argued that even in its early years the AC was a major force in changing the culture of the public sector. The AC was formed (McEldowney, 1993) with the explicit remit to change the management culture in local authorities, and the AC (1993c) acknowledges its role as 'a driving force in the improvement of public services'. The Commission also sets great store by its independence from both local authorities and central government (Donaldson, 1993; and AC, 1993b); however, Henkel (1991, p. 205) traces the Commission's 'public persona' from initial value neutrality to, by 1988, 'wholehearted endorsement of [government] policies'. The change was particularly marked in the AC's reports which advise local authorities to embrace the government's policies on competitive tendering. The perceived neutrality and independence of the AC (Henkel, 1991, p. 222) and the appointed auditors is likely to increase the perception that league tables are a reliable basis on which to judge and to make decisions about local authorities. There is a danger that the AC becomes the final 'technical expert' in deciding 'good management practice' and as such may assist in keeping politics 'at the door' of local authorities (Hopwood, 1985). This raises the question of the appropriateness of audit involvement. The Audit Commission's position as independent from central government could be compromised by the way the league tables are used; for example, if they were used to justify cuts in funding to some local authorities.



How Will Performance Indicators be Used?

There is no general agreement over how far it is possible to measure performance and whether the process will result in improved performance and accountability. For example, Roberts (1990) points out that there is no consensus as to what performance indicators are intended to do. Also, Henkel (1991, p. 193) argues that most public sector performance indicators were 'originally intended to give central government an overview of local performance, which might be used for policy and resource allocation decisions; they came to be seen equally as tools for local management. The [Audit] Commission, in line with its independent political status, chose to concentrate on the latter function'. Carter et al. (1992, p. 169) suggest that performance indicator systems can respond to a variety of political concerns and stress that they are not neutral technical exercises. In this regard, Klein and Carter (1988) make a useful distinction in the use of performance indicators, categorising them as either 'dials' (giving an accurate reading of 'good' or 'bad' performance) or 'tin openers' (which suggest the need for further investigation). They explore the paradox between different uses: 'Implicit in the use of performance indicators as dials is the assumption that the standards of performance are unambiguous; implicit in the use of performance indicators as tin openers is the assumption that performance is a contestable notion' (p. 14). The AC in its early advice (AC, 1986) appeared to view performance indicators as 'tin openers', stressing that few indicators could be seen as absolute measures of performance and that quality might not be quantifiable. In practice, some indicators have been used as 'dials' and have been used to assess performance related pay and to allocate resources, notably in the case of higher education research indicators (Cave and Hanney, 1990). Laughlin (1992, p. 15) cautions against ranking performance data and suggests that it 'is but a small step to linking these "league tables" to formula funding resulting in the return to the payment by results system'. In practice, the Further Education Funding Council has introduced a payment by results system (Public Finance, 12 May, 1994) and NHS league tables (NHS Executive, 1994) are being considered as a basis for local performance related pay schemes. Comparative performance indicators may also lead to pressure for politicians and managers to commit more resources to poor performing services and areas in an attempt to improve matters (Smith, 1992).
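The 'dials' versus 'tin openers' distinction can be made concrete. The following minimal sketch is purely illustrative and is not drawn from the AC's practice: the authority names, figures and the one-standard-deviation rule are all hypothetical. It contrasts the two readings of the same indicator - ranking the figure directly versus treating an unusual figure only as a prompt for investigation:

```python
# Illustrative sketch: one indicator read as a 'dial' versus as a 'tin opener'.
# All authority names, figures and the outlier rule are hypothetical.

from statistics import mean, stdev

# Hypothetical indicator: cost per household of refuse collection (pounds).
cost_per_household = {
    "Authority A": 38.0,
    "Authority B": 41.5,
    "Authority C": 55.0,
    "Authority D": 40.0,
}

# 'Dial' reading: rank authorities as if the figure alone measured performance.
league_table = sorted(cost_per_household.items(), key=lambda kv: kv[1])
for rank, (authority, cost) in enumerate(league_table, start=1):
    print(f"{rank}. {authority}: {cost:.2f} pounds per household")

# 'Tin opener' reading: flag unusual values for further investigation, since
# high cost may reflect rurality, service levels or accounting methods.
avg = mean(cost_per_household.values())
sd = stdev(cost_per_household.values())
for authority, cost in cost_per_household.items():
    if abs(cost - avg) > sd:
        print(f"{authority}: more than one standard deviation from the "
              f"average - investigate before judging")
```

The point of the sketch is that the 'dial' reading presupposes that the ranking is itself meaningful, whereas the 'tin opener' reading treats a deviant figure only as the start of an enquiry.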

Interpretation

Several writers have cautioned on the need for sensitive interpretation of performance indicators and warned of a tendency towards dysfunctional results (Cave, Kogan and Smith, 1990; Smith, 1988 and 1992; Jowett and Rothwell, 1988; Mayston, 1985; Midwinter, 1994; and Stewart and Walsh, 1994). Smith (1988) warns that performance can vary due to factors other than efficiency, such as different objectives, needs, modes of service delivery or accounting methods.

Several other factors may mean it will be difficult to use performance indicators for evaluation purposes. Firstly, there is the risk that a performance indicator system may not achieve the intended effect. Objectives and reported performance may be manipulated and distorted. As Gray and Jenkins (1993) comment: 'the systems become part of the manager's incentive structure and are managed accordingly. This may lead to emphasis by such managers on reporting acceptable performance rather than changing substantive activity; this also encourages managers to manipulate information systems and to redefine organisational politics'.

Secondly, there is no indication of who owns the performance (Klein and Carter, 1988, p. 16) reflected by the indicators. The AC’s list makes no attempt to distinguish which indicators apply to managers, to councillors, to contractors or to central government. Common et al. (1992, p. 78) are concerned that this, combined with confusion about the role of the recipients of performance indicators as consumers or citizens, may reduce the accountability of politicians.

Thirdly, there is no guidance as to the precise status or weight of each standard; does an over-achievement of one target compensate for the under-achievement of others (Pendlebury et al., 1994, p. 45)? In addition, the consequences of breaching a standard have not been indicated (Pollitt, 1994). The citizen's only right is to information, not action or compensation. Stewart (1984, p. 26) claims that 'while information is of critical importance, it does not constitute the whole of accountability . . . [it] . . . is not in itself the holding to account - more is required'.

These difficulties, combined with the apathy of the public towards local government affairs (e.g. as exhibited in low voting levels in local elections), give rise to the possibility that the performance indicators will make little impact. Day and Klein (1987, p. 234), Smith (1988) and Humphrey et al. (1993) have all commented on the absence of real, practical effects brought about by performance indicators, a view which has been borne out by the lack of public response to local authority league tables published so far.

This section has suggested that the task of reflecting performance is fraught with difficulty. There is a danger that the perceived independence of the AC could give respectability to dubious performance indicators which could affect the political agenda in local government and could, ultimately, be used by central government to exercise greater control over local authorities.

THE CITIZEN’S CHARTER PERFORMANCE INDICATORS

This section examines each of the new roles for the AC and auditors: in defining the indicators; in publishing comparative information; in providing support and guidance; and in auditing the data. In so doing it explores the extent to which the problems surrounding performance measurement and audit domination have been incorporated in the practical approach to implementing the Citizen's Charter indicators.

Defining the Indicators

The major new role for the AC has been to select those indicators which would best demonstrate performance standards; to allow comparisons the indicators had to be uniform across all local authorities. The AC carried out research into citizens' interests and consulted widely with consumer groups and service providers before producing its initial list of indicators in September 1992. The definitive list was published in December 1992 (AC, 1992) containing 77 separate indicators. Further indicators, covering additional services, were added in December 1993 (AC, 1993a), bringing the total to over 90 indicators and requiring answers to 275 questions. The list of indicators (AC, 1993a) covers 17 main categories such as 'dealing with the public', 'waste disposal' and 'provision of an education service'.² The majority of indicators deal with activity or cost. Others require details of procedures (e.g. for dealing with complaints) and only a few require performance to be compared to a predetermined target.

The duty of specifying information to be reported and audited marks a major change in the AC's role. Audit theory and practice makes a tacit assumption that accounting standards and disclosure requirements are separate from the audit process.³ While there may be some merit in involving the auditor in specifying the form of account, the benefits and limitations of such an approach do not appear to have been considered in giving the AC its Citizen's Charter powers. The AC is, of course, not one and the same as the auditors, so it could be argued that a distinction does exist between standard setter and auditor; however, the relationship between the AC and the local authority auditors means that the audit process and the AC are inextricably linked.

The AC’s role in specifying indicators also appears to run contrary to the new public management ethos of devolving responsibility and reducing the need for uniform central controls. This ethos would suggest that individual local authorities or service providers should develop their own indicators (as is the case in New Zealand and Canada (Bowerman, 1994), and as proposed for UK central government (Treasury, 1994)); although it is unlikely that British legislation would permit this as it intends performance indicators to be comparable. This involvement reinforces concerns already raised about the interventionism of audit; as Power (1994, p. 33) comments:

If unambiguous standards of auditee performance can be established, then audit simply verifies compliance with such standards . . . From this point of view, the standards of auditee performance are independent of the audit process. Yet the opposite is often the case. Audits do as much to construct definitions of quality and performance as to monitor them.



Such concerns are reflected in a Local Government Management Board (1992) report, which argues that the Citizen’s Charter ‘touches fundamentally on the role of local democracy and representation . . . it can lead to detracting from democratic processes if seen in isolation. It can take away the public accountability of local councillors . . . for determining appropriate levels and standards of service, and for failures to rectify problems by actually constricting citizen’s rights to narrowly defined complaints and redress procedures, removing their more “civic” role’.

Cooper (1993, p. 154) sees this as an intended effect of the charter, describing it as ‘a highly political attempt to “de-politicise” social policy. Rather than acknowledge the controversial, highly contested nature of government “reforms”, such changes are normalised and presented as unproblematic developments’. In this way the development of performance indicators for the public sector shares many characteristics with standard costing, including the tendency to ‘normalise performance’ (Roberts and Scapens, 1990, p. 117). Miller and O’Leary (1987, p. 245) implicate standard costing with a ‘politics of efficiency’: ‘Scientific wisdom was used to advance the cause of “good government”, whether at the level of the municipality or the factory. “Democracy” was to mean government for the people, based increasingly on questions of fact, a partnership between experts and the citizen which was essential to good government’.

The Citizen's Charter indicators give the AC the power to define 'good' performance, albeit after consultation, making the AC the final arbiter of 'good performance' in local government. For example, pre-school education is a discretionary service, a matter of local authority policy; its inclusion as one of the AC's performance indicators implies that such provision is 'good', while ignoring policy decisions about alternative uses of resources.

Publishing Comparative Information

The second statutory task for the AC is to compare the different levels of performance achieved by different authorities and to publish a 'league table' of comparative performance. This will follow the publication of details by individual local authorities in a local newspaper by 31 December of the relevant year. The AC is still considering the best way to group authorities to facilitate comparison. There is concern that, if published in a league table format, conclusions will be drawn without taking account of explanations and local factors; the AC has promised that it will seek to minimise such dangers. Local authorities are also suspicious (Local Government Management Board, 1992) about how the indicators will ultimately be interpreted and used. In particular, there is concern that they may be used to set local authority spending levels. It is clear that the Commission is seeking convergence towards best practice (AC, 1993b, p. 4) and many local authorities are suspicious that national standards would be imposed upon them. Decisions about which services to include in the list and how performance would be compared nationally could impose arbitrary national priorities and standards. The AC (1992) has tried to reassure authorities that their report on indicators will not imply that any particular level of service is appropriate.

However, Carter (1989) argues that performance indicators are ideal instruments for exercising central control, that they enable central government to be the ‘back seat driver’ of public services. Opportunities for ‘back seat driving’ will increase considerably when central government has access to comparative audited data on a whole range of local authority activities. It could, for example, use such data to allocate funding or decide ‘appropriate’ service levels. The fact that the AC has designed/arranged the audit and published this data could affect the AC’s reputation as independent from central government. At the same time, the absence of clear ownership of performance creates the impression that poor performance is the responsibility of council members and officers rather than central government.

It is likely to prove difficult to present the information in a meaningful way. Midwinter (1994, p. 41) claims that 'the basis for sensible and equitable comparison between authorities does not yet exist, and therefore the statutory requirement to undertake comparison on the basis of limited information ought to be withdrawn'. The legislation seems to envisage an evaluation role for the AC which can be likened to that played by an investment analyst in the private sector. However, private sector companies are being urged to move from reliance on simple but uninformative measures, such as earnings per share, and directors are encouraged to produce a more useful operational and financial review. It is ironic that the emphasis on league tables appears to be pushing the public sector in the reverse direction.

At present the indicators deal mainly with activity levels, with 'how much?' rather than 'how well?'. The AC acknowledged (1992) that there are difficulties in dealing with these issues and promised to undertake further research. But even with additional information there may be problems in evaluating performance. Klein and Carter (1988, p. 17) claim that '. . . quality is technically extremely difficult to measure in complex services, which is why traditionally reliance has been put on inspectorates rather than statistics'. Taking this to its logical conclusion, it can be argued that performance can never be completely defined and the criteria by which it is judged can never be finally established (see Stewart and Walsh, 1994).

Providing Support and Guidance

The AC's third task is non-statutory, but may be seen as a constructive way to support the implementation of the Citizen's Charter indicators. The AC produced a handbook to explain what factors should be taken into account when reporting each indicator and has sent regular newsletters to local authorities keeping them informed of developments. It established a telephone helpline to deal with local authorities' queries on definitions and to advise on the establishment of data collection systems. It developed a good practice guide on the best ways of publishing the information locally. In addition to specified indicators, the AC encouraged local authorities to undertake customer surveys, and intends to develop sets of questions using a standard specification so that survey results will be comparable.

Table 1

Citizen's Charter Performance Indicators - Functions of the Audit Commission

Function                                 Comparable to
1. Defining information requirements    Standard setter
2. Publishing league tables             Information provider and evaluator
3. Providing advice and support         Consultant

In this role the AC can be likened to a consultant and, taken in conjunction with the other Citizen’s Charter duties of standard setter and evaluator, this may represent an inadequate separation of duties and could give rise to potential role conflicts (Table 1). This conflict should be acknowledged. The new responsibilities add to the Commission’s already wide span of control and add further confusion to its role in relation to the local authority auditors and to the local government stakeholders.

Auditing the Data

The fourth duty applies not to the AC but to the local authority auditors appointed by the Commission. This constitutes an expanded role for UK public sector auditors: that of attesting non-financial information. They are required to certify the information as genuine by reviewing the systems in place to collect the data. This does not involve verification of the data itself. The role of the auditors in checking the reliability of non-financial information is in some ways similar to the audit of annual financial accounts, but the audit concentrates on systems for collection of statistics and not the view given by the statistics.
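The division of labour described here - certifying the collection systems while leaving the figures themselves unverified - can be illustrated schematically. The sketch below is a hypothetical illustration, not the auditors' actual work programme; the record structure and the two checks (documentation and reconciliation) are assumptions introduced for the example:

```python
# Illustrative sketch of a systems-level review, as opposed to verifying figures.
# The record structure and checks are hypothetical, not the auditors' actual tests.

from dataclasses import dataclass
from typing import List

@dataclass
class IndicatorReturn:
    indicator: str      # e.g. 'per cent of council tax collected'
    value: float        # the published figure (never re-verified here)
    documented: bool    # is the compilation method documented?
    reconciled: bool    # does the system reconcile to underlying records?

def systems_review(returns: List[IndicatorReturn]) -> List[str]:
    """Report weaknesses in the collection systems, not in the figures."""
    findings = []
    for r in returns:
        if not r.documented:
            findings.append(f"{r.indicator}: compilation method undocumented")
        if not r.reconciled:
            findings.append(f"{r.indicator}: no reconciliation to source records")
    return findings

# A hypothetical return passes the review even though the review says nothing
# about whether 97.2 fairly reflects the authority's performance.
example = [IndicatorReturn("per cent of council tax collected", 97.2, True, True)]
print(systems_review(example))  # -> [] : systems adequate, figures unexamined
```

A clean report from such a review supports the reliability of the systems but, as the text notes, says nothing about the view of performance the resulting statistics present.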

The role of the auditor is also changed because the auditor’s conclusions about the reliability of data systems are to be presented directly to the local public instead of to council members, as is the case for financial information. In addition, information derived from each local authority will be compiled by the AC into national league tables; this gives the auditor a duty to a wider body of stakeholders than the primary client group.

This section suggests that the AC has been unable to avoid the problems discussed in the previous section. Despite the large number of indicators, they do not adequately reflect performance, in particular effectiveness. Nor has the AC been able to avoid criticism of encroaching on the political decision-making process through the inclusion of discretionary services in the list of indicators. Problems specific to the audit approach have also been highlighted - the fact that the audit relates only to the data systems for each indicator and not to the overall view of performance. The extra Citizen's Charter responsibilities may also serve to make the AC even more cumbersome and all embracing.

CONCLUSION

This research has examined the work of the AC and auditors in relation to the Citizen's Charter indicators and explored the difficulties which they are likely to encounter. Much of this analysis is by necessity conjectural, as most local authorities' indicators and the comparative analysis have not yet been published. Further research is required and could include the use of, and response to, the indicators as well as statistical analysis of the published performance indicators on a cross-sectional and trend analysis basis. There is also a need to compare the AC's approach and outcomes with those of the Scottish Accounts Commission and of the National Audit Office in relation to central government.

The conclusions from this paper give an indication of a number of significant problems with the initiative. The first is that the objectives for reporting performance in this way have not been clearly defined and appear to be moving the public sector in the reverse direction from the private sector. Furthermore, it requires the AC to achieve the difficult, if not impossible, task of designing indicators to encapsulate local authority performance. These points suggest that the initiative does not rest on firm intellectual foundations.

The initiative carries risks for both the AC and local authorities. The local authorities become prone to having political debate replaced with the practice of 'technical managerialism' as decreed by the AC and to being held accountable for aspects of performance which are outside their control. The AC, as the main architect of the indicators but unsure of how central government will use them, risks taking the blame for any unpopular consequences. The AC also risks becoming unwieldy and unfocused as its sphere of activities widens with insufficient separation of the roles of standard setter, evaluator and consultant. The pure audit aspects of the initiative are fairly limited, concentrating on the systems for collecting the data rather than the view of performance presented by the indicators.

The Citizen’s Charter initiative presented an opportunity to reappraise the role of public audit; to match audit activity to the changing accountability process throughout the public sector and to import new approaches. This opportunity has not been seized. The concerns expressed above suggest that it may have been premature or even inappropriate to involve the external audit process in the design and reporting of performance.



NOTES

1. The two Commissions have interpreted their role in different ways and this paper will refer mainly to the role of the Audit Commission for England and Wales. See Midwinter (1994) for a discussion of performance indicators for local government in Scotland.

2. Selected examples of performance indicators specified by the Audit Commission:

Dealing with the public: time scales within which to answer telephone calls and reply to letters; details of complaints procedures (targets set by local authorities and compared to actual performance).

Provision of housing accommodation: details of housing stock; time taken to deal with repairs; rent collection/debt statistics; breakdown of average rent into costs; numbers homeless and in hostels and bed and breakfast accommodation.

Refuse collection: details of type of service provided; reliability of household waste collection service; cost per household.

Waste disposal: per cent of waste recycled; cost per tonne.

Control over development: time taken to deal with planning applications; number of appeals.

Housing and Council Tax benefits: time taken to process claims/make payments; cost of administration.

Collection of Council Tax: per cent collected.

Provision of an education service: per cent of 3- and 4-year-olds in pre-school education; per cent of unfilled places.

The maintenance of an adequate and efficient police service: the number of 999 calls received and answered within target time; violent crimes per 1,000 population; burglaries per 1,000 dwellings; crimes detected per officer; and per cent of time spent with the public.

3. For example, in the UK the Accounting Standards Board is distinct from the Auditing Practices Board; the Treasury, not the National Audit Office, recommends the form of central government disclosures.

REFERENCES

Audit Commission (1986), Performance Review in Local Government - A Handbook for Auditors and Local Authorities (HMSO, London).

_____ (1992), Citizen's Charter Indicators: Charting a Course (AC, London).

_____ (1993a), The Publication of Information (Standards of Performance) Direction 1993 (AC, London).

_____ (1993b), Annual Report 1993 (AC, London).

_____ (1993c), Adding Value - Strategy 1993 (AC, London).

Bowerman, M. (1993), 'Who Audits the Auditors?', Financial Times (20 May).

_____ (1994), 'Auditing Performance Indicators - The Role of the Audit Commission in the Citizen's Charter Initiative', EIASM Workshop on Accounting and Accountability in the New European Public Sector (Edinburgh, 12-14 December).

Carter, N. (1989), 'Performance Indicators: "Back Seat Driving" or "Hands-off Control"?', Policy and Politics, Vol. 17, No. 2, pp. 131-138.

_____, P. Day and R. Klein (1992), How Organisations Measure Their Success - The Use of Performance Indicators in Government (Routledge, London).

Cave, M. and S. Hanney (1990), 'Performance Indicators for Higher Education and Research', in M. Cave, M. Kogan and R. Smith (eds.), Output and Performance Measurement in Government - The State of the Art (Jessica Kingsley Publishers, London), pp. 59-85.

Cave, M., M. Kogan and R. Smith (1990), Output and Performance Measurement in Government - The State of the Art (Jessica Kingsley Publishers, London).

Common, R., N. Flynn and E. Mellon (1992), Managing Public Services: Competition and Decentralisation (Butterworth Heinemann).

Cooper, D. (1993), 'The Citizen's Charter and Radical Democracy', Social and Legal Studies, Vol. 2, No. 2 (Sage Publications, June), pp. 149-172.

Day, P. and R. Klein (1987), Accountabilities: Five Public Services (Tavistock, London).

Dewar, D. (1986), 'The Auditor General and the Examination of Policy', International Journal of Government Auditing, Vol. 13, No. 2 (April), pp. 14-16.

Donaldson, L. (1993), 'A State of Independence', The Independent (16 December), p. 32.

Geist, B. and N. Mizrahi (1991), 'State Audit: Principles and Concepts', in A. Friedberg, B. Geist, N. Mizrahi and I. Sharkansky (eds.), State Audit and Accountability: A Book of Readings (State of Israel, State Comptroller's Office, Jerusalem), pp. 16-41.

Gray, A. and B. Jenkins (1993), 'Codes of Accountability in the New Public Sector', Accounting, Auditing and Accountability Journal, Vol. 6, No. 3, pp. 52-67.

Hamburger, P. (1989), 'Efficiency Auditing by the Australian Audit Office - Reform and Reaction Under Three Auditors-General', Accounting, Auditing and Accountability Journal, Vol. 2, No. 3.

Henkel, M. (1991), Government, Evaluation and Change (Jessica Kingsley Publishers, London).

Hopwood, A.G. (1985), 'Accounting and the Domain of the Public: Some Observations on Current Developments', the Price Waterhouse public lecture on accounting (University of Leeds), reprinted in J. Guthrie, L. Parker and D. Shand (eds.), The Public Sector - Contemporary Readings in Accounting and Auditing (Harcourt Brace Jovanovich Publishers, Australia).

Humphrey, C., P. Miller and R.W. Scapens (1993), 'Accounting, Accountability and the New UK Public Sector', Accounting, Auditing and Accountability Journal, Vol. 6, No. 3, pp. 7-29.

Jowett, P. and M. Rothwell (1988), Performance Indicators in the Public Sector (Macmillan, London).

Klein, R. and N. Carter (1988), 'Performance Measurement: A Review of Concepts and Issues', in D. Beeton (ed.), Performance Measurement: Getting the Concepts Right (Public Finance Foundation, London), pp. 5-20.

Laughlin, R. (1992), 'Accounting Control and Controlling Accounting: The Battle for the Public Sector?', University of Sheffield Management School Discussion Paper No. 92.29.

Local Government Management Board (1992), Citizens and Local Democracy: Charting a New Relationship (LGMB, Luton).

McEldowney, J.F. (1993), Contract Compliance and Public Audit as Regulatory Strategies in the Public Sector, a paper presented at the Citizen's Charter Conference, Legal Research Institute (University of Warwick, 23 September).

McSweeney, B. (1988), 'Accounting for the Audit Commission', Political Quarterly, Vol. 59, No. 1 (January-March), pp. 28-44.

Mayston, D. (1985), 'Non-Profit Performance Indicators in the Public Sector', Financial Accountability & Management, Vol. 1, No. 1, pp. 51-74.

Midwinter, A. (1994), 'Developing Performance Indicators for Local Government: The Scottish Experience', Public Money and Management, Vol. 14, No. 2 (April-June), pp. 37-43.

Miller, P. and T. O'Leary (1987), 'Accounting and the Construction of the Governable Person', Accounting, Organisations and Society, Vol. 12, No. 3, pp. 235-265.

_____ and M. Power (1992), 'Accounting, Law and Economic Calculation', in M. Bromwich and A. Hopwood (eds.), Accounting and the Law (Prentice Hall and ICAEW), pp. 230-253.

NHS Executive (1994), The Patient's Charter: Hospital and Ambulance Service Comparative Performance Guide 1993-94 (Department of Health).

Pendlebury, M. and O.S. Shreim (1990), 'UK Auditors' Attitudes to Effectiveness Auditing', Financial Accountability & Management, Vol. 6, No. 3, pp. 177-189.

_____, R. Jones and Y. Karbhari (1994), 'Developments in the Accountability and Financial Reporting Practices of Executive Agencies', Financial Accountability & Management, Vol. 10, No. 1 (February), pp. 33-46.

Pollitt, C. (1994), 'The Citizen's Charter: A Preliminary Analysis', Public Money and Management, Vol. 14, No. 2 (April-June), pp. 9-14.

Power, M. (1994), The Audit Explosion, DEMOS Paper No. 7.

Roberts, H. (1990), 'Performance and Outcome Measures in the Health Service', in M. Cave, M. Kogan and R. Smith (eds.), Output and Performance Measurement in Government: The State of the Art (Jessica Kingsley Publishers, London), Ch. 6, pp. 86-105.

Roberts, J. and R. Scapens (1990), 'Accounting as a Discipline', in Cooper and Hopper (eds.), Critical Accounts (Macmillan, London), pp. 107-125.

Rose, N. (1991), 'Governing by Numbers: Figuring out Democracy', Accounting, Organisations and Society, Vol. 16, No. 7, pp. 673-692.

Smith, P. (1988), 'Assessing Competition Among Local Authorities in England and Wales', Financial Accountability & Management, Vol. 4, No. 3 (Autumn), pp. 235-252.

_____ (1992), 'Negative Political Feedback: An Examination of the Problem of Modelling Political Responses in Public Sector Effectiveness Auditing', Accounting, Auditing and Accountability Journal, Vol. 5, No. 1, pp. 5-20.

Stewart, J. (1984), 'The Role of Information in Public Accountability', in Hopwood and Tomkins (eds.), Issues in Public Sector Accounting (Philip Allan, London).

_____ and K. Walsh (1994), 'Performance Measurement: When Performance Can Never Be Finally Defined', Public Money and Management, Vol. 14, No. 2 (April-June), pp. 45-49.

Treasury (1994), Better Accounting for the Taxpayer's Money: Resource Accounting and Budgeting in Government, Cm 2626 (HMSO, London).
