

M. Kompf, P. M. Denicolo (Eds.), Critical Issues in Higher Education, 23–13. © 2013 Sense Publishers. All rights reserved.

GILL CLARKE

3. DEVELOPMENTS IN DOCTORAL ASSESSMENT IN THE UK

ABSTRACT

This chapter provides an overview of doctoral assessment in the UK in the 21st century and makes some international comparisons. It covers mainly the final assessment although also touches on progress and review. The chapter suggests some of the influences that have had an impact on doctoral assessment and the standards of research degrees as set out in some of the key regulations and guidance that exist for doctoral education in the UK and elsewhere. It covers the extent to which consistency of the doctoral assessment process can be expected, given the diversity and individuality of doctoral degrees, the way in which diversification of doctoral degree structure and content has affected assessment, and emphasises the common expectations of doctoral outcomes, irrespective of qualification and form of output (thesis, portfolio, artefact, etc.). It affirms that the principal method of doctoral assessment in the UK remains the thorough evaluation of the candidate’s ‘output’ (thesis/dissertation, artefact, or portfolio), by two or more examiners (normally three if the candidate is a member of academic staff at the institution where s/he is studying), and an oral examination commonly called the viva voce, or viva, during which the examiners question the candidate about his/her thesis. The chapter also addresses the use of assessment criteria in doctoral examining and the ways in which institutions have adopted additional criteria to reflect subject needs. It explores in detail the guidance behind the two indicators about assessment in the UK Quality Code for Higher Education, Chapter B11: Research degrees.1 The conclusions point to some positive developments in UK doctoral assessment while acknowledging that the fundamental process and purpose of the final assessment of doctoral candidates has not changed.

INTRODUCTION

The aim of the author in this chapter is to provide an overview of doctoral assessment in the UK in the 21st century, with reference to UK and European regulatory and guidance frameworks and making a few international comparisons for wider context. The chapter is mainly about the final assessment of the doctorate, rather than the assessment that occurs during the programme, such as the annual review of progress or transfer of status that commonly occurs to mark


different stages of the candidate’s development. However, progress and review arrangements are also touched upon for completeness. Assessment is at the heart of doctoral degree standards, and the doctoral assessment or examination is the point at which the candidate’s achievements and research-relevant attributes are tested. The final assessment takes account of both the candidate’s output (thesis or equivalent) and outcomes (personal skills and abilities). In the last decade, the UK doctorate and how it is assessed have come under scrutiny by governments, funding bodies and other higher education (HE) sector organisations. There are several possible reasons for this, including:

1. the growth in student numbers generally in UK higher education (including doctoral candidates)2 and in graduates entering the job market, and the related interest of politicians, economists, employers, the media and HE sector bodies in the quality and academic standards of degrees and graduates. Initially undergraduate and master’s level programmes were the focus of attention; interest in the doctorate and its outcomes intensified in the early years of the 21st century;

2. the increasing ‘globalisation’ of higher education, especially research degrees, rising student expectations and higher levels of student mobility;

3. linked with globalisation, the need for the UK to remain competitive internationally in recruiting to doctoral programmes (Kemp et al., 2008).

This growing interest in the doctorate has led to greater regulation of doctoral degrees by influential higher education sector bodies such as the funding councils (Metcalfe et al., 2002; Higher Education Funding Council for England (HEFCE), 2003a, 2003b) and the Quality Assurance Agency (QAA), and to the related strengthening of university policy and guidance. In the UK these moves began as early as the 1980s (Committee of Vice-Chancellors and Principals (CVCP), 1988); they have also occurred in continental Europe and beyond, and have led to changes in the structure and organisation, the assessment and the outcomes of UK doctoral degrees. The assessment of doctoral degrees features either explicitly or implicitly in most of the policy developed during this 20-year period, but the most important UK reference points for the purpose of this chapter are: Chapter B11: Research degrees of the UK Quality Code (QAA, 2012: indicators 16–17; previously section 1 of the QAA Code of Practice (QAA, 2004: precepts 22–24)); the Framework for higher education qualifications in England, Wales and Northern Ireland (FHEQ) (QAA, 2008: 23–25; 39); and the Framework for qualifications of higher education institutions in Scotland (QAA, 2001: doctoral qualification descriptor). These documents are relevant to the assessment of the doctorate and provide benchmarks for doctoral degrees of all kinds in universities.


QUESTIONS OF CONSISTENCY IN THE DOCTORAL ASSESSMENT PROCESS

Perhaps because of the common currency and high academic status of the doctorate (Johnston, 1997), it is implicitly accepted in higher education that the doctoral assessment process is reasonably consistent across subjects, universities and countries, and that this assures the comparability of doctoral graduate achievements and outcomes. In fact, doctoral assessment is subject to variability (Tinkler and Jackson, 2004:2) and has been criticised for this and for its lack of transparency (Jackson and Tinkler, 2001), because of the closed nature of the examination, especially when compared with the public defence that often takes place in continental European countries (Green and Powell, 2005:224). Others continue to support the rigour of the UK final doctoral assessment while recognising its imperfections. A degree of inconsistency in the doctoral assessment process is perhaps inevitable given the nature of the doctorate and the variables involved, which change on each occasion a doctoral assessment takes place. For example:

● each assessment is of a single candidate, rather than a cohort of students, and by definition the candidate will have produced a unique output;

● the examiners are different; they have been chosen explicitly for their expertise in the candidate’s research area but may have diverse perspectives on the topic which inform their judgement of the thesis and the candidate’s performance at the viva;

● each subject or field has particular expectations of what a successful doctoral graduate should have achieved and be able to do (output and outcomes); where the field is interdisciplinary, the examiners will each need to accommodate one another’s discipline perspectives;

● each university has its own regulations and guidance for the doctoral examination and for examiners of doctoral candidates; and

● each candidate has his/her own strengths and weaknesses: some will produce a strong thesis/output but be less adept at defending their position in the viva, while others might not excel at written or practical work but will prove in the viva that they have mastery of the field.

However, it can also be argued that although the process itself can vary for the reasons summarised above, other elements of doctoral assessment are more consistent, such as the assessment criteria used by institutions in their examination regulations for doctoral degrees, with most looking for an original contribution to knowledge. Institutions responding to a QAA discussion paper in 2007 (QAA, 2007a), when defining ‘originality’, used phrases such as ‘original thought’, ‘original findings’, ‘substantial original contribution to knowledge and understanding’, ‘reaching an appropriate intellectual level, including the ability to create new knowledge’, and ‘independent critical power’. Typically, ‘originality’ can be construed either as new knowledge/discovery of new facts arising from an individual’s research or creativity (in some disciplines involving experimentation and interpretation of results; in others through the creation of, for example, a work of art, including a written work, or a musical composition), or as the application of existing knowledge in a way that provides new insights into the subject, e.g. through using different approaches or methodology (Clarke, 2009).

Much has been written, in the UK and elsewhere, about the doctoral assessment process, from both policy and practice perspectives (Tinkler and Jackson, 2004; Jackson and Tinkler, 2001; Denicolo, 2000; Hall, 2006; Morley, 2004; Morley et al., 2002; Johnston, 1997). Authors on doctoral assessment do not normally suggest that the variables inherent in the process necessarily lead to variation in doctoral standards or outcomes, but they do recognise the expectations of different subjects and fields in the assessment of doctoral candidates (for example, Lovitts, 2006, 2007; Tinkler and Jackson, 2004; Johnston, 1997).

The principal method of doctoral assessment in the UK remains the thorough evaluation of the candidate’s ‘output’ (thesis/dissertation, artefact, or portfolio) by two or more examiners (normally three if the candidate is a member of academic staff at the institution where s/he is studying), and an oral examination commonly called the viva voce, or viva, during which the examiners question the candidate about his/her research. As well as testing the candidate’s knowledge of the field and ability to defend his/her thesis, the viva also provides an opportunity to evaluate the candidate’s personal skills and abilities as a researcher – all of these contribute to the ‘outcomes’ for the individual in having completed the doctorate successfully. Very few UK doctorates diverge from this approach. Hall (2006), who compares practices relating to the assessment of Ph.D.
theses in Australia, Canada and the US, with reference to the UK, shows that in Canada, the US and the UK an oral defence of the thesis or dissertation is normally compulsory and considered fundamental to the final assessment: evaluation of the candidate’s output is not by itself enough; the outcomes are equally important, and to enable the examiners to make a final judgement the candidate needs to defend his/her research in person.

DIVERSIFICATION OF THE DEGREE AND IMPLICATIONS FOR DOCTORAL OUTPUTS AND ASSESSMENT

This chapter would be incomplete if it did not refer to some of the developments in the structure and content of the degree which have had an impact on graduate outcomes. Diversification of the doctorate has led to developments and innovation in assessment practice which often reflect the nature, structure and purpose of the degree, take account of discipline-specific achievements of doctoral candidates, including those in multi-disciplinary fields, and draw on doctoral assessment practice in other countries. For many years the D.Phil. or Ph.D., as it is more commonly known, was the only UK doctoral qualification, but during the last twenty years the form of the doctorate has evolved, leading to differently structured degrees that accommodate the needs of a diverse student population and of different professions.

Professional and practice-based doctorates have a variety of structures and attract candidates at different stages of their careers. Titles of these degrees include, for example, Doctor of Education (EdD), Doctor of (or in) Engineering (EngD), Doctor of Clinical Psychology (DClinPsy), etc. Initially, and beginning with the EdD, professional doctorates in different subjects had a significantly different structure from the Ph.D., which was traditionally based entirely on independent enquiry by the candidate. Professional doctorates have commonly included structured elements, with an emphasis on acquiring professional skills as well as conducting original research. Practice-based doctorates also contain structured elements and are often to be found in business and clinical settings, as well as in a variety of other environments. Practice-based doctoral candidates are often, but not always, mid-career and carry out much of their research in the workplace, which may lead directly to organisational or policy-related change (Costley, in UKCGE, 2011). Over the last ten to fifteen years the structures of Ph.D.s and professional and practice-based doctorates have become more similar, and all doctoral programmes now include a significant amount of formal research training, including the development of personal as well as subject-specific research skills.

Diversification of the doctorate, together with new technologies, has enabled candidates to present the results of their research in different media and formats. Evolution of doctoral degree structures, the use of multi-media, the needs of the professions and, in many cases, the candidate’s reasons for pursuing doctoral research have led to a variety of outputs from doctoral degrees. Doctoral candidates traditionally produce a thesis, sometimes also known as a dissertation, but there is a variety of outputs, often affected by the candidate’s field and the nature of the degree.
For example, in engineering or economics, it is often appropriate for the candidate to produce a portfolio of materials; an engineering doctoral graduate may have produced a portfolio that reflects different aspects of the research. It might include video, technical drawings and designs, mathematical formulae, reports on the outcomes of testing designs, etc. A graduate from a performing arts doctorate might have produced a portfolio very different in nature from the engineering portfolio, having written/directed a play, produced one or more videos of performances, transcribed interviews with actors, etc. In fine art, an artefact such as a sculpture or other work of art may be produced. Cyr and Muth (2006) address some of the benefits of using portfolios in doctoral education and also suggest that, to assure consistency, validity and reliability in assessment, portfolios require design and development work. In cases where communicating the research involves more than writing a document (for example, producing an artefact), the candidate is required to produce a substantial commentary to accompany the output. This commentary provides a critical evaluation of the work, often produced over a long period of time, in the relevant research context; gives a rationale for the research methods and methodology; situates his/her research in the field; and makes clear why the research provides an original contribution to knowledge, or to the application of knowledge.

Most UK universities permit candidates to register for doctorates by publication; this enables a collection of previously published work to be brought together into a doctoral thesis, normally supported by a curriculum vitae, evidence of ownership of the intellectual property of the work, and an overarching substantial commentary linking the published work and outlining its coherence and significance to the field of study. This approach is similar to the portfolio and can accommodate diverse disciplines. A Ph.D. by concurrent publication is now permitted by some institutions, particularly in science and engineering subjects, where doctoral candidates present a portfolio of interconnected research papers published during the candidate’s Ph.D. candidature together with a substantial commentary, as already indicated.

Institutions have adapted their assessment criteria, and in some cases made them more specific, to accommodate the evaluation of diverse doctoral outputs. For example, some institutions specify the submission of a portfolio for the assessment of some degrees. Others adapt their definitions of ‘originality’ to the nature of the subject(s), ensuring they make explicit the doctoral attributes for different degrees and helping to assure consistency of output standards across doctoral programmes with different qualification titles.

USE OF ASSESSMENT CRITERIA IN DOCTORAL EDUCATION

One of the developments in UK higher education at all levels since the early 1990s has been the increasing use of intended learning outcomes (ILOs) and of assessment criteria that test those outcomes. Biggs and Tang (2007) address the importance of linking assessment criteria to ILOs as part of their thesis about constructive alignment and its positive effects on deepening student learning. When the UK Quality Assurance Agency for Higher Education (QAA) first introduced a Framework for Higher Education Qualifications (FHEQ) in 2001 (QAA, 2001, revised 2008)3 as part of the ‘Academic Infrastructure’,4 it contained the first UK doctoral qualification descriptor, that is, a summary of the intellectual attributes and characteristics that should be present in individuals awarded a UK doctorate. The doctoral qualification descriptor has been used to inform many institutions’ assessment criteria for doctoral qualifications and has stood the test of time: when institutions were consulted on their views about the descriptor (QAA, 2007a), most did not want to make changes, major or otherwise. When asked whether the attributes of doctoral graduates summarised in the qualification descriptor still applied, 66 (92%) of the respondents said that they did. However, seven respondents said that the attributes did not sufficiently recognise the abilities of graduates of professional doctorates (Clarke, 2009). As a result of the consultation, the 2008 version of the doctoral qualification descriptor remains identical to the original, but several paragraphs of explanation have been added to augment some of the statements, for example to further define ‘originality’ and to emphasise the qualities needed for employment. Similar doctoral qualification descriptors exist, for example, in the European Higher Education Area Qualifications Framework (Bologna Working Group on Qualifications Frameworks, 2005) and in the Australian Qualifications Framework (AQF, 2011).
When, as part of the same QAA consultation in 2007, respondents answered two questions about the use of assessment criteria, it was clear that some institutions thought it appropriate to have separate assessment criteria for different doctoral qualifications. These typically reflect the content, structure, output and purpose of the degree programme, and subject considerations may also be taken into account. It was also apparent that some fundamental criteria, such as ‘originality’, ‘substantial original contribution to knowledge and understanding’ and ‘independent critical thought’, are common to all doctorates, supporting the view of equivalence and consistency of standards across programmes.

DEVELOPMENTS IN DOCTORAL ASSESSMENT LINKED TO THE UK QUALITY CODE FOR HIGHER EDUCATION, CHAPTER B11: RESEARCH DEGREES

Chapter B11: Research degrees of the UK Quality Code contains two indicators (numbers 16 and 17) on the topic of the assessment of research degrees5. These indicators are supported by explanatory paragraphs that summarise the thinking behind the indicators and explain why they are important, often expanding on the overarching statement to provide examples of effective practice. The indicators and their accompanying explanation focus on: (16) the need for research degree awarding bodies to use assessment criteria that define academic standards and graduate achievement, and to demonstrate and differentiate between the expected outcomes of different research programmes; and (17) the importance of clear assessment procedures and the requirement for these to be operated rigorously, fairly and consistently, in a timely manner, with input from an external examiner, together with good communication of assessment procedures to all parties: candidates, supervisors and examiners. Some of the detail in the explanatory paragraphs contains advice and guidance that has led to changes in institutional practice, as follows.

Indicator 16 emphasises the importance of clear institutional guidance and regulations about naming research qualifications and of using assessment criteria, supporting the intended learning outcomes approach mentioned above and cross-referring to the FHEQ. It stresses the need for criteria to enable students ‘to show the full extent of their abilities and achievements at the level of the qualification for which they are aiming’ and has encouraged institutions to be explicit about doctoral degree standards in information provided for students and staff.

Indicator 17 and its explanatory notes contain several of the elements that have led to developments in assessment practice in UK institutions. It suggests some of the common features of research degree assessment in UK institutions that are generally considered to demonstrate effective practice.
These include:

● confirmation of the importance of the two components of the final assessment: evaluation of the student’s body of work and the oral examination;

● the expectation that at least two ‘appropriately qualified’ examiners should be appointed, at least one of whom is an external examiner;

● explicit guidance that none of the student’s supervisors should act as an examiner;


● a clear steer that conflicts of interest are to be avoided by ensuring that researchers who have had a substantial co-authoring or [other] collaborative involvement in the candidate’s work or whose own work is the focus of the research project are not appointed as examiners; and

● a statement that examiners should submit separate, independent reports before the viva and a joint report afterwards.

One or two of these statements were initially problematic for a minority of institutions, as demonstrated by the QAA review of research degree programmes that took place in England, Wales and Northern Ireland in 2005–06. The review report for England and Northern Ireland states that:

‘A significant number of [the] institutions [reviewed] were asked to review their practice of allowing a supervisor, member of a supervisory team, or external collaborator in the research, to act as internal examiner, a practice that would not appear to meet the spirit of the Code of practice. Others were asked to review their arrangements for the oral examination of members of staff. One institution was asked to review its requirement for the supervisor to be present (not as an examiner) at the oral, with or without the student’s agreement. Evidently practice in formulating the membership of such examining bodies is variable across the UK, some elements of practice seeming to fall outside the guidance of the Code.’ (QAA, 2007b)

Although not mentioned in the QAA review, the submission of examiners’ reports pre-viva was a new procedure for some universities, as was the submission of joint reports after the examination. It is probable that, given the volume of doctoral examinations and the variety of practice that exists across institutions, these two requirements for examiners’ reports might not yet have been universally implemented.

In Indicator 17, the explanatory text emphasises the need for institutions to assure themselves that viva voce examinations are conducted fairly and consistently, and suggests that one way of doing this is to appoint an independent, non-examining chair. Responses to the QAA discussion paper (QAA, 2007a), three years after publication of section 1 of the Code, suggest that this practice is growing. Of the 33 respondents whose answers have been analysed, 20 said they had between one and three years’ experience of using independent chairs for doctoral examinations, and in some institutions the practice was of longer standing. A range of benefits was cited, including: helping to ensure the examination proceeded in an orderly manner according to the institution’s regulations; providing evidence should there be an appeal or other need to refer back to the event; and safeguarding students against unfair treatment. Seven other institutions said they did not use independent chairs in oral examinations for reasons of resources and lack of clarity about the role; two said they used alternative forms of monitoring (a post-viva survey of doctoral candidates and audio-taping); and four did not comment (Clarke, 2009).

The explanatory text accompanying Indicator 17 emphasises the importance of clarity of communication with all parties about the assessment requirements for doctoral programmes. It covers the value to students of receiving support for the oral examination, with opportunities for a ‘mock’ viva or similar in advance of the real thing. Indicator 17 also recommends that institutions should consider whether students should routinely be given a copy of their examiners’ reports and, if so, whether they should receive only the final report or also the separate, independent examiners’ reports. It appears that there remains a variety of practice in this area across institutions, both in the UK and more widely. Further monitoring of institutions’ adherence to the indicators in Chapter B11: Research degrees of the Code is ongoing through QAA Institutional Review.

PROGRESS AND REVIEW ARRANGEMENTS

Section 1 of the Code of Practice (QAA, 2004), recently replaced by Chapter B11: Research degrees of the UK Quality Code, appears to have contributed to increased attention to student progress and review arrangements, no doubt supported by encouragement from the UK research councils (which fund more than 30% of doctoral studentships in the UK) and by institutions’ recognition of the role of progress and review procedures in improving the submission and completion rates of doctoral candidates. Indicator 13 and its explanatory notes (previously precepts 15 to 17 of section 1 of the Code of Practice) go into considerable detail about how to support students and monitor their progress, including providing clear statements about minimum and maximum periods for completion, the importance of flexible yet firm advice from the supervisory team, and the need for regular progress and review meetings between candidates and supervisors and progress review panels. Good record-keeping is considered an important feature of progress and review, often supported by virtual learning environments that enable all involved to track progress. The QAA review report for England and Northern Ireland (QAA, 2007b) is positive about reviewers’ findings regarding progress and review arrangements. Three examples of good practice (i.e. practice good enough to encourage others to adopt it) were found, with reviewers ‘noting especially the integration of the review of RDP provision into the general annual and periodic review processes’ (para. 47).
In paragraph 48, good practice is again referred to: ‘The review teams noted that annual reviews of some description were now virtually universal in the institutions participating in the review’, but is qualified by identification of some intra-institutional inconsistency: ‘The main feature requiring attention is the lack of consistency between departments or faculties within a single institution in the conduct of annual reviews.’ Paragraphs 49 to 53 summarise common good practice found in institutional progress and review arrangements, summarised in brief below, with few weaknesses being identified:

● The use of a formal upgrading process either once or twice during the programme, using a panel ‘empowered to make recommendations to an institutional or faculty research committee’. (para. 49)

● The requirement for the candidate to submit a formal report about half way through the first year so the department can assess his/her progress. (para. 50)


● The use of monitoring reports that include sections on ‘the integration of generic skills and academic progress’; the use of an annual review panel that excludes the supervisor; and the use of a third-party monitoring system enabling students to discuss their progress with someone who is neither the supervisor nor the head of discipline. One criticism in this area was that some institutions needed to introduce more independence into their annual monitoring procedures. (para. 51)

● The ability to differentiate between regular, informal meetings between supervisor and candidate and the more formal type of meeting such as would take place in annual review, and the importance of the student understanding the difference. (para. 52)

● Good use of record-keeping by students and supervisors, with reviewers finding that the practice of students keeping log books of meetings (especially using electronic tools) is becoming well established. In some institutions, records of meetings needed to be more systematic or otherwise improved.

CONCLUSIONS

This chapter has summarised some of the main developments affecting the assessment of the UK doctorate in recent years. It is difficult to generalise because of the individuality of the doctoral examination, but during the last ten years it would be fair to say that the following positive outcomes have occurred:

● Institutions have taken steps to introduce greater clarity than previously in their regulations and guidance for the assessment of doctoral candidates, embracing Indicators 16 and 17 of Chapter B11 of the UK Quality Code; this has led to greater theoretical understanding of doctoral degree standards and intended learning outcomes for doctoral candidates;

● The diversification of doctoral programmes and variety of outputs from doctoral degrees that reflect the nature and content of candidates’ research has enabled researchers to present and communicate their work in formats and media appropriate to the research topic;

● Global developments, researcher mobility and staff and student exchanges are enabling and encouraging the UK to benchmark its doctoral assessment criteria, intended learning outcomes and doctoral degree standards against those of other countries, especially countries with similar doctoral degree descriptors;

● One of the very interesting outcomes of the funding councils’ and QAA’s consultation with the higher education sector in the early 2000s was the alacrity with which institutions adopted the principles and guidance and their enthusiasm for being able to provide evidence of high quality and consistent academic standards in research degrees.

It is more difficult to find evidence to answer the question: ‘How has the practical assessment of doctoral candidates changed in the last decade?’ Little empirical evidence exists, but it is possible that doctoral examinations, particularly the viva, have become fairer and more consistent with:

● the increasing use of independent chairs or observers and recording of oral examinations; and

● the ways in which examiners’ reports are used and made available to candidates, helping them to prepare better for the oral exam and to have more detailed information about any corrections required.

Perhaps these developments are helping to address earlier criticisms of lack of transparency and the ‘closed’ nature of the doctoral assessment process?

Most of the evidence suggests that the UK doctoral assessment process remains rigorous; however, some would welcome the adoption of a more open oral examination similar to those conducted in continental Europe, while recognising that there is significant variation across individual countries in the EU. The other question, which first arose in the UK in 1988 (CVCP, 1988), is whether or not the doctorate should be graded, as is the case in some other European countries (for example, France (Green and Powell, 2005) and Finland). From informal conversations with a few experienced doctoral examiners, it is clear that views on the grading of doctoral candidates in the final assessment are varied, both positive and negative. One argument against grading is that, because of the diversity that exists across subjects and candidates, it would potentially be unfair to implement a grading system across all doctorates; some think that, at doctoral level, pass or fail, with the option of a lower degree for candidates who do not quite achieve doctoral level, is sufficient. In the author’s opinion, the fundamental process and purpose of the doctoral assessment has not changed but the format of outputs has, with institutions responding to the changes in output by adapting assessment criteria to reflect subject fields, while maintaining the consistent standards that are clearly set out in UK regulatory and guidance frameworks, both UK-wide and at institution level. Examiners are accommodating and adapting to different outputs, especially with the increasing interdisciplinarity in research supported by the UK research councils and the introduction of centres for doctoral training (Denicolo et al., 2010). There is an encouraging degree of complementarity between doctoral qualification descriptors across different countries.
This complementarity is the basis for many assessment criteria for doctoral degrees, although it is not clear to what extent examiners of doctoral candidates explicitly use these criteria when examining. One aspect of UK doctoral assessment that would benefit from further empirical exploration is the weighting that examiners apply between the output and the outcomes of a doctorate when assessing individual candidates, and the perceived shift towards giving more importance to the qualities and potential of the person emerging from the doctorate (Bogle, 2010). The employment rates of UK doctoral graduates and their actual destinations show that they are valued and sought after globally (Kemp et al., 2008). It could therefore be argued that the assessment of UK doctorates continues to maintain high standards of training and output from our doctoral programmes.

NOTES

1 Chapter B11 replaces section 1: Research degree programmes of the QAA Code of Practice, published in 2004. Section 1 introduced substantial additional guidance for the higher education sector on the management and delivery of research degrees; much of the content of section 1 is retained in the new Chapter B11.

2 Figures quoted in a recent UK publication, ‘One Step Beyond: Making the most of postgraduate education’ (BIS, 2010), show that between 2002–03 (26,900) and 2008–09 (30,735) there was a 14% increase in the numbers of entrants to doctorates and research masters programmes in the UK.

3 The 2008 version of the Framework is at: http://www.qaa.ac.uk/Publications/InformationAndGuidance/Documents/FHEQ08.pdf

Also in 2001 the Framework for Scottish HE Qualifications was introduced: http://www.qaa.ac.uk/Publications/InformationAndGuidance/Documents/FHEQscotland.pdf. The Scottish Framework has not been revised and the 2001 version is still current.

4 The Academic Infrastructure (AI) was introduced as a direct result of the recommendations in the report of the Committee of Inquiry into Higher Education (the ‘Dearing’ report, 1997); the FHEQ is one of its four components. The AI is in the process of being replaced by a ‘Quality Code’ that will encompass all four components of the existing AI. Details are at: http://www.qaa.ac.uk/

5 In June 2012, Chapter B11 replaced section 1 of the QAA Code of Practice: Research degree programmes.

REFERENCES

Australian Qualifications Framework Council (2011). Australian Qualifications Framework [Online]. First edition, July 2011. pp. 61–64. http://www.aqf.edu.au/ (accessed 23 July 2012).

Biggs, J.B. and Tang, C. (2007). Teaching for Quality Learning at University. 3rd ed. Maidenhead: Open University Press, McGraw Hill.

Bogle, D. (2010). Doctoral degrees beyond 2010: Training talented researchers for society. Leuven: League of European Research Universities.

Bologna Working Group on Qualifications Frameworks (2005). A Framework for Qualifications of the European Higher Education Area. Copenhagen, Ministry of Science, Technology and Innovation. Appendix 8. http://www.bologna-bergen2005.no/Docs/00-Main_doc/050218_QF_EHEA.pdf

Clarke, G. (2009). Summaries of responses to QAA discussion paper about doctoral programmes. Published electronically by QAA on their pages on ‘The doctoral qualification’ at http://www.qaa.ac.uk/AssuringStandardsAndQuality/Qualifications/doctoral/Pages/discussion-paper.aspx (accessed 23 July 2012).

Committee of Vice-Chancellors and Principals (1988). The British Ph.D. London: CVCP.

Costley, C. (2011). Professional Doctorates and the Doctorate of Professional Studies. In Tony Fell, Kevin Flint and Ian Haines (Eds.), Professional Doctorates in the UK 2011. Lichfield: UK Council for Graduate Education, pp. 20–27.

Cyr, T. and Muth, R. (2006). Portfolios in Doctoral Education. In Peggy L. Maki and Nancy A. Borkowski (Eds.), The Assessment of Doctoral Education: Emerging Criteria and New Models for Improving Outcomes. Sterling, Virginia: Stylus pp. 215–237.

Denicolo, P., Boulter, C. and Fuller, M. (2000). Sharing reflections on the experience of doctoral assessment: the voices of supervisors and examiners. Paper presented to the British Educational Research Association National Event, Reading, 2nd June, 2000.

Denicolo, P., Fuller, M. and Berry, D., with Raven, C. (2010). A Review of Graduate Schools in the UK. Lichfield: UKCGE, pp. 32–33.

Green, H. and Powell, S. (2005). Doctoral Study in Contemporary Higher Education. Maidenhead: Society for Research into Higher Education and Open University Press.

Hall, F.L. (2006). Canadian practices related to the examination of Ph.D. theses. Web-published article by the University of Calgary, Canada: http://www.cags.ca/media/docs/cags-publication/PhD_Thesis_Examination.pdf

Higher Education Funding Council for England (2003a). Improving standards in postgraduate research degree programmes: Informal consultation. Bristol: HEFCE 2003/01.

Higher Education Funding Council for England (2003b). Improving standards in postgraduate research degree programmes: Formal consultation. Bristol: HEFCE 2003/23.

Jackson, C. and Tinkler, P. (2001). Back to Basics: A consideration of the purposes of the Ph.D. viva. Assessment and Evaluation in Higher Education, 26(4), 355–366.

Johnston, S. (1997). Examining the examiners: An analysis of examiners’ reports on doctoral theses. Studies in Higher Education, 22(3), 333–347.

Kemp, N., Archer, W., Gilligan, C. and Humfrey, C. (2008). The UK’s Competitive Advantage: The Market for International Research Students. London: Universities UK, UK Higher Education International Unit.

Lovitts, B. E. (2006). Making the Implicit Explicit: Faculty’s Performance Expectations for the Dissertation. In Peggy L. Maki and Nancy A. Borkowski (Eds.), The Assessment of Doctoral Education: Emerging Criteria and New Models for Improving Outcomes Sterling, Virginia: Stylus pp. 163–187.

Lovitts, B. E. (2007). Making the Implicit Explicit: creating performance expectations for the dissertation. Sterling, Virginia: Stylus.

Metcalfe, J., Thompson, Q., Green, H. (2002). Improving Standards in postgraduate research degree programmes: A report to the Higher Education Funding Councils of England, Scotland and Wales. Bristol: Higher Education Funding Council for England (HEFCE).

Morley, L., Leonard, D., and David, M. (2002). Variations in Vivas: quality and equality in British Ph.D. assessments. Studies in Higher Education, 27(3), 263–273.

Morley, L. (2004). Interrogating doctoral assessment. International Journal of Educational Research, 41(2), 91–97.

Quality Assurance Agency (2001). The framework for higher education qualifications in Scotland. Glasgow: QAA, Qualification descriptor for doctoral degrees.

Quality Assurance Agency (2004). Code of practice for the assurance of academic quality and standards in higher education; Section 1: Postgraduate research programmes [Online]. Gloucester: QAA, precepts 15–17 and 22–24. http://www.qaa.ac.uk/Publications/InformationAndGuidance/Pages/Code-of-practice-section-1.aspx (accessed 23 July 2012).

Quality Assurance Agency (2007a). Discussion paper about doctoral programmes. Published electronically until July 2011 by QAA on their pages on ‘The doctoral qualification’ at http://www.qaa.ac.uk/AssuringStandardsAndQuality/Qualifications/doctoral/Pages/discussion-paper.aspx (accessed 23 July 2012).

Quality Assurance Agency (2007b). Report on the review of research degree programmes: England and Northern Ireland: Sharing good practice. Gloucester: QAA. Paragraphs 46–53 and 84–93.

Quality Assurance Agency (2008). The framework for higher education qualifications in England, Wales and Northern Ireland. Gloucester: QAA, 23–25 and 39.

Quality Assurance Agency (2012). UK Quality Code for Higher Education, Part B: Assuring and enhancing academic quality, Chapter B11: Research degrees [Online]. Gloucester: QAA. http://www.qaa.ac.uk/Publications/InformationAndGuidance/Pages/quality-code-B11.aspx (accessed 23 July 2012).
