Care Quality Commission (CQC)

Technical details – patient survey information

2015 Inpatient survey June 2016

Contents
1. Introduction
2. Selecting data for the reporting
3. The CQC organisation search tool
4. The trust benchmark reports
5. Interpreting the data
6. Further information

Appendix A: Scoring for the 2015 Inpatients survey results
Appendix B: Calculating the trust score and category
Appendix C: Calculation of standard errors

UK Data Archive Study Number 8062 - Acute Trusts: Adult Inpatients Survey, 2015


1. Introduction

This document outlines the methods used by the Care Quality Commission to score and analyse trust-level results for the 2015 Inpatient Survey, as available on the Care Quality Commission website and in the benchmark report for each trust.

The survey results are available for each trust on the CQC website. The survey data is shown in a simplified way, identifying whether a trust performed ‘better’, ‘worse’ or ‘about the same’ as the majority of other trusts for each question. This analysis is done using a statistic called the ‘expected range’ (see section 5.3). On publication of the survey, an A-to-Z list of trust names will be available at the link below, containing further links to the survey data for all NHS trusts that took part in the survey: www.cqc.org.uk/inpatientsurvey

A benchmark report is also available for each trust. Results displayed in the benchmark report are a graphical representation of the results displayed for the public on the CQC website (see the further information section). These have been provided to all trusts and will be available on the survey co-ordination centre website at: www.nhssurveys.org. The tables at the back of each benchmark report also highlight any statistically significant changes in the trust score between 2015 and 2014. The CQC webpage also contains national results for England, comparing against results for previous surveys.

2. Selecting data for the reporting

Scores are assigned to responses to questions that are of an evaluative nature: in other words, those questions where results can be used to assess the performance of a trust (see section 5.1 “Scoring” for more detail). Questions that are not presented in this way tend to be those included solely for ‘filtering’ respondents past any questions that may not be relevant to them (such as: ‘Did you have an operation or procedure?’) or those used for descriptive or information purposes.

The scores for each question are grouped on the website and in the benchmark reports according to the sections of the questionnaire as completed by respondents. For example, the Inpatients survey includes sections on ‘the accident and emergency department’, ‘the hospital and ward’ and ‘care and treatment’, amongst others. The average score for each trust, for each section, is calculated and presented on the website and in the benchmark report for each trust. Alongside both the question and the section scores on the website is one of three statements:

‘Better’
‘About the same’
‘Worse’

This analysis is done using a statistic called the ‘expected range’ (see section 5.3).
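The section-score averaging described above can be sketched as follows. This is an illustrative example only, not CQC's actual code; the question names and score values are hypothetical.

```python
# Illustrative sketch only (not CQC's actual code): a section score is
# the average of the scored questions in that section. Question names
# and score values below are hypothetical.
doctors_section = {
    "q24_understandable_answers": 7.8,
    "q25_confidence_and_trust": 8.4,
    "q26_talked_as_if_not_there": 9.1,
}

# Section score: simple mean of the scored questions in the section.
section_score = sum(doctors_section.values()) / len(doctors_section)
print(round(section_score, 1))  # 8.4
```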


3. The CQC organisation search tool

The organisation search tool contains information from various areas within the Care Quality Commission’s functions. The presentation of the survey data was designed using feedback from people who use the data, so that as well as meeting their needs, it presents the groupings of the trust results in a simple and fair way, to show where we are more confident that a trust’s score is ‘better’ or ‘worse’ than we would expect when compared with most other trusts.

The survey data can be found from the A to Z link available at: www.cqc.org.uk/inpatientsurvey or by searching for a hospital from the CQC home page, then clicking on ‘Patient survey information’ on the right-hand side, then clicking ‘latest patient survey results’.

4. The trust benchmark reports

Benchmark reports should be used by NHS trusts to identify how they are performing in relation to all other trusts that took part in the survey. They also show whether a score has significantly increased or decreased compared with the last survey. From this, areas for improvement can be identified.

The graphs included in the reports display the scores for a trust compared with the full range of results from all other trusts that took part in the survey. Each bar represents the range of results for each question across all trusts that took part in the survey. In the graphs, the bar is divided into three sections:

If a trust’s score lies in the orange section of the graph, the trust result is ‘about the same’ as most other trusts in the survey.

If a trust’s score lies in the red section of the graph, the trust result is ‘worse’ than expected when compared with most other trusts in the survey.

If a trust’s score lies in the green section of the graph, the trust result is ‘better’ than expected when compared with most other trusts in the survey.

A black diamond represents the score for this trust. The black diamond (score) is not shown for questions answered by fewer than 30 people because the uncertainty around the result would be too great.

5. Interpreting the data

5.1 Scoring

Questions are scored on a scale from 0 to 10. Details of the scoring for this survey are available in Appendix A at the end of this document. The scores represent the extent to which the patient’s experience could be improved. A score of 0 is assigned to all responses that reflect considerable scope for improvement, whereas a response that was assigned a score of 10 refers to the most positive patient experience possible. Where a number of options lay between the negative and positive responses, they are placed at equal intervals along the scale. Where options were provided that did not have any bearing on the trust’s performance in terms of patient experience, the responses are classified as “not applicable” and a score is not given. Where respondents stated they could not remember or did not know the answer to a question, a score is not given.

5.2 Standardisation

Results are based on ‘standardised’ data. We know that the views of a respondent can reflect not only their experience of NHS services, but can also relate to certain demographic characteristics, such as their age and sex. For example, older respondents tend to report more positive experiences than younger respondents, and women tend to report less positive experiences than men. Because the mix of patients varies across trusts (for example, one trust may serve a considerably older population than another), this could potentially lead to the results for a trust appearing better or worse than they would if they had a slightly different profile of patients.

To account for this we ‘standardise’ the data. Standardising data adjusts for these differences and enables the results for trusts to be compared more fairly than could be achieved using non-standardised data. The inpatients survey is standardised by: age, gender and method of admission (emergency or elective).

5.3 Expected range

The better / about the same / worse categories are based on the ‘expected range’ that is calculated for each question for each trust. This is the range within which we would expect a particular trust to score if it performed about the same as most other trusts in the survey. The range takes into account the number of respondents from each trust as well as the scores for all other trusts, and allows us to identify which scores we can confidently say are ‘better’ or ‘worse’ than the majority of other trusts (see Appendix B for more details). Analysing the survey information in such a way allows for fairer conclusions to be made about each trust’s performance. This approach presents the findings in a way that takes account of all necessary factors, yet is presented in a simple manner.
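As a rough illustration of the ‘expected range’ idea, the sketch below categorises a trust score against a band around the national mean. The actual calculation is set out in Appendix B; the normal approximation, function names and figures here are assumptions for illustration only.

```python
def expected_range(national_mean, trust_se, z=1.96):
    # Simplified illustration only: the band a trust would be expected
    # to fall in if it performed about the same as most other trusts,
    # using a normal approximation with the trust's standard error.
    # The method actually used is described in Appendix B.
    return national_mean - z * trust_se, national_mean + z * trust_se

def categorise(trust_score, national_mean, trust_se):
    low, high = expected_range(national_mean, trust_se)
    if trust_score < low:
        return "worse"
    if trust_score > high:
        return "better"
    return "about the same"

# Hypothetical figures: trust scored 8.9 where the national mean is 8.0
print(categorise(8.9, 8.0, 0.3))  # better (8.9 > 8.0 + 1.96 * 0.3)
```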
As the ‘expected range’ calculation takes into account the number of respondents at each trust who answer a question, it is not necessary to present confidence intervals around each score for the purposes of comparing across all trusts.

5.4 Comparing scores across or within trusts, or across survey years

The expected range statistic is used to arrive at a judgement of how a trust is performing compared with all other trusts that took part in the survey. However, if you want to use the scored data in another way, to compare scores (either as trend data for an individual trust or between different trusts), you will need to undertake an appropriate statistical test to ensure that any changes are ‘statistically significant’. ‘Statistically significant’ means that you can be very confident that any change between scores is real and not due to chance. The benchmark report for each trust includes a comparison to the 2014 survey scores and indicates whether the change is statistically significant. However, to compare back to earlier surveys (where possible) you would need to undertake a similar significance test.
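A significance test of the kind described here might be sketched as below. The two-sample z-test is an illustrative assumption rather than the prescribed method (standard errors for the survey are covered in Appendix C), and the figures are hypothetical.

```python
import math

def significant_change(score_new, se_new, score_old, se_old, z_crit=1.96):
    # Sketch of a two-sample z-test between two years' scores,
    # assuming independent samples and a known standard error for each
    # score (see Appendix C for how standard errors are calculated).
    z = (score_new - score_old) / math.sqrt(se_new ** 2 + se_old ** 2)
    return abs(z) > z_crit  # True: significant at roughly the 95% level

# Hypothetical 2015 vs 2014 scores and standard errors
print(significant_change(8.1, 0.25, 7.2, 0.2))  # True
```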



5.5 Conclusions made on performance

It should be noted that the data only show performance relative to other trusts: we have not set out absolute thresholds for ‘good’ or ‘bad’ performance. Thus, a trust may score low relative to others on a certain question whilst still performing very well on the whole. This is particularly true on questions where the majority of trusts score very highly.

The better / worse categories are intended to help trusts identify areas of good or poor performance. However, when looking at scores within a trust over time, it is important to be aware that they are relative to the performance of other trusts. If, for example, a trust was ‘better’ for one question, then ‘about the same’ the following year, this may not indicate an actual decrease in the performance of the trust, but instead may be due to an improvement in many other trusts’ scores, leaving the trust appearing more ‘average’. Hence it is more accurate to look at actual changes in scores and to test for statistically significant differences.

6. Further information

The full national results are on the CQC website, together with an A to Z list to view the results for each trust (alongside the technical document outlining the methodology and the scoring applied to each question): http://www.cqc.org.uk/inpatientsurvey

The results for the adult inpatient surveys from 2002 to 2014 can be found at: http://www.nhssurveys.org/surveys/425

Full details of the methodology of the survey can be found at: http://www.nhssurveys.org/surveys/833

More information on the programme of NHS patient surveys is available at: http://www.cqc.org.uk/content/surveys

More information about how CQC monitors hospitals is available on the CQC website at: http://www.cqc.org.uk/content/monitoring-nhs-acute-hospitals


Appendix A: Scoring for the 2015 Inpatients survey results

The following describes the scoring system applied to the evaluative questions in the survey. Taking question 24 as an example (Figure A1), it asks respondents whether the doctor answered their questions in a way they could understand. The option “No” was allocated a score of 0, as this suggests that the experience of the patient needs to be improved. A score of 10 was assigned to the option ‘Yes, always’, as it reflects the most positive patient experience. The remaining option, ‘Yes, sometimes’, was assigned a score of 5 as the patient did not always receive understandable answers; hence it was placed at the midpoint of the scale. If the patient did not have any questions to ask, this was classified as a ‘not applicable’ response.

Figure A1: Scoring example, Question 24 (2015 Inpatient Survey)

Q24. When you had important questions to ask a doctor, did you get answers that you could understand?

Yes, always 10

Yes, sometimes 5

No 0

I had no need to ask Not applicable

Where a number of options lay between the negative and positive responses, they were placed at equal intervals along the scale. For example, question 17 asks respondents how clean the hospital room or ward they were in was (Figure A2). The following response options were provided:

Very clean
Fairly clean
Not very clean
Not at all clean

A score of 10 was assigned to the option ‘Very clean’, as this represents the best outcome in terms of patient experience. A response that the room or ward was ‘not at all clean’ was given a score of 0. The remaining two answers were assigned a score that reflected their position in terms of quality of experience, spread evenly across the scale. Hence the option ‘fairly clean’ was assigned a score of 6.7, and ‘not very clean’ was given a score of 3.3.

Figure A2: Scoring example, Question 17 (2015 Inpatient Survey)

Q17. In your opinion, how clean was the hospital room or ward that you were in?

Very clean 10
Fairly clean 6.7
Not very clean 3.3
Not at all clean 0
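The equal-interval placement described above can be expressed as a small helper. This is an illustrative sketch (the function name is made up), using the Q17 options as an example.

```python
def equal_interval_scores(options):
    # Spread scores evenly from 10 (most positive) down to 0 (least
    # positive) across ordered response options, rounded to one decimal
    # place as in the published scoring. Illustrative sketch only.
    step = 10 / (len(options) - 1)
    return {opt: round((len(options) - 1 - i) * step, 1)
            for i, opt in enumerate(options)}

print(equal_interval_scores(
    ["Very clean", "Fairly clean", "Not very clean", "Not at all clean"]))
# {'Very clean': 10.0, 'Fairly clean': 6.7, 'Not very clean': 3.3, 'Not at all clean': 0.0}
```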

Details of the method used to calculate the scores for each trust, for individual questions and each section of the questionnaire, are available in Appendix B. This also includes an explanation of the technique used to identify scores that are better, worse or about the same as most other trusts.


All analysis is carried out on a ‘cleaned’ data set. ‘Cleaning’ refers to the editing process that is undertaken on the survey data. A document describing this can be found at: www.nhssurveys.org/survey/1644

As part of the cleaning process, responses are removed from any trust that has fewer than 30 respondents to a question. This is because the uncertainty around the result is too high, and very low numbers would risk respondents being recognised from their responses. However, please note that when scoring the data, there are exceptions to this rule for questions 11 and 13, and questions 54 and 55. This is because these questions have composite scoring, meaning the results from two or more questions are used to create a single score. If a trust has fewer than 30 responses to a question used in composite scoring, that information is retained during the calculation of the composite score, to enable fairer scoring.

For example, Q11 and Q13 are scored together to provide a score based on whether a respondent ever shared a sleeping area with patients of the opposite sex. If a respondent answered ‘yes’ to either Q11 or Q13, a trust will receive a score of 0. The scoring rules for Q11 and Q13 state that if either Q11 or Q13 is missing, the other is used for the scoring. If 50 respondents answered Q11, but only 20 of these said they were moved to a different ward (at Q12), this means that only 20 respondents answer Q13, which asks if they had to share a sleeping area with patients of the opposite sex after being moved to another ward. Following the cleaning rules, these responses would be cleaned out due to being fewer than 30. However, 15 of these respondents may have said that after moving wards they did share a sleeping area with patients of the opposite sex. If these responses had been cleaned out, the trust would therefore have received a more positive score than it should have.
For clarity, please note that in any instances of low numbers of respondents to questions included in composite scoring, these responses would be cleaned out for all other outputs: for example, they do not contribute to the national results, nor are they included in the anonymised data set submitted to the UK Data Archive.

The below details the scoring allocated to each scorable question.

Section 1: The Accident and Emergency Department (A&E)

3. While you were in the A&E Department, how much information about your condition or treatment was given to you?

Not enough 5
Right amount 10
Too much 5
I was not given any information about my condition or treatment 0
Don’t know / Can’t remember Not applicable
Answered by those who went to the A&E department


4. Were you given enough privacy when being examined or treated in the A&E Department?

Yes, definitely 10
Yes, to some extent 5
No 0
Don’t know / Can’t remember Not applicable
Answered by those who went to the A&E department

Section 2: Waiting lists and planned admissions

6. How do you feel about the length of time you were on the waiting list before your admission to hospital?

I was admitted as soon as I thought was necessary 10
I should have been admitted a bit sooner 5
I should have been admitted a lot sooner 0
Answered by those who had a planned admission

7. Was your admission date changed by the hospital?

No 10
Yes, once 6.7
Yes, 2 or 3 times 3.3
Yes, 4 times or more 0
Answered by those who had a planned admission

8. In your opinion, had the specialist you saw in hospital been given all of the necessary information about your condition or illness from the person who referred you?

Yes, definitely 10
Yes, to some extent 5
No 0
Don’t know / Can’t remember Not applicable
Answered by those who had a planned admission

Section 3: All types of admission

9. From the time you arrived at the hospital, did you feel that you had to wait a long time to get to a bed on a ward?

Yes, definitely 0
Yes, to some extent 5
No 10
Answered by all


Section 4: The hospital and ward

11. When you were first admitted to a bed on a ward, did you share a sleeping area, for example a room or bay, with patients of the opposite sex? AND 13. After you moved to another ward (or wards), did you ever share a sleeping area, for example a room or bay, with patients of the opposite sex?

Yes 0
No 10
Filtered to exclude respondents who said that they stayed in a critical care area at Q10, as the majority of patients in these areas are exempt from the mixed-sex accommodation guidelines due to the necessity for clinical needs to be prioritised.
Q11 and Q13 are scored together to provide a single score on whether patients who have not stayed in a critical care area have ever shared a sleeping area with members of the opposite sex. Q11 and Q13 are not scored if option 1 (“Yes”) is selected at Q10. Q11 and Q13 score “10” if the respondent did not ever share a sleeping area with patients of the opposite sex, i.e. selected option 2 (“No”) to Q11 AND option 2 (“No”) to Q13. If option 1 (“Yes”) is selected for EITHER Q11 or Q13, then a score of “0” is assigned. If ONE of Q11 and Q13 is missing, the other is used for scoring.
The two trusts providing services for women only are excluded from this question.
If a trust has fewer than 30 respondents to Q13, responses are not cleaned out, to enable fairer scoring.
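The Q11/Q13 composite rule above can be sketched in code as follows. This illustrates the stated logic only, not CQC's implementation, and the response encoding (1 = "Yes", 2 = "No", None = missing) is an assumption.

```python
def score_shared_sleeping(q10, q11, q13):
    # Sketch of the Q11/Q13 composite rule stated above.
    # Encoding (assumed): 1 = "Yes", 2 = "No", None = missing.
    # Returns a 0-10 score, or None where the questions are not scored.
    if q10 == 1:                     # stayed in a critical care area
        return None
    if q11 is None and q13 is None:  # nothing to score
        return None
    if q11 == 1 or q13 == 1:         # ever shared a sleeping area
        return 0
    # Never shared; if one answer is missing, the other is used.
    return 10

print(score_shared_sleeping(q10=2, q11=2, q13=None))  # 10
print(score_shared_sleeping(q10=2, q11=2, q13=1))     # 0
```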

14. While staying in hospital, did you ever use the same bathroom or shower area as patients of the opposite sex?

Yes 0
Yes, because it had special bathing equipment that I needed 10
No 10
I did not use a bathroom or shower Not applicable
Don’t know / Can’t remember Not applicable
Answered by all
Note: the two trusts providing services for women only are excluded from this question.

15. Were you ever bothered by noise at night from other patients?

Yes 0
No 10
Answered by all


16. Were you ever bothered by noise at night from hospital staff?

Yes 0
No 10
Answered by all

17. In your opinion, how clean was the hospital room or ward that you were in?

Very clean 10
Fairly clean 6.7
Not very clean 3.3
Not at all clean 0
Answered by all

18. How clean were the toilets and bathrooms that you used in hospital?

Very clean 10
Fairly clean 6.7
Not very clean 3.3
Not at all clean 0
I did not use a toilet or bathroom Not applicable
Answered by all

19. Did you feel threatened during your stay in hospital by other patients or visitors?

Yes 0
No 10
Answered by all

20. Were hand-wash gels available for patients and visitors to use?

Yes 10
Yes, but they were empty 0
I did not see any hand-wash gels 0
Don’t know / Can’t remember Not applicable
Answered by all

21. How would you rate the hospital food?

Very good 10
Good 6.7
Fair 3.3
Poor 0
I did not have any hospital food Not applicable
Answered by all


22. Were you offered a choice of food?

Yes, always 10
Yes, sometimes 5
No 0
Answered by all

23. Did you get enough help from staff to eat your meals?

Yes, always 10
Yes, sometimes 5
No 0
I did not need help to eat meals Not applicable
Answered by all

Section 5: Doctors

24. When you had important questions to ask a doctor, did you get answers that you could understand?

Yes, always 10
Yes, sometimes 5
No 0
I had no need to ask Not applicable
Answered by all

25. Did you have confidence and trust in the doctors treating you?

Yes, always 10
Yes, sometimes 5
No 0
Answered by all

26. Did doctors talk in front of you as if you weren’t there?

Yes, often 0
Yes, sometimes 5
No 10
Answered by all

Section 6: Nurses

27. When you had important questions to ask a nurse, did you get answers that you could understand?

Yes, always 10
Yes, sometimes 5
No 0
I had no need to ask Not applicable
Answered by all


28. Did you have confidence and trust in the nurses treating you?

Yes, always 10
Yes, sometimes 5
No 0
Answered by all

29. Did nurses talk in front of you as if you weren’t there?

Yes, often 0
Yes, sometimes 5
No 10
Answered by all

30. In your opinion, were there enough nurses on duty to care for you in hospital?

There were always or nearly always enough nurses 10
There were sometimes enough nurses 5
There were rarely or never enough nurses 0
Answered by all

Section 7: Care and Treatment

31. In your opinion, did the members of staff caring for you work well together?

Yes, always 10
Yes, sometimes 5
No 0
Don’t know / Can’t remember Not applicable
Answered by all

32. Sometimes in a hospital, a member of staff will say one thing and another will say something quite different. Did this happen to you?

Yes, often 0
Yes, sometimes 5
No 10
Answered by all

33. Were you involved as much as you wanted to be in decisions about your care and treatment?

Yes, definitely 10
Yes, to some extent 5
No 0
Answered by all


34. Did you have confidence in decisions made about your condition or treatment?

Yes, always 10
Yes, sometimes 5
No 0
Answered by all

35. How much information about your condition or treatment was given to you?

Not enough 0
The right amount 10
Too much 0
Answered by all

36. Did you find someone on the hospital staff to talk to about your worries and fears?

Yes, definitely 10
Yes, to some extent 5
No 0
I had no worries or fears Not applicable
Answered by all

37. Do you feel you got enough emotional support from hospital staff during your stay?

Yes, always 10
Yes, sometimes 5
No 0
I did not need any emotional support Not applicable
Answered by all

38. Were you given enough privacy when discussing your condition or treatment?

Yes, always 10
Yes, sometimes 5
No 0
Answered by all

39. Were you given enough privacy when being examined or treated?

Yes, always 10
Yes, sometimes 5
No 0
Answered by all


41. Do you think the hospital staff did everything they could to help control your pain?

Yes, definitely 10
Yes, to some extent 5
No 0
Answered by those who said they were ever in any pain

42. How many minutes after you used the call button did it usually take before you got the help you needed?

0 minutes / right away 10
1-2 minutes 7.5
3-5 minutes 5.0
More than 5 minutes 2.5
I never got help when I used the call button 0
I never used the call button Not Applicable
Answered by all

Section 8: Operations and Procedures

44. Beforehand, did a member of staff explain the risks and benefits of the operation or procedure in a way you could understand?

Yes, completely 10
Yes, to some extent 5
No 0
I did not want an explanation Not Applicable
Answered by those who had an operation or procedure during their stay in hospital

45. Beforehand, did a member of staff explain what would be done during the operation or procedure?

Yes, completely 10
Yes, to some extent 5
No 0
I did not want an explanation Not Applicable
Answered by those who had an operation or procedure during their stay in hospital

46. Beforehand, did a member of staff answer your questions about the operation or procedure in a way you could understand?

Yes, completely 10
Yes, to some extent 5
No 0
I did not have any questions Not Applicable
Answered by those who had an operation or procedure during their stay in hospital

47. Beforehand, were you told how you could expect to feel after you had the operation or procedure?

Yes, completely 10
Yes, to some extent 5
No 0
Answered by those who had an operation or procedure during their stay in hospital


49. Before the operation or procedure, did the anaesthetist or another member of staff explain how he or she would put you to sleep or control your pain in a way you could understand?

Yes, completely 10
Yes, to some extent 5
No 0
Answered by those who had an operation or procedure during their stay in hospital and were given an anaesthetic or medication to put them to sleep or control their pain

50. After the operation or procedure, did a member of staff explain how the operation or procedure had gone in a way you could understand?

Yes, completely 10
Yes, to some extent 5
No 0
Answered by those who had an operation or procedure during their stay in hospital

Section 9: Leaving Hospital

51. Did you feel you were involved in decisions about your discharge from hospital?

Yes, definitely 10
Yes, to some extent 5
No 0
I did not want to be involved Not Applicable
Answered by all

52. Were you given enough notice about when you were going to be discharged?

Yes, definitely 10
Yes, to some extent 5
No 0
Answered by all

53. On the day you left hospital, was your discharge delayed for any reason?

Yes 0
No 10
Answered by all

54. What was the MAIN reason for the delay? (Cross ONE box only)

I had to wait for medicines 0
I had to wait to see the doctor 0
I had to wait for an ambulance 0
Something else Not Applicable
Answered by those who said that their discharge was delayed
If response to Q53 is 2 (discharge WAS NOT delayed), Q54 is scored 10.


If response to Q53 is 1 (discharge WAS delayed), and the response to Q54 is 1, 2, 3 or 4, the scores above are assigned to Q54.
If Q53 is missing, Q54 is not scored.
If Q54 is missing, scoring is as per Q53.
If a trust has fewer than 30 respondents to Q53, responses are not cleaned out, to enable fairer scoring.

55. How long was the delay?

Up to 1 hour 7.5
Longer than 1 hour but no longer than 2 hours 5
Longer than 2 hours but no longer than 4 hours 2.5
Longer than 4 hours 0
Answered by those who said that their discharge was delayed

If response to Q54 is 4 (some other reason for the delay), Q55 is not scored.
If response to Q53 is 2 (discharge WAS NOT delayed), Q55 is scored 10.
If response to Q53 is 1 (discharge WAS delayed) AND the response to Q54 is 1, 2 or 3, the scores above are assigned to Q55.
If response to Q53 is 1 (discharge WAS delayed) AND the response to Q54 is missing, the scores above are assigned to Q55.
If response to Q53 is 1 (discharge WAS delayed) AND the response to Q55 is missing, Q55 is not scored.
If response to Q53 is missing, Q55 is not scored.
If a trust has fewer than 30 respondents to Q54, responses are not cleaned out, to enable fairer scoring.
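The Q53-Q55 rules above can be sketched in code as follows. This is an illustration of the stated logic, not CQC's implementation; the response encoding (q53: 1 = delayed, 2 = not delayed; q54: 1-3 = listed reasons, 4 = something else; q55: 1-4 = delay-length bands; None = missing) is an assumption.

```python
# Sketch of the Q55 scoring rules listed above (illustrative only).
# Encoding (assumed): q53: 1 = delayed, 2 = not delayed; q54: 1-3 =
# listed reasons, 4 = something else; q55: 1-4 = delay-length bands;
# None = missing.
Q55_SCORES = {1: 7.5, 2: 5, 3: 2.5, 4: 0}

def score_q55(q53, q54, q55):
    if q53 is None:
        return None          # Q53 missing: Q55 not scored
    if q53 == 2:
        return 10            # discharge was not delayed
    if q54 == 4:
        return None          # some other reason for delay: not scored
    if q55 is None:
        return None          # delay length missing: not scored
    return Q55_SCORES[q55]   # a missing Q54 still allows Q55 to score

print(score_q55(q53=2, q54=None, q55=None))  # 10
print(score_q55(q53=1, q54=2, q55=3))        # 2.5
```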

57. After leaving hospital, did you get enough support from health or social care professionals to help you recover and manage your condition?

Yes, definitely: 10
Yes, to some extent: 5
No, but support would have been useful: 0
No, but I did not need any support: Not applicable
Answered by those who said they went home or stayed with family or friends after leaving hospital.

58. When you transferred to another hospital or went to a nursing or residential home, was there a plan in place for continuing your care?

Yes, definitely: 10
Yes, to some extent: 5
No: 0
Don't know / Can't say: Not applicable
Answered by those who said they were transferred to another hospital, went to a residential nursing home or went somewhere else.

Q58: This question does not contribute to the section score for 'Leaving hospital' (Section 9). Where 30 or more respondents answered this question, the question score is displayed for the trust; if the row for Q58 is blank, fewer than 30 responses were received for this question.


59. Before you left hospital, were you given any written or printed information about what you should or should not do after leaving hospital?

Yes: 10
No: 0
Answered by all

60. Did a member of staff explain the purpose of the medicines you were to take at home in a way you could understand?

Yes, completely: 10
Yes, to some extent: 5
No: 0
I did not need an explanation: Not applicable
I had no medicines: Not applicable
Answered by all

61. Did a member of staff tell you about medication side effects to watch for when you went home?

Yes, completely: 10
Yes, to some extent: 5
No: 0
I did not need an explanation: Not applicable
Answered by those who were prescribed medication to take home

62. Were you told how to take your medication in a way you could understand?

Yes, definitely: 10
Yes, to some extent: 5
No: 0
I did not need to be told how to take my medication: Not applicable
Answered by those who were prescribed medication to take home

63. Were you given clear written or printed information about your medicines?

Yes, completely: 10
Yes, to some extent: 5
No: 0
I did not need this: Not applicable
Don't know / Can't remember: Not applicable
Answered by those who were prescribed medication to take home

64. Did a member of staff tell you about any danger signals you should watch for after you went home?

Yes, completely: 10
Yes, to some extent: 5
No: 0
It was not necessary: Not applicable
Answered by all


65. Did hospital staff take your family or home situation into account when planning your discharge?

Yes, completely: 10
Yes, to some extent: 5
No: 0
It was not necessary: Not applicable
Don't know / Can't remember: Not applicable
Answered by all

66. Did the doctors or nurses give your family or someone close to you all the information they needed to help care for you?

Yes, definitely: 10
Yes, to some extent: 5
No: 0
No family or friends were involved: Not applicable
My family or friends did not want or need information: Not applicable
Answered by all

67. Did hospital staff tell you who to contact if you were worried about your condition or treatment after you left hospital?

Yes: 10
No: 0
Don't know / Can't remember: Not applicable
Answered by all

68. Did hospital staff discuss with you whether you would need any additional equipment in your home, or any adaptations made to your home, after leaving hospital?

Yes: 10
No, but I would have liked them to: 0
No, it was not necessary to discuss it: Not applicable
Answered by all

69. Did hospital staff discuss with you whether you may need any further health or social care services after leaving hospital? (e.g. services from a GP, physiotherapist or community nurse, or assistance from social service or the voluntary sector)

Yes: 10
No, but I would have liked them to: 0
No, it was not necessary to discuss it: Not applicable
Answered by all

Section 10: Overall

70. Overall, did you feel you were treated with respect and dignity while you were in the hospital?

Yes, always: 10
Yes, sometimes: 5
No: 0
Answered by all


71. During your time in hospital did you feel well looked after by hospital staff?

Yes, always: 10
Yes, sometimes: 5
No: 0
Answered by all

72. Overall…

Responses on the 0-10 scale are scored at face value, from 'I had a very poor experience' (scored 0) to 'I had a very good experience' (scored 10).
Answered by all

73. During your hospital stay, were you ever asked to give your views on the quality of your care?

Yes: 10
No: 0
Don't know / Can't remember: Not applicable
Answered by all

74. Did you see, or were you given, any information explaining how to complain about the care you received?

Yes: 10
No: 0
Not sure / Don't know: Not applicable
Answered by all


Appendix B: Calculating the trust score and category

Calculating trust scores

The scores for each question and section in each trust were calculated using the method described below. Weights were calculated to adjust for any variation between trusts that resulted from differences in the age, sex and method of admission (emergency or elective) of respondents. A weight was calculated for each respondent by dividing the national proportion of respondents in their age/sex/admission type group by the corresponding trust proportion.

The reason for weighting the data is that younger people and women tend to be more critical in their responses than older people and men. If a trust had a large population of young people or women, its performance might be judged more negatively than if there were a more consistent distribution of age and sex of respondents.

Weighting survey responses

The first stage of the analysis involved calculating national age/sex/admission method proportions. It must be noted that the term "national proportion" is used loosely here, as it was obtained by pooling the survey data from all trusts, and was therefore based on the respondent population rather than the entire population of England.

All respondents at both Birmingham and Liverpool Women's NHS Foundation Trusts are coded as 'female', even where self-reported gender is coded as male. These trusts are then weighted using the national all-female population as a reference.

The questionnaire asked respondents to state their year of birth. The approximate age of each patient was then calculated by subtracting the figure given from 2015. The respondents were then grouped according to the categories shown in Figure B1. If a patient did not fill in their year of birth or sex on the questionnaire, this information was taken from the sample file. If information on a respondent's age and/or sex was missing from both the questionnaire and the sample file, the patient was excluded from the analysis.
Question 1 asked "Was your most recent hospital stay planned in advance or an emergency?" Respondents that ticked "emergency or urgent" were classed as emergency patients for the purpose of the weightings. Those who ticked "waiting list or planned in advance" were classed as elective patients. However, if respondents ticked "something else" or did not answer question 1, information was taken from other responses to the questionnaire to determine the method of admission.

Emergency admission:

If the respondent answered "emergency or urgent" at question 1; or
If the respondent answered "something else" or did not respond to question 1, and answered 'yes' to question 2; or
If the respondent answered "something else" or did not respond to question 1, did not answer question 2, but responded to one or more of questions 3 or 4.


Elective admission:

If the respondent answered "waiting list or planned in advance" at question 1; or
If the respondent answered "something else" or did not respond to question 1, and answered 'no' to question 2; or
If the respondent answered "something else" or did not respond to question 1, did not answer questions 2, 3 and 4, and gave at least one response to questions 5, 6, 7 and 8.
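The admission-method derivation above can be sketched as follows. This is a simplified illustration only, assuming responses are held in a dict keyed by question number (with the questionnaire's response labels as strings and unanswered questions absent); returning None marks a respondent whose admission method cannot be determined.

```python
# Classify a respondent as 'emergency' or 'elective' from questions 1-8,
# following the derivation rules described above. Returns None where the
# admission method cannot be determined (respondent excluded from analysis).

def admission_method(r):
    q1 = r.get(1)
    if q1 == "emergency or urgent":
        return "emergency"
    if q1 == "waiting list or planned in advance":
        return "elective"
    # q1 was "something else" or missing: fall back on other responses
    q2 = r.get(2)
    if q2 == "yes":
        return "emergency"
    if q2 == "no":
        return "elective"
    # q2 unanswered: look at the emergency-path questions 3-4,
    # then the elective-path questions 5-8
    if any(r.get(q) is not None for q in (3, 4)):
        return "emergency"
    if any(r.get(q) is not None for q in (5, 6, 7, 8)):
        return "elective"
    return None  # all other combinations: excluded

assert admission_method({1: "emergency or urgent"}) == "emergency"
assert admission_method({2: "no"}) == "elective"
assert admission_method({}) is None
```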

All other combinations of responses for questions 1 to 8 resulted in the respondent being excluded from the analysis, as it was not possible to determine the admission method. The national age/sex/admission method proportions relate to the proportions of men and women of different age groups who had an emergency or elective admission. As shown in Figure B1, the proportion of respondents who were male, admitted as emergencies, and aged 51 to 65 years is 0.068; the proportion who were female, admitted as emergencies, and aged 51 to 65 years is 0.064, and so on.

Figure B1 National Proportions

Admission method   Sex     Age group   National proportion 2015
Emergency          Men     ≤35         0.013
                           36-50       0.025
                           51-65       0.068
                           66+         0.188
                   Women   ≤35         0.027
                           36-50       0.034
                           51-65       0.064
                           66+         0.197
Elective           Men     ≤35         0.007
                           36-50       0.014
                           51-65       0.047
                           66+         0.109
                   Women   ≤35         0.013
                           36-50       0.031
                           51-65       0.060
                           66+         0.103

Note: All proportions are given to three decimal places for this example. The analysis used these figures to nine decimal places, which can be provided on request from the CQC surveys team at [email protected].

These proportions were calculated for each trust, using the same procedure. The next step was to calculate the weighting for each individual. Age/sex/admission type weightings were calculated for each respondent by dividing the national proportion of respondents in their age/sex/admission type group by the corresponding trust proportion.
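The weight calculation just described can be sketched in a few lines. This is an illustrative sketch: it applies the national-over-trust division per group and the cap of five that this appendix applies to extreme weights, using example figures from Figure B2.

```python
# Weight for a respondent group = national proportion / trust proportion,
# capped at 5 to limit the influence of extremely under-represented groups.

def group_weight(national, trust, cap=5.0):
    return min(national / trust, cap)

# Illustrative values from Figure B2 (Trust A): men, emergency, aged 51-65
national = {("men", "emergency", "51-65"): 0.068}
trust_a = {("men", "emergency", "51-65"): 0.047}

key = ("men", "emergency", "51-65")
w = group_weight(national[key], trust_a[key])
assert round(w, 3) == 1.447  # matches Figure B2
```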


If, for example, a lower proportion of men who were admitted as emergencies aged between 51 and 65 years within Trust A responded to the survey, in comparison with the national proportion, then this group would be under-represented in the final scores. Dividing the national proportion by the trust proportion results in a weighting greater than “1” for members of this group (Figure B2). This increases the influence of responses made by respondents within that group in the final score, thus counteracting the low representation. Figure B2 Proportion and Weighting for Trust A

Sex     Admission   Age group   National proportion   Trust A proportion   Trust A weight (National/Trust A)
Men     Emergency   ≤35         0.013                 0.018                0.722
                    36-50       0.025                 0.035                0.714
                    51-65       0.068                 0.047                1.447
                    66+         0.188                 0.095                1.979
Women   Emergency   ≤35         0.027                 0.045                0.600
                    36-50       0.034                 0.057                0.594
                    51-65       0.064                 0.085                0.753
                    66+         0.197                 0.117                1.684
Men     Elective    ≤35         0.007                 0.018                0.389
                    36-50       0.014                 0.035                0.400
                    51-65       0.047                 0.047                1.000
                    66+         0.109                 0.095                1.147
Women   Elective    ≤35         0.013                 0.045                0.289
                    36-50       0.031                 0.057                0.544
                    51-65       0.060                 0.085                0.705
                    66+         0.103                 0.119                0.866

Note: All proportions are given to three decimal places for this example. The analysis used these figures to nine decimal places, which can be provided on request from the CQC surveys team at [email protected].

Likewise, if a considerably higher proportion of women admitted as emergency patients aged between 36 and 50 years from Trust B responded to the survey (Figure B3), then this group would be over-represented within the sample, compared with national representation of this group. Subsequently this group would have a greater influence over the final score. To counteract this, dividing the national proportion by the proportion for Trust B results in a weighting of less than one for this group.


Figure B3 Proportion and Weighting for Trust B

Sex     Admission   Age group   National proportion   Trust B proportion   Trust B weight (National/Trust B)
Men     Emergency   ≤35         0.013                 0.016                0.813
                    36-50       0.025                 0.029                0.862
                    51-65       0.068                 0.062                1.097
                    66+         0.188                 0.091                2.066
Women   Emergency   ≤35         0.027                 0.034                0.794
                    36-50       0.034                 0.075                0.453
                    51-65       0.064                 0.080                0.800
                    66+         0.197                 0.110                1.791
Men     Elective    ≤35         0.007                 0.016                0.438
                    36-50       0.014                 0.029                0.483
                    51-65       0.047                 0.062                0.758
                    66+         0.109                 0.097                1.124
Women   Elective    ≤35         0.013                 0.034                0.382
                    36-50       0.031                 0.075                0.413
                    51-65       0.060                 0.080                0.750
                    66+         0.103                 0.110                0.936

Note: All proportions are given to three decimal places for this example. The analysis used these figures to nine decimal places, which can be provided on request from the CQC surveys team at [email protected].

To prevent the possibility of excessive weight being given to respondents in an extremely under-represented group, the maximum value for any weight was set at five.

Calculating question scores

The trust score for each question displayed on the website was calculated by applying the weighting for each respondent to the scores allocated to each response. The responses given by each respondent were entered into a dataset using the 0-10 scale described in Appendix A. Each row corresponded to an individual respondent, and each column related to a survey question. For those questions that the respondent did not answer (or received a "not applicable" score for), the relevant cell remained empty. Alongside these were the weightings allocated to each respondent (Figure B4).

Figure B4 Scoring for the 'A&E Department' section, 2015 Inpatients survey, Trust B

Respondent   Q3   Q4   Weight
1            10   0    0.813
2            5    10   1.124
3            .    5    0.750

Respondents’ scores for each question were then multiplied individually by the relevant weighting, in order to obtain the numerators for the trust scores (Figure B5).


Figure B5 Numerators for the 'A&E' section, 2015 Inpatients survey, Trust B

Respondent   Q3      Q4       Weight
1            8.130   0.000    0.813
2            5.620   11.240   1.124
3            .       3.750    0.750

Obtaining the denominators for each domain score

A second dataset was then created. This contained a column for each question, grouped into domains, and again with each row corresponding to an individual respondent. A value of one was entered for the questions where a response had been given by the respondent, and all questions that had been left unanswered or allocated a scoring of "not applicable" were set to missing (Figure B6).

Figure B6 Values for non-missing responses, 'A&E' section, 2015 Inpatients survey, Trust B

Respondent   Q3   Q4   Weight
1            1    1    0.813
2            1    1    1.124
3            .    1    0.750

The denominators were calculated by multiplying each of the cells within the second dataset by the weighting allocated to each respondent. This resulted in a figure for each question that the respondent had answered (Figure B7). Again, the cells relating to the questions that the respondent did not answer (or received a 'not applicable' score for) remained set to missing.

Figure B7 Denominators for the 'A&E' section, 2015 Inpatients survey, Trust B

Respondent   Q3      Q4      Weight
1            0.813   0.813   0.813
2            1.124   1.124   1.124
3            .       0.750   0.750

The weighted mean score for each trust, for each question, was calculated by dividing the sum of the weighted scores for a question (i.e. the numerators) by the weighted sum of all eligible respondents to the question (i.e. the denominators) for each trust.


Using the example data for Trust B, we first calculated weighted mean scores for each of the two questions that contributed to the 'A&E' section of the questionnaire.

Q3: (8.130 + 5.620) / (0.813 + 1.124) = 7.099

Q4: (0.000 + 11.240 + 3.750) / (0.813 + 1.124 + 0.750) = 5.579
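The weighted mean calculation can be reproduced directly from the Figure B4 data. This is a minimal sketch, with the '.' (missing) cells represented as None:

```python
# Weighted mean question score: sum of weighted scores (numerators) divided
# by the weighted sum of eligible respondents (denominators).
# Data: Trust B example from Figure B4; None = question not answered.

rows = [  # (Q3 score, Q4 score, weight)
    (10,   0, 0.813),
    (5,   10, 1.124),
    (None, 5, 0.750),
]

def weighted_mean(scores_weights):
    num = sum(s * w for s, w in scores_weights if s is not None)
    den = sum(w for s, w in scores_weights if s is not None)
    return num / den

q3 = weighted_mean([(q3, w) for q3, _, w in rows])
assert round(q3, 3) == 7.099  # matches the worked example for Q3
```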

Calculating section scores

A simple arithmetic mean of each trust's question scores was then taken to give the score for each section. Continuing the example from above, Trust B's score for the 'A&E' section of the Inpatients survey would be calculated as: (7.099 + 5.579) / 2 = 6.339

Calculation of the expected ranges

Z statistics (or Z scores) are standardized scores derived from normally distributed data, where the value of the Z score translates directly to a p-value. That p-value in turn indicates the level of confidence with which one can say that a value is significantly different from the mean of the data (or a 'target' value). A standard Z score for a given item is calculated as:

    z_i = (y_i - μ_0) / s_i    (1)

where:

    s_i is the standard error of the trust score, calculated using the method in Appendix C
    y_i is the trust score
    μ_0 is the mean score for all trusts

Under this banding scheme, a trust with a Z score of Z < -1.96 is labeled as "Worse" (significantly below average; p < 0.025 that the trust score is below the national average), -1.96 < Z < 1.96 as "About the same", and Z > 1.96 as "Better" (significantly above average; p < 0.025 that the trust score is above the national average) than would be expected based on the national distribution of trust scores.

However, for measures where there is a high level of precision in the estimates (the survey indicators' sample sizes average around 400 to 500 per trust), the standard Z score may place a disproportionately high number of trusts in the significantly above/below average bands (because s_i is generally so small). This is compounded by the fact that not all of the factors that may affect a trust's score can be controlled for. For example, if trust scores are closely related to economic deprivation then there may be significant variation between trusts due to this factor, not necessarily due to factors within the trusts' control. In this situation, the data are said to be 'over-dispersed'.

That problem can be partially overcome by the use of an 'additive random effects model' to calculate the Z score (we refer to this modified Z score as the Z_D score). Under that model, we accept that there is natural variation between trust scores, and this variation is taken into account by adding it to the trust's local standard error in the denominator of (1). In effect, rather than comparing each trust simply to one national target value, we are comparing them to a national distribution.

The Z_D score for each question and section was calculated as the trust score minus the national mean score, divided by the square root of the sum of the variance of the trust score and the variance of the scores between trusts. This method of calculating a Z_D score differs from the standard method in that it recognizes that there is likely to be natural variation between trusts which one should expect and accept. Rather than comparing each trust to one point only (i.e. the national mean score), it compares each trust to a distribution of acceptable scores. This is achieved by adding some of the variance of the scores between trusts to the denominator. The steps taken to calculate Z_D scores are outlined below.

Winsorising Z-scores

The first step when calculating Z_D is to 'Winsorise' the standard Z scores from (1). Winsorising consists of shrinking in the extreme Z-scores to some selected percentile, using the following method:

1. Rank cases according to their naive Z-scores.
2. Identify Z_q and Z_(1-q), the 100q% most extreme top and bottom naive Z-scores. For this work, we used a value of q = 0.2.
3. Set the lowest 100q% of Z-scores to Z_q, and the highest 100q% of Z-scores to Z_(1-q). These are the Winsorised statistics.

This retains the same number of Z-scores but discounts the influence of outliers.

Estimation of over-dispersion

An over-dispersion factor, φ̂, is estimated for each indicator, which allows us to say whether the data for that indicator are over-dispersed or not:

    φ̂ = (1/I) Σ_{i=1}^{I} z_i²    (2)

where I is the sample size (number of trusts) and z_i is the Z score for the i-th trust given by (1). The Winsorised Z scores are used in estimating φ̂.
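The Winsorising step and the over-dispersion estimate can be sketched together. This is a simplified illustration, assuming a plain list of naive Z scores and q = 0.2 as above; the exact percentile handling in the CQC implementation may differ.

```python
# Winsorise naive Z scores by shrinking the top and bottom 100q% of scores
# to the q-th and (1-q)-th order statistics, then estimate the
# over-dispersion factor phi-hat as the mean of the squared Winsorised scores.

def winsorise(z, q=0.2):
    zs = sorted(z)
    n = len(zs)
    k = int(n * q)                  # number of scores shrunk at each tail
    lo, hi = zs[k], zs[n - k - 1]   # the q-th and (1-q)-th order statistics
    return [min(max(v, lo), hi) for v in z]

def phi_hat(z, q=0.2):
    zw = winsorise(z, q)
    return sum(v * v for v in zw) / len(zw)

z = [-4.0, -1.0, 0.0, 1.0, 5.0]
assert winsorise(z) == [-1.0, -1.0, 0.0, 1.0, 1.0]  # tails pulled in
assert phi_hat(z) == 0.8                            # (1 + 1 + 0 + 1 + 1) / 5
```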

An additive random effects model

If I·φ̂ is greater than (I - 1) then we need to estimate the expected variance between trusts. We take this as the standard deviation of the distribution of μ_i (trust means) for trusts which are on target; we give this value the symbol τ, which is estimated using the following formula:

    τ̂² = ( I·φ̂ - (I - 1) ) / ( Σ_i w_i - (Σ_i w_i²) / (Σ_i w_i) )    (3)


where w_i = 1/s_i² and φ̂ is from (2). Once τ̂² has been estimated, the Z_D score is calculated as:

    z_i^D = (y_i - μ_0) / √(s_i² + τ̂²)    (4)
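Equations (1) to (4) can be combined into a short computational sketch. This is illustrative only; for brevity it omits the Winsorisation of the naive Z scores described above, which the real calculation applies before estimating the over-dispersion factor.

```python
import math

# Z_D pipeline from equations (1)-(4): naive Z scores, over-dispersion
# factor phi-hat, between-trust variance tau-squared, and the final Z_D.
# Winsorisation of the naive Z scores is omitted in this sketch.

def z_d_scores(y, s, mu0):
    I = len(y)
    z = [(yi - mu0) / si for yi, si in zip(y, s)]   # (1)
    phi = sum(zi * zi for zi in z) / I              # (2)
    w = [1 / (si * si) for si in s]
    tau2 = 0.0
    if I * phi > I - 1:                             # data are over-dispersed
        tau2 = (I * phi - (I - 1)) / (sum(w) - sum(wi * wi for wi in w) / sum(w))  # (3)
    return [(yi - mu0) / math.sqrt(si * si + tau2) for yi, si in zip(y, s)]        # (4)

# Widening the denominator with tau-squared shrinks every score toward zero
naive = [(yi - 7.5) / si for yi, si in zip([7.0, 8.0, 6.0, 9.0], [0.2, 0.25, 0.3, 0.2])]
zd = z_d_scores([7.0, 8.0, 6.0, 9.0], [0.2, 0.25, 0.3, 0.2], 7.5)
assert all(abs(d) <= abs(n) for d, n in zip(zd, naive))
```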


Appendix C: Calculation of standard errors

In order to calculate statistical bandings from the data, it is necessary for CQC to have both trusts’ scores for each question and section and the associated standard error. Since each section is based on an aggregation of question mean scores that are based on question responses, a standard error needs to be calculated using an appropriate methodology.

For the patient experience surveys, Z scores are calculated for both question and section scores; a section score combines the relevant questions making up that section into one overall score, and its standard error uses the pooled variance of the question scores.

Assumptions and notation

The following notation will be used in formulae:

    X_ijk is the score for respondent j in trust i to question k
    Q is the number of questions within section d
    w_ij is the standardization weight calculated for respondent j in trust i
    Y_ik is the overall trust i score for question k
    Y_id is the overall score for section d for trust i

Associated with each respondent is a weight w_ij corresponding to how well the respondent's age/sex is represented in the survey compared with the population of interest.

Calculating mean scores

Given the notation described above, it follows that the overall score for trust i on question k is given as:

    Y_ik = Σ_j w_ij X_ijk / Σ_j w_ij

The overall score for section d for trust i is then the average of the trust-level question means within section d. This is given as:

    Y_id = (1/Q) Σ_{k=1}^{Q} Y_ik

Calculating standard errors Standard errors are calculated for both sections and questions.


The variance within trust i on question k is given by:

    σ̂²_ik = Σ_j w_ij (X_ijk - Y_ik)² / Σ_j w_ij

This assumes independence between respondents.
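The weighted mean and the weighted (biased) variance above can be sketched as follows, on illustrative data:

```python
# Weighted mean and weighted biased variance of respondents' scores for one
# question within one trust, as in the formulas above (dividing by the sum
# of weights, i.e. the biased estimator).

def weighted_mean(x, w):
    return sum(wi * xi for xi, wi in zip(x, w)) / sum(w)

def weighted_var(x, w):
    m = weighted_mean(x, w)
    return sum(wi * (xi - m) ** 2 for xi, wi in zip(x, w)) / sum(w)

x = [10, 5, 0]          # illustrative 0-10 question scores
w = [1.0, 1.0, 2.0]     # illustrative respondent weights
assert weighted_mean(x, w) == 3.75     # (10 + 5 + 0) / 4
assert weighted_var(x, w) == 17.1875
```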

For ease of calculation, and as the sample size is large, we have used the biased estimate for variance. The variance of the trust-level average question score is then given by:

    V_ik = Var(Y_ik) = Var( Σ_j w_ij X_ijk / Σ_j w_ij ) = Var( Σ_j w_ij X_ijk ) / (Σ_j w_ij)² = σ̂²_ik Σ_j w_ij² / (Σ_j w_ij)²

Covariances between pairs of questions (here, k and m) can be calculated in a similar way:

    COV_ik,im = Cov(Y_ik, Y_im) = σ̂_ikm Σ_j w_ij² / (Σ_j w_ij)²

where

    σ̂_ikm = Σ_j w_ij (X_ijk - Y_ik)(X_ijm - Y_im) / Σ_j w_ij

Note: w_ij is set to zero in cases where patient j in trust i did not answer both questions k and m.

The trust-level variance for the section score d for trust i is given by:


    V_id = Var(Y_id) = (1/Q²) ( Σ_{k=1}^{Q} V_ik + 2 Σ_{k=2}^{Q} Σ_{m=1}^{k-1} COV_ik,im )

The standard error of the section score is then:

    SE_id = √V_id
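The section-level variance and standard error can then be sketched as follows. This is an illustrative sketch assuming the per-question variances V_ik and the pairwise covariances COV_ik,im have already been computed; the variable names and example values are hypothetical.

```python
import math

# Section variance: V_id = (1/Q^2) * (sum of question variances
#                                     + 2 * sum of pairwise covariances),
# and SE_id = sqrt(V_id), as in the formulas above.

def section_se(V, COV):
    Q = len(V)
    var_sum = sum(V)
    # sum over each unordered pair (k, m) with m < k, counted once
    cov_sum = sum(COV[(k, m)] for k in range(1, Q) for m in range(k))
    V_id = (var_sum + 2 * cov_sum) / (Q * Q)
    return math.sqrt(V_id)

V = [0.04, 0.09]        # illustrative question variances V_ik
COV = {(1, 0): 0.01}    # illustrative covariance between the two questions
# V_id = (0.04 + 0.09 + 2*0.01) / 4 = 0.0375 ; SE_id = sqrt(0.0375)
assert abs(section_se(V, COV) - math.sqrt(0.0375)) < 1e-12
```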


DATA CLEANING INSTRUCTION MANUAL FOR THE NHS ADULT INPATIENT SURVEY 2015

THE CO-ORDINATION CENTRE FOR THE NHS PATIENT SURVEY PROGRAMME

Last updated: 21 October 2015


Contacts

The Co-ordination Centre for the NHS Patient Survey Programme Picker Institute Europe Buxton Court 3 West Way Oxford OX2 0BJ Tel: 01865 208127 Fax: 01865 208101

E-mail: [email protected] Website: www.nhssurveys.org

Authors The Patient Co-ordination Centre

Updates Before using this document, please check that you have the latest version, as small amendments are made from time to time (the date of the last update is on the front page). In the very unlikely event that there are any major changes, we will e-mail all trust contacts and contractors directly to inform them of the change.

This document is available from the Co-ordination Centre website at:

http://nhssurveys.org/survey/1644

Questions and comments If you have any questions or concerns regarding this document, or if you have any specific queries regarding the submission of data, please contact the Co-ordination Centre using the details provided at the top of this page.


Contents

1 Introduction .................................................................................................................... 1

1.1 Scope of this cleaning guide ....................................................................................... 1

1.2 Definition of key terms ................................................................................................ 1

2 Entering and coding data prior to submission ................................................................ 3

3 Editing/cleaning data after submission........................................................................... 4

3.1 Approach and rationale ............................................................................................... 4

3.2 Dealing with filtered questions .................................................................................... 4

3.3 Dealing with multiple response questions ................................................................... 6

3.4 Dealing with demographics ....................................................................................... 11

3.5 Usability and eligibility ............................................................................................... 11

3.6 Missing responses .................................................................................................... 12

3.7 Non-specific responses ............................................................................................. 12

Appendix A: Example of cleaning ...................................................................................... 13

Appendix B: Non-specific responses ................................................................................. 15

List of Figures Figure 1 – Cleaning instructions for filtered questions .............................................................. 5 Figure 2 – Example ‘raw’/‘uncleaned’ data ................................................................................ 13 Figure 3 – Example cleaned data ............................................................................................... 14


20151028 IP15 DataCleaning Instructions Final [Picker Institute Europe. All rights reserved 2015.] Page 1

1 Introduction

At the end of fieldwork for the NHS Adult Inpatient Survey 2015, participating trusts and contractors will be required to submit data to the Co-ordination Centre in a raw ('uncleaned') format. Once the Co-ordination Centre has received data from all participating trusts, the data must be cleaned. To ensure that the cleaning process is comparable across NHS trusts, data for all trusts in the survey are collated and cleaning is carried out on the full collated dataset.

This document provides a description and specification of the processes that will be used by the Co-ordination Centre to clean and standardise data submitted by contractors and trusts as part of the NHS Adult Inpatient Survey 2015. By following the instructions in this document, it should be possible to recreate this cleaning process.

If you have any comments or queries regarding this document please contact the Co-ordination Centre on 01865 208127, or e-mail us at [email protected].

1.1 Scope of this cleaning guide

For the 2015 inpatient survey, all trusts have to submit the full 82 questions to the Co-ordination Centre; all cleaning undertaken by the Co-ordination Centre will include only this data.

1.2 Definition of key terms

Definitions of terms commonly used in this document, as they apply to the 2015 inpatient survey, are as follows:

Raw/uncleaned data: 'Raw' or 'uncleaned' data is data that has been entered verbatim from completed questionnaires without any editing taking place to remove contradictory or inappropriate responses; thus, all response boxes crossed on the questionnaire should be included in the data entry spreadsheet¹ (see Section 11 in the contractor instruction manual or Section 15 in the in-house trust instruction manual for details on data entry²). The requirement for raw/uncleaned data does not, however, preclude the checking of data for errors resulting from problems with data entry or similar. Ensuring high data quality is paramount, and errors resulting from data entry problems can and should be corrected by checking against the appropriate completed questionnaire.

Data cleaning: The Co-ordination Centre uses the term 'data cleaning' to refer to all editing processes undertaken upon survey data once the survey has been completed and the data has been entered and collated.

Routing questions: These are items on the questionnaire which instruct respondents either to continue on to the next question or to skip past irrelevant questions, depending on their response to the routing question. For the 2015 inpatient survey, the routing questions in the questionnaire are Q1, Q2, Q12, Q40, Q43, Q48, Q53, Q56, Q57, Q60 and Q76.

¹ Except where: a) multiple responses have been crossed - set these to missing (the exception to this is any question which asks respondents to 'Cross ALL boxes that apply', such as Q76 and Q77, where respondents may select more than one response option); b) year of birth has been entered in an incorrect format - if the patient's intended response is unambiguous from the questionnaire, then enter this. See Section 2 for more details on how data should be entered and coded.

² The contractor instruction manual was sent directly to all approved contractors by the Survey Co-ordination Centre. The in-house instruction manual can be found here: http://nhssurveys.org/survey/1600

Filtered questions: Items on the questionnaire that are not intended to be answered by all

respondents are referred to as filtered questions. Whether individual respondents are expected to

answer filtered questions depends on their responses to preceding routing questions. For the

2015 inpatient survey, the filtered questions in the core questionnaire are Q2 – Q4, Q5 – Q8, Q13,

Q41, Q44 – Q50[1], Q54 – Q55, Q57 – Q58, Q61 – Q63 and Q77.

Non-filtered questions: These are items in the questionnaire which are not subject to any filtering

and which should therefore be answered by all respondents. For the 2015 national inpatient

survey, the non-filtered questions are Q1, Q9 – Q12, Q14 – Q40, Q42 – Q43, Q51 – Q53, Q56,

Q59 – Q60, Q64 – Q76 and Q78 – Q82.

Out-of-range data: This refers to instances where data within a variable has a value that is not

permissible. For categorical data – as in the case of the majority of variables in this survey – this

could be, for example, a value of ‘3’ being entered for a variable that has only two response

options (1 or 2). For scalar data – e.g. year of birth – data is considered to be out-of-range if it

specifies a value that is not possible (for instance, year of birth as 983 or 2983). Out-of-range

responses entered into the dataset should not be automatically (e.g. algorithmically) removed prior

to submitting the data to the Co-ordination Centre (see Section 2).

Non-specific responses: This is a loose term for response options that can be considered as not

being applicable to the respondent in terms of directly answering the specific question to which

they are linked. Most commonly, these are responses such as “Don’t know / can’t remember”,

which indicate a failure to recall the issue in question. Likewise, responses that indicate the

question is not applicable to the respondent are considered ‘non-specific’ – for example, responses

such as “I did not have any food” or “I did not use any bathrooms”. A full list of such responses for

the 2015 inpatient survey can be found in Appendix B.

1 The range Q44-Q50 includes an additional filter question within it - Q48.


2 Entering and coding data prior to submission

For the 2015 inpatient survey, trusts and contractors are required to submit raw (‘uncleaned’) data

to the Co-ordination Centre. For clarification, raw data is created as follows:

i) All responses should be entered into the dataset, regardless of whether or not the

respondent was meant to respond to the question (e.g. where patients answer questions that

they have been directed to skip past, these responses should still be entered).

ii) Where a respondent has selected more than one response category on a question, this

question should be set to ‘missing’ for that person in the data (i.e. left blank, or coded as a

full stop (.)). The exception to this is for the ‘multiple response’ questions (e.g. Q76 and

Q77), where respondents may select more than one response option (see Section 3.3 below

for details about how to enter responses to these types of questions).

iii) Where a respondent has crossed out a response, this should not be entered in the data (the

response should be left blank, or coded as a full stop (.)). Where a respondent has crossed

out a response and instead selected a second response option, the second choice should be

entered into the data.

iv) Where a respondent has given their response inconsistently with the formatting of the

questionnaire but where their intended response is nonetheless unambiguous on inspection

of the completed questionnaire, then the respondent’s intended response should be entered.

For example, where a patient has written their full date of birth in the boxes for Q79 (What was your year of birth?), but written their year of birth at the side of this, then the respondent's year of birth should be entered.

v) For the year of birth question (Q79), unrealistic responses should still be entered except

following rule iv) above. For example, if a respondent enters ‘2015’ in the year of birth box,

this should still be entered unless the respondent has unambiguously indicated their actual

year of birth to the side.

vi) Once the data has been entered, no responses should be removed or changed in any way

except where responses are known to have been entered incorrectly or where inspection of

the questionnaire indicates that the patient’s intended response has not been captured. This

includes ‘out-of-range’ responses, which must not be automatically removed from the

dataset. Responses in the dataset should only be changed before submission to the Co-

ordination Centre where they are found to have been entered inconsistently with the

respondent’s intended response.
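Rules ii) and iii) above can be sketched as a small encoding routine. This is an illustrative Python sketch only; the function name and the "list of crossed boxes" data shape are assumptions, not part of the survey specification (rule iii means a crossed-out first choice never reaches the encoder, so only the second choice appears in the list):

```python
def encode_single_response(crossed_boxes):
    """Encode one single-response question per rules ii) and iii).

    crossed_boxes: box numbers left crossed on the questionnaire
    (crossed-out responses are excluded before this point).
    Returns the box number, or None (entered as blank / '.') when
    no box, or more than one box, is crossed.
    """
    if len(crossed_boxes) == 1:
        return crossed_boxes[0]
    return None  # rule ii): multiple responses are set to missing
```

For example, a single-response question with boxes 1 and 3 both crossed is entered as missing, while a crossed-out box replaced by a second choice is entered as that second choice.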


3 Editing/cleaning data after submission

3.1 Approach and rationale

The aim of the Co-ordination Centre in cleaning the data submitted to us is to ensure an optimal

balance between data quality and completeness. Thus, we seek to remove responses that are

known to be erroneous or inappropriate, but do this in a relatively permissive way so as to enable

as many responses as possible to contribute to the overall survey results.

3.2 Dealing with filtered questions

Some of the questions included in the survey are only relevant to a subset of respondents, and in

these cases filter instructions are included in the questionnaire to route respondents past questions

that are not applicable to them. For example, people who select “No” to Q53 (“On the day you left

hospital, was your discharge delayed for any reason?”) are instructed to skip all further questions

on delayed discharge (e.g. Q54 and Q55).

It is necessary to clean the data to remove responses where filter instructions have been

incorrectly followed. In such cases, participants’ responses to questions that were not relevant to

them are deleted from the dataset. Responses are only deleted where respondents have

answered filtered questions despite selecting an earlier response on a routing question instructing

them to skip these questions. For example, if a respondent selects “No” to Q53 (i.e. their discharge

was not delayed), but then answers the two subsequent questions about delayed discharge.

Responses to filtered questions are not removed, however, where the response to the routing

question is missing. For example, Q44-Q50 are applicable to those who had an operation or

procedure and are filtered by the response to Q43 (e.g. they are answered if Q43=1 – the

respondent did have an operation or procedure). If a respondent does not answer Q43, or if the

response to Q43 is missing for any reason, then responses to Q44-Q50 should not be removed.

Figure 1, below, shows a summary of all routing questions, and the filtered questions they relate to,

that are included in the 2015 inpatient survey. Please note that these instructions should be

followed sequentially in order to be consistent with the procedures applied by the Co-ordination

Centre.


Figure 1 – Cleaning instructions for filtered questions

ROUTING QUESTION AND RESPONSE VALUES | FILTERED QUESTIONS TO DELETE

if Q1 = 2 then delete responses to: Q2 – Q4

if Q2 = 1 then delete responses to: Q5 – Q8

if Q2 = 2 then delete responses to: Q3 – Q4

if Q12 = 1 OR 4 then delete responses to: Q13

if Q40 = 2 then delete responses to: Q41

if Q43 = 2 then delete responses to: Q44 – Q50

if Q48 = 2 then delete responses to: Q49

if Q53 = 2 then delete responses to: Q54 – Q55

if Q56 = 5 then delete responses to: Q57 – Q58

if Q56 = 3 OR 4 then delete responses to: Q57

if Q57 = 1,2,3, OR 4 then delete responses to: Q58

if Q60 = 5 then delete responses to: Q61 – Q63

if Q76 = 7 then see the specific instructions below (Section 3.3)

Please note that the instructions in the table above should be followed sequentially, in the order shown.

Please note that Q1 (which asks respondents whether their hospital stay was planned in advance

or an emergency) should not be considered a routing question in the traditional sense. For

example, responses to Q5-Q8 (the questions on planned admissions) must not be automatically

removed if Q1=1 (i.e. the respondent indicates their hospital stay was an emergency or urgent).

This is due to the fact that although patients responding “emergency or urgent” to Q1 are

identifying themselves as emergency admissions, they may subsequently report not going to an

Emergency Department as part of their admission (i.e. Q2=2) and in such cases will be instructed

in the questionnaire to go to Q5. Thus not all respondents selecting 1 (‘Emergency or urgent’) for

Q1 will be expected to skip Q5-Q8.

A worked example of the cleaning process for removing unexpected responses to filtered

questions is included in Appendix A.
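As an illustration of how the rules in Figure 1 might be applied in sequence, here is a hypothetical Python sketch; the record layout (a dict of question names to values, with None for a blank cell) is an assumption for illustration only:

```python
# Hypothetical sketch of the sequential cleaning pass in Figure 1.
# A record maps question names to entered values; None stands for a
# blank cell (or a full stop in the submitted file).
FILTER_RULES = [
    # (routing question, triggering responses, filtered questions to delete)
    ("Q1",  {2},          ["Q2", "Q3", "Q4"]),
    ("Q2",  {1},          ["Q5", "Q6", "Q7", "Q8"]),
    ("Q2",  {2},          ["Q3", "Q4"]),
    ("Q12", {1, 4},       ["Q13"]),
    ("Q40", {2},          ["Q41"]),
    ("Q43", {2},          ["Q44", "Q45", "Q46", "Q47", "Q48", "Q49", "Q50"]),
    ("Q48", {2},          ["Q49"]),
    ("Q53", {2},          ["Q54", "Q55"]),
    ("Q56", {5},          ["Q57", "Q58"]),
    ("Q56", {3, 4},       ["Q57"]),
    ("Q57", {1, 2, 3, 4}, ["Q58"]),
    ("Q60", {5},          ["Q61", "Q62", "Q63"]),
    # Q76 = 7 is handled separately (see Section 3.3) and is omitted here.
]

def clean_filtered(record):
    """Delete answers to filtered questions that the routing response says
    should have been skipped; a missing routing response deletes nothing."""
    for routing, triggers, filtered in FILTER_RULES:
        if record.get(routing) in triggers:
            for q in filtered:
                record[q] = None
    return record
```

Because the rules run in order, a respondent with Q1 = 2 has their Q2 answer blanked by the first rule, so the "Q2 = 1" rule can no longer fire for them; applying the rules in a different order could delete a different set of responses.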


3.3 Dealing with multiple response questions

In the data file, most survey questions occupy a single column each. However, there are some exceptions to this rule. For the multiple response questions (Q76 and Q77), which carry the instruction 'Cross all that apply', each response option is treated as a separate question and occupies its own column.

Please note that the cleaning process for these questions has been updated for the 2015 inpatient survey and is not the same as for previous patient surveys.

However, the last response to each of these questions is an exclusive option – respondents should

not have selected Q76_7 (‘I do not have a long-standing condition’) as well as any of Q76_1-6. If

this is the case, the cleaning of Q76 takes into account the response to Q77 when deciding which

options to retain, as detailed below.

- When a respondent has crossed any of options Q76_1-6 as well as Q76_7:

Example Q76. Do you have any of the following long-standing conditions? (Cross ALL boxes that apply)

1 Deafness or severe hearing impairment

2 Blindness or partially sighted

3 A long-standing physical condition

4 A learning disability

5 A mental health condition

6 A long-standing illness, such as cancer, HIV, diabetes, chronic heart disease or

epilepsy

7 No, I do not have a long-standing condition

Responses to each part of this question are coded 1 if the box is crossed, and 0 if the box is not crossed[1].

Q76 takes up seven columns in the data file, labelled as follows:

Column headings Q76_1 Q76_2 Q76_3 Q76_4 Q76_5 Q76_6 Q76_7

Codings for this example 1 0 0 0 1 0 0


o If they have also crossed any of Q77_1-7 then Q76_7 is recoded to 0 (and options Q76_1-6 remain as selected by the respondent), because their response to question 77 suggests they do in fact have a long-standing condition.

o If they have not crossed any of Q77_1-7 and have not crossed Q77_8 then

Q76_1-6 are recoded to 0 (and option 76_7 remains crossed), because their lack of

response to question 77 suggests that they have correctly followed the routing from

Q76_7, and therefore do not have a long-standing condition.

o If they have not crossed any of Q77_1-7 and have crossed Q77_8 then Q76_1-7

and Q77_1-8 (i.e. all responses to Q76 & Q77) are removed (set to blank or full

stop), because it is unclear which options are most likely to be correct for the

respondent.

Similarly, a respondent should not have selected Q77_8 (‘No difficulty with any of these’) as well as

any of Q77_1-7, and in these cases the cleaning of Q77 takes into account responses to Q76, as

detailed below.

- When a respondent has crossed any of options Q77_1-7 as well as Q77_8:

o If they have also crossed any of Q76_1-6 then Q77_8 is recoded to 0 (and options Q77_1-7 remain as selected by the respondent), because their response to question 76 indicates that they do have a long-standing condition, and therefore their question 77 responses about the difficulties caused are likely to be correct.

o If they have not crossed any of Q76_1-6 then all responses to Q77_1-8 are

removed (set to blank or full stop), as the respondent should not have completed

this filtered question.

After the above cleaning has taken place:

- If only Q76_7 is selected, any responses to Q77_1-8 are removed (set to blank or full stop),

as respondents who do not have a long-standing condition should not have answered Q77.

- If any of Q76_1-6 are selected and no response has been given to Q77, then Q77_1-8 are

set to missing (blank or a full stop), as Q77 should have been answered by all respondents

who selected at least one of Q76_1-6.

- If no responses have been given to Q76, Q76_1-7 are set to missing and Q77_1-8 are

removed (set to blank or full stop), as Q77 is a filtered question which should only have

been answered by respondents selecting at least one of Q76_1-6.
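The branching logic above can be summarised in a short routine. This is an illustrative Python sketch only: the record layout is an assumption, and the remaining "after cleaning" missing-coding steps are omitted for brevity:

```python
def clean_q76_q77(rec):
    """Sketch of the Q76/Q77 exclusive-option cleaning described above.
    rec maps column names ('Q76_1'..'Q76_7', 'Q77_1'..'Q77_8') to
    1 (crossed), 0 (not crossed) or None (missing)."""
    q76 = ["Q76_%d" % i for i in range(1, 8)]
    q77 = ["Q77_%d" % i for i in range(1, 9)]
    any76_16 = any(rec.get(c) == 1 for c in q76[:6])
    any77_17 = any(rec.get(c) == 1 for c in q77[:7])

    # Q76_7 crossed together with any of Q76_1-6
    if rec.get("Q76_7") == 1 and any76_16:
        if any77_17:
            rec["Q76_7"] = 0           # Q77 answers imply a condition exists
        elif rec.get("Q77_8") != 1:
            for c in q76[:6]:
                rec[c] = 0             # blank Q77 implies the routing was followed
        else:
            for c in q76 + q77:
                rec[c] = None          # contradictory; remove everything
            return rec

    # Q77_8 crossed together with any of Q77_1-7
    if rec.get("Q77_8") == 1 and any77_17:
        if any(rec.get(c) == 1 for c in q76[:6]):
            rec["Q77_8"] = 0           # Q76 confirms a long-standing condition
        else:
            for c in q77:
                rec[c] = None          # Q77 should not have been completed

    # Respondents with no long-standing condition should not answer Q77
    if rec.get("Q76_7") == 1 and not any(rec.get(c) == 1 for c in q76[:6]):
        for c in q77:
            rec[c] = None
    return rec
```

Run against the worked examples that follow, this sketch reproduces the before/after codings shown for Examples 1 to 3.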

Several examples of the cleaning undertaken for questions 76 and 77 are provided below.


Example 1

Q76. Do you have any of the following long-standing conditions? (Cross ALL boxes that apply)
1 Deafness or severe hearing impairment
2 Blindness or partially sighted
3 A long-standing physical condition
4 A learning disability
5 A mental health condition
6 A long-standing illness, such as cancer, HIV, diabetes, chronic heart disease, or epilepsy
7 No, I do not have a long-standing condition

Q77. Does this condition(s) cause you difficulty with any of the following? (Cross ALL boxes that apply)
1 Everyday activities that people your age can usually do
2 At work, in education or training
3 Access to buildings, streets or vehicles
4 Reading or writing
5 People's attitudes to you because of your condition
6 Communicating, mixing with others, or socialising
7 Any other activity
8 No difficulty with any of these

BEFORE CLEANING:

Q76 is coded as follows:

Column headings Q76_1 Q76_2 Q76_3 Q76_4 Q76_5 Q76_6 Q76_7

Coding for this example 1 0 0 0 1 0 1

Q77 is coded as follows:

Column headings Q77_1 Q77_2 Q77_3 Q77_4 Q77_5 Q77_6 Q77_7 Q77_8

Coding for this example 0 1 0 0 0 1 0 1

AFTER CLEANING:

Q76 is coded as follows:

Column headings Q76_1 Q76_2 Q76_3 Q76_4 Q76_5 Q76_6 Q76_7

Coding for this example 1 0 0 0 1 0 0

Q77 is coded as follows:

Column headings Q77_1 Q77_2 Q77_3 Q77_4 Q77_5 Q77_6 Q77_7 Q77_8

Coding for this example 0 1 0 0 0 1 0 0

When the data is cleaned, the responses to Q76_7 and Q77_8 are re-coded as 0 because the respondent has also selected some of Q76_1-6 and some of Q77_1-7.


Example 2

Q76. Do you have any of the following long-standing conditions? (Cross ALL boxes that apply)
1 Deafness or severe hearing impairment
2 Blindness or partially sighted
3 A long-standing physical condition
4 A learning disability
5 A mental health condition
6 A long-standing illness, such as cancer, HIV, diabetes, chronic heart disease, or epilepsy
7 No, I do not have a long-standing condition

Q77. Does this condition(s) cause you difficulty with any of the following? (Cross ALL boxes that apply)
1 Everyday activities that people your age can usually do
2 At work, in education or training
3 Access to buildings, streets or vehicles
4 Reading or writing
5 People's attitudes to you because of your condition
6 Communicating, mixing with others, or socialising
7 Any other activity
8 No difficulty with any of these

BEFORE CLEANING:

Q76 is coded as follows:

Column headings Q76_1 Q76_2 Q76_3 Q76_4 Q76_5 Q76_6 Q76_7

Coding for this example 1 0 0 0 1 0 1

Q77 is coded as follows:

Column headings Q77_1 Q77_2 Q77_3 Q77_4 Q77_5 Q77_6 Q77_7 Q77_8

Coding for this example 0 0 0 0 0 0 0 1

AFTER CLEANING:

Q76 is coded as follows:

Column headings Q76_1 Q76_2 Q76_3 Q76_4 Q76_5 Q76_6 Q76_7

Coding for this example . . . . . . .

Q77 is coded as follows:

Column headings Q77_1 Q77_2 Q77_3 Q77_4 Q77_5 Q77_6 Q77_7 Q77_8

Coding for this example . . . . . . . .

When the data is cleaned, the responses to Q76_1-7 and Q77_1-8 (i.e. all responses to Q76 & Q77) are removed (set to blank or full stop), as the respondent selected some of Q76_1-6 as well as Q76_7, and the only option selected in Q77 was Q77_8, which means it is unclear which response options are most likely to be correct for the respondent.


Example 3

Q76. Do you have any of the following long-standing conditions? (Cross ALL boxes that apply)
1 Deafness or severe hearing impairment
2 Blindness or partially sighted
3 A long-standing physical condition
4 A learning disability
5 A mental health condition
6 A long-standing illness, such as cancer, HIV, diabetes, chronic heart disease, or epilepsy
7 No, I do not have a long-standing condition

Q77. Does this condition(s) cause you difficulty with any of the following? (Cross ALL boxes that apply)
1 Everyday activities that people your age can usually do
2 At work, in education or training
3 Access to buildings, streets or vehicles
4 Reading or writing
5 People's attitudes to you because of your condition
6 Communicating, mixing with others, or socialising
7 Any other activity
8 No difficulty with any of these

BEFORE CLEANING:

Q76 is coded as follows:

Column headings Q76_1 Q76_2 Q76_3 Q76_4 Q76_5 Q76_6 Q76_7

Coding for this example 0 0 0 0 0 0 1

Q77 is coded as follows:

Column headings Q77_1 Q77_2 Q77_3 Q77_4 Q77_5 Q77_6 Q77_7 Q77_8

Coding for this example 0 1 0 0 1 0 0 1

AFTER CLEANING:

Q76 is coded as follows:

Column headings Q76_1 Q76_2 Q76_3 Q76_4 Q76_5 Q76_6 Q76_7

Coding for this example 0 0 0 0 0 0 1

Q77 is coded as follows:

Column headings Q77_1 Q77_2 Q77_3 Q77_4 Q77_5 Q77_6 Q77_7 Q77_8

Coding for this example . . . . . . . .

When the data is cleaned, the responses to Q77 are removed (set to blank or full stop) as the

respondent indicated they do not have a long-standing condition and therefore should not have

completed Q77.


3.4 Dealing with demographics

Basic demographic information, including age, sex, and ethnicity of patients is included in the

sample section of the data, but the ‘About You’ section at the end of the questionnaire also asks

respondents to provide this information. In a minority of cases, the information provided from the

sample frame does not correspond with that provided by the respondent themselves – for example,

the sample data may identify an individual as male only for them to report being female (e.g.

Q78=2).

Because of this, and because questions about demographics tend to produce relatively high item

non-response rates, it is not appropriate to rely on either source of data alone for any kind of sub-group analyses (for example, if you wanted to examine the responses to a particular question by age or ethnic group). Where responses to demographic questions are present, it is assumed these are

more likely to be accurate than sample frame information (since it is assumed that respondents are

best placed to know their own sex, age, and ethnic group). Where responses to demographic

questions are missing, however, sample data are used in their place[1]. To do this, we first copy all

valid responses to survey demographic questions into a new variable. Where responses are

missing we then copy in the relevant sample information (note that for a very small number of

patients demographic information may be missing in both the sample and response sections; in

such cases data must necessarily be left missing in the new variable).
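A minimal sketch of this coalescing step, assuming missing values are represented as None (the function and argument names are illustrative):

```python
def merged_demographic(survey_value, sample_value):
    """Coalesce one demographic variable: prefer the respondent's own
    answer, fall back to the sample-frame value, else stay missing."""
    if survey_value is not None:
        return survey_value
    return sample_value  # may itself be None if both sources are missing
```

Note that for response-rate calculations only the sample information is used, as explained in footnote 1.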

Certain demographic variables require special consideration during data cleaning:

Age (Q79)

A common error when completing the year of birth question is for respondents to accidentally write

in the current year. Such responses will be set to missing during cleaning. Out-of-range responses

will also be set to missing[2]. For the 2015 national inpatient survey, out-of-range responses are defined as Q79 ≤ 1895 or Q79 ≥ 2015.
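As a sketch, the year-of-birth rule could be applied as follows (illustrative Python; None denotes a missing value, and the survey year 2015 covers the "current year written in by mistake" case):

```python
SURVEY_YEAR = 2015  # year of the survey

def clean_year_of_birth(q79):
    """Set Q79 to missing when it is out of range or equal to the
    survey year, per the rules above."""
    if q79 is None:
        return None
    if q79 <= 1895 or q79 >= SURVEY_YEAR:
        return None  # out-of-range or accidental current-year response
    return q79
```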

3.5 Usability and eligibility

Sometimes questionnaires are returned with only a very small number of questions completed.

For the 2015 inpatient survey, questionnaires where fewer than five questions have been

answered are considered ‘unusable’. In such cases, the responses to the few questions that have

been answered will be deleted and the outcome codes will be changed from a code of 1 (‘returned

usable questionnaire’) to a code of 6 (‘questionnaire not returned’). Please note that the number of

responses per questionnaire is counted after all other cleaning[3]. This process should only affect a

very limited number of cases, and so should not have a significant impact on response rates.

1 The exception to this is when response rates are calculated. Because response rates vary between

demographic groups (for instance young males are less likely to respond to the survey than other individuals), using response and sample data to calculate response rates would create a systematic source of bias in that we are only able to amend information for the respondents. Therefore, only the sample information should be used to calculate response rates by demographic groups.

2 The majority of out-of-range responses present in data relating to year of birth questions result from errors

in data entry (for example, not keying one of the digits – so ‘1985’ may become 985, 185, 195, or 198). In such cases it is important that the responses be checked against the completed questionnaire forms, and data corrected if necessary, prior to submission of data to the Co-ordination Centre.

3 Please note that the multiple choice questions, Q76 and Q77 are only counted once. So for example, even

if Q76_1 and Q76_4 are crossed, this would count as only one response for the purpose of determining if a questionnaire is usable.
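A possible sketch of the usability count, reflecting footnote 3's rule that the multi-part questions Q76 and Q77 count at most once. The column naming convention ('Q76_3' for part 3 of Q76) follows Section 3.3; the function names are assumptions:

```python
def count_responses(record):
    """Count answered questions; multi-part columns (e.g. 'Q76_3')
    count once per question, and only when a box is crossed (value 1)."""
    answered = set()
    for name, value in record.items():
        if "_" in name:  # multi-part column: count only crossed boxes
            if value == 1:
                answered.add(name.split("_")[0])  # 'Q76_3' -> 'Q76'
        elif value is not None:
            answered.add(name)
    return len(answered)

def usable(record):
    """Fewer than five answered questions makes a questionnaire
    'unusable' (outcome 1 would be recoded to 6)."""
    return count_responses(record) >= 5
```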


Outcome codes for respondents will also be changed if respondents are believed to be under the

age of sixteen and therefore ineligible for participation. Since the sample files for the survey are

checked by the Co-ordination Centre prior to mailing, this is unlikely to affect more than a handful

of cases throughout the survey, as patients coded as being aged under 16 will be identified and

removed from the sample before the start of the survey. However, in situations where sample

information on a respondent’s year of birth is missing in the final data file, and their response to

Q79 indicates that they are under 16 (specifically, if Q79>1999) then the outcome code for that

patient should be recoded from 1 (‘returned completed questionnaire’) to 5 (‘ineligible for

participation in the survey’). If data on an individual’s year of birth is missing from the sampling

frame, but their response to Q79 indicates the respondent is over 16, outcome codes should

remain as 1. If sample information indicates a patient is aged 16 or over, but this is contradicted by

the patient’s response, then the patient’s survey outcome should also remain as 1. This is to avoid

removing legitimate responses because of an overly conservative approach to assessing eligibility;

in other words, where the patient’s age is uncertain (because sample and response information

contradict each other, and in different instances either of these may be accurate or inaccurate) the

benefit of the doubt is given in any assessment of eligibility.
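The eligibility decision above can be sketched as follows (illustrative Python; the argument names are assumptions, and None denotes a missing sample value):

```python
def eligibility_outcome(outcome, sample_yob, q79):
    """Recode outcome 1 -> 5 only when the sample year of birth is
    missing AND the respondent's Q79 implies they are under 16
    (Q79 > 1999 for the 2015 survey). Any contradiction between
    sample and response keeps outcome 1 (benefit of the doubt)."""
    if outcome == 1 and sample_yob is None and q79 is not None and q79 > 1999:
        return 5  # ineligible for participation in the survey
    return outcome
```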

3.6 Missing responses

It is useful to be able to see the number of respondents who have missed each question for

whatever reason. Responses are considered to be missing when a respondent is expected to

answer a question but no response is present. For non-filtered questions, responses are expected

from all respondents – thus any instance of missing data constitutes a missing response. For

filtered questions, only respondents who have answered a previous routing question instructing

them to go on to that filtered question or set of filtered questions are expected to give answers.

Where respondents to the survey have missed a routing question, they are not expected to answer

subsequent ‘filtered’ questions; thus only where respondents were explicitly instructed to answer

filtered questions should such blank cells be coded as missing responses.

The Co-ordination Centre codes missing responses in the data with the value 999[1]. For results to

be consistent with those produced by the Co-ordination Centre, missing responses should be

presented, but should not be included in the base number of respondents for percentages.

The Co-ordination Centre will suppress results at both national and trust level for questions that

have fewer than 30 respondents.
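A sketch of how percentages might be computed consistently with these rules, excluding the missing code from the base and suppressing small bases (illustrative Python; the handling of non-specific responses, covered in Section 3.7, is a separate step):

```python
MISSING = 999  # code used by the Co-ordination Centre for missing responses

def option_percentages(values, min_base=30):
    """Percentage per response option, excluding missing (999) from the
    base; returns None (suppressed) when the base is below min_base."""
    valid = [v for v in values if v != MISSING]
    if len(valid) < min_base:
        return None  # suppressed at both national and trust level
    return {opt: 100.0 * valid.count(opt) / len(valid)
            for opt in sorted(set(valid))}
```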

3.7 Non-specific responses

As well as excluding missing responses from results, the Co-ordination Centre also removes non-

specific responses from base numbers for percentages. The rationale for this is to facilitate easy

comparison between institutions by presenting only results from those patients who felt able to give

an evaluative response to questions. For Q72, where multiple numbers have been selected (i.e. multicoded) or a non-integer response has been given (i.e. the respondent circled between two numbers), the response should be coded as 98. For a full listing of 'non-specific' responses in the 2015 national inpatient survey, please see

Appendix B.

1 This is an arbitrary value chosen because it is ‘out-of-range’ for all other questions on the survey.
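A sketch of how non-specific responses could be dropped from the base, using a few of the codes listed in Appendix B (the dictionary shown is an illustrative subset, not the full list):

```python
NON_SPECIFIC = {  # illustrative subset of Appendix B
    "Q3": {5},
    "Q4": {4},
    "Q5": {4},
    "Q8": {4},
}

def scoring_base(question, values):
    """Drop missing (999) and the question's non-specific options from
    the base used for percentages."""
    drop = NON_SPECIFIC.get(question, set()) | {999}
    return [v for v in values if v not in drop]
```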


Appendix A: Example of cleaning

Incorrectly followed routing

Figure 2, below, shows hypothetical raw/uncleaned data for eight sample members, five of whom

have responded to the survey (Outcome = 1).

Figure 2 – Example ‘raw’/‘uncleaned’ data

Column key:
Record – Patient Record Number
Outcome – Outcome of sending questionnaire (N)
Q1 – Was your most recent hospital stay planned in advance or an emergency?
Q2 – When you arrived at the hospital, did you go to the A&E Department (the Emergency Department / Casualty / Medical or Surgical Admissions Unit)?
Q3 – While you were in the A&E Department, how much information about your condition or treatment was given to you?
Q4 – Were you given enough privacy when being examined or treated in the A&E Department?

Record Outcome Q1 Q2 Q3 Q4

IP15...0001 6

IP15...0002 1 2 . . .

IP15...0003 1 1 1 1 2

IP15...0004 4

IP15...0005 1 2 2 . .

IP15...0006 6

IP15...0007 1 2 1 2 1

IP15...0008 1 3 2 1 1

It can be seen from the data shown in Figure 2 that some of the respondents have followed filter

instructions from routing questions incorrectly:

Respondents ‘IP15...0005’ and ‘IP15...0007’ have reported that their admission to hospital

was planned or from a waiting list (Q1=2), but have both responded to subsequent filtered

questions which are only applicable to emergency patients (respondent 5 has answered the

first filtered question (Q2) before skipping the remaining questions, whilst respondent 7 has

answered Q2, Q3, and Q4).

By following the cleaning instructions detailed above in Section 3.2, these inappropriate responses

will be removed. Firstly, the filter instructions listed in Figure 1 specify that:

if Q1 = 2 then delete responses to: Q2 – Q4

In accordance with this, all responses for Q2, Q3, and Q4 must be removed in cases where the

respondent has crossed Q1=2 (‘waiting list or planned in advance’). Looking in column Q1 of

Figure 2 we can see that three respondents, ‘IP15...0002’, ‘IP15...0005’ and ‘IP15...0007’, have

responded Q1=2, so any responses they gave to Q2, Q3 and Q4 need to be removed. This will

lead to one response being removed for respondent 5 (Q2) and three responses being removed for

respondent ‘7’ (Q2, Q3, and Q4).


Figure 3 (below) shows how the data would look following cleaning by the Co-ordination Centre to

remove responses to filtered questions that should have been skipped (shaded cells represent

cases where responses have been removed).

Figure 3 – Example cleaned data

Column key:
Record – Patient Record Number
Outcome – Outcome of sending questionnaire (N)
Q1 – Was your most recent hospital stay planned in advance or an emergency?
Q2 – When you arrived at the hospital, did you go to the A&E Department (the Emergency Department / Casualty / Medical or Surgical Admissions Unit)?
Q3 – While you were in the A&E Department, how much information about your condition or treatment was given to you?
Q4 – Were you given enough privacy when being examined or treated in the A&E Department?

Record Outcome Q1 Q2 Q3 Q4

IP15...0001 6

IP15...0002 1 2 . . .

IP15...0003 1 1 1 1 2

IP15...0004 4

IP15...0005 1 2 . . .

IP15...0006 6

IP15...0007 1 2 . . .

IP15...0008 1 3 2 1 1
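The cleaning rule applied above (‘if Q1 = 2 then delete responses to: Q2 – Q4’) can be sketched in a few lines of Python. This is an illustrative sketch only: the function name and the answer values shown for respondent ‘IP15...0007’ are invented, since Figure 2 (the uncleaned data) is not reproduced here.

```python
# Hypothetical sketch of the filter-cleaning step described above:
# respondents who ticked Q1 = 2 ('waiting list or planned in advance')
# should have any answers to the emergency-only questions Q2-Q4 removed.

FILTER_RULES = [
    # (filter question, triggering response, questions to blank out)
    ("Q1", 2, ["Q2", "Q3", "Q4"]),
]

def clean_record(record, rules=FILTER_RULES):
    """Return a copy of one response record with inappropriate
    filtered-question answers removed (None, shown as '.' in Figure 3)."""
    cleaned = dict(record)
    for filter_q, trigger, targets in rules:
        if cleaned.get(filter_q) == trigger:
            for q in targets:
                cleaned[q] = None
    return cleaned

# Respondent IP15...0007: answered Q2-Q4 despite Q1 = 2
# (the Q2-Q4 values below are invented for illustration).
raw = {"Record": "IP15...0007", "Q1": 2, "Q2": 1, "Q3": 2, "Q4": 1}
print(clean_record(raw))
```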


Appendix B: Non-specific responses

The following table lists all ‘non-specific responses’ included in the 2015 inpatient survey. Numbers in the final column indicate the response options that should be considered non-specific. Where the ‘non-specific responses’ column contains only a dash, the relevant question has no such response options.

CORE Question                                                                Non-specific responses
Q1   Was your most recent hospital stay planned in advance or an emergency?   -
Q2   When you arrived at the hospital, did you go to the A&E Department (the Emergency Department / Casualty / Medical or Surgical Admissions unit)?   -
Q3   While you were in the A&E Department, how much information about your condition or treatment was given to you?   5
Q4   Were you given enough privacy when being examined or treated in the A&E Department?   4
Q5   When you were referred to see a specialist, were you offered a choice of hospital for your first hospital appointment?   4
Q6   How do you feel about the length of time you were on the waiting list before your admission to hospital?   -
Q7   Was your admission date changed by the hospital?   -
Q8   In your opinion, had the specialist you saw in hospital been given all of the necessary information about your condition or illness from the person who referred you?   4
Q9   From the time you arrived at the hospital, did you feel that you had to wait a long time to get to a bed on a ward?   -
Q10  While in hospital, did you ever stay in a critical care area (Intensive Care Unit, High Dependency Unit or Coronary Care Unit)?   3
Q11  When you were first admitted to a bed on a ward, did you share a sleeping area, for example a room or bay, with patients of the opposite sex?   -
Q12  During your stay in hospital, how many wards did you stay in?   4
Q13  After you moved to another ward (or wards), did you ever share a sleeping area, for example a room or bay, with patients of the opposite sex?   -
Q14  While staying in the hospital, did you ever use the same bathroom or shower area as patients of the opposite sex?   4, 5
Q15  Were you ever bothered by noise at night from other patients?   -
Q16  Were you ever bothered by noise at night from hospital staff?   -
Q17  In your opinion, how clean was the hospital room or ward that you were in?   -
Q18  How clean were the toilets and bathrooms that you used in hospital?   5
Q19  Did you feel threatened during your stay in hospital by other patients or visitors?   -
Q20  Were hand-wash gels available for patients and visitors to use?   4
Q21  How would you rate the hospital food?   5
Q22  Were you offered a choice of food?   -
Q23  Did you get enough help from staff to eat your meals?   4
Q24  When you had important questions to ask a doctor, did you get answers that you could understand?   4


Q25  Did you have confidence and trust in the doctors treating you?   -
Q26  Did doctors talk in front of you as if you weren’t there?   -
Q27  When you had important questions to ask a nurse, did you get answers that you could understand?   4
Q28  Did you have confidence and trust in the nurses treating you?   -
Q29  Did nurses talk in front of you as if you weren’t there?   -
Q30  In your opinion, were there enough nurses on duty to care for you in hospital?   -
Q31  In your opinion, did the members of staff caring for you work well together?   4
Q32  Sometimes in a hospital, a member of staff will say one thing and another will say something quite different. Did this happen to you?   -
Q33  Were you involved as much as you wanted to be in decisions about your care and treatment?   -
Q34  Did you have confidence in the decisions made about your condition or treatment?   -
Q35  How much information about your condition or treatment was given to you?   -
Q36  Did you find someone on the hospital staff to talk to about your worries and fears?   4
Q37  Do you feel you got enough emotional support from hospital staff during your stay?   4
Q38  Were you given enough privacy when discussing your condition or treatment?   -
Q39  Were you given enough privacy when being examined or treated?   -
Q40  Were you ever in any pain?   -
Q41  Do you think the hospital staff did everything they could to help control your pain?   -
Q42  How many minutes after you used the call button did it usually take before you got the help you needed?   6
Q43  During your stay in hospital, did you have an operation or procedure?   -
Q44  Beforehand, did a member of staff explain the risks and benefits of the operation or procedure in a way you could understand?   4
Q45  Beforehand, did a member of staff explain what would be done during the operation or procedure?   4
Q46  Beforehand, did a member of staff answer your questions about the operation or procedure in a way you could understand?   4
Q47  Beforehand, were you told how you could expect to feel after you had the operation or procedure?   -
Q48  Before the operation or procedure, were you given an anaesthetic or medication to put you to sleep or control your pain?   -
Q49  Before the operation or procedure, did the anaesthetist or another member of staff explain how he or she would put you to sleep or control your pain in a way you could understand?   -
Q50  After the operation or procedure, did a member of staff explain how the operation or procedure had gone in a way you could understand?   -
Q51  Did you feel you were involved in decisions about your discharge from hospital?   4
Q52  Were you given enough notice about when you were going to be discharged?   -


Q53  On the day you left hospital, was your discharge delayed for any reason?   -
Q54  What was the MAIN reason for the delay?   4
Q55  How long was the delay?   -
Q56  Where did you go after leaving hospital?   -
Q57  After leaving hospital, did you get enough support from health or social care professionals to help you recover and manage your condition?   4
Q58  When you transferred to another hospital or went to a nursing or residential home, was there a plan in place for continuing your care?   4
Q59  Before you left hospital, were you given any written or printed information about what you should or should not do after leaving hospital?   -
Q60  Did a member of staff explain the purpose of the medicines you were to take at home in a way you could understand?   4, 5
Q61  Did a member of staff tell you about medication side effects to watch for when you went home?   4
Q62  Were you told how to take your medication in a way you could understand?   4
Q63  Were you given clear written or printed information about your medicines?   4, 5
Q64  Did a member of staff tell you about any danger signals you should watch for after you went home?   4
Q65  Did hospital staff take your family or home situation into account when planning your discharge?   4, 5
Q66  Did the doctors or nurses give your family or someone close to you all the information they needed to help care for you?   4, 5
Q67  Did hospital staff tell you who to contact if you were worried about your condition or treatment after you left hospital?   3
Q68  Did hospital staff discuss with you whether you would need any additional equipment in your home, or any adaptations made to your home, after leaving hospital?   3
Q69  Did hospital staff discuss with you whether you may need any further health or social care services after leaving hospital? (e.g. services from a GP, physiotherapist or community nurse, or assistance from social services or the voluntary sector)   3
Q70  Overall, did you feel you were treated with respect and dignity while you were in the hospital?   -
Q71  During your time in hospital did you feel well looked after by hospital staff?   -
Q72  Overall...   98
Q73  During your hospital stay, were you ever asked to give your views on the quality of your care?   3
Q74  Did you see, or were you given, any information explaining how to complain to the hospital about the care you received?   3
Q75  Who was the main person or people that filled in this questionnaire?   -
Q76_1  I have a long-standing condition involving deafness or severe hearing impairment   -
Q76_2  I have a long-standing condition involving blindness or partial sight   -
Q76_3  I have a long-standing physical condition   -
Q76_4  I have a long-standing condition involving a learning disability   -
Q76_5  I have a long-standing mental health condition   -


Q76_6  I have a long-standing condition involving an illness such as cancer, HIV, diabetes, CHD, or epilepsy   -
Q76_7  I do not have a long-standing condition   -
Q77_1  This condition causes me difficulty with everyday activities that people of my age can usually do   -
Q77_2  This condition causes me difficulty at work, in education, or training   -
Q77_3  This condition causes me difficulty with access to buildings, streets, or vehicles   -
Q77_4  This condition causes me difficulty with reading or writing   -
Q77_5  This condition causes me difficulty with people's attitudes to me because of my condition   -
Q77_6  This condition causes me difficulty with communicating, mixing with others, or socialising   -
Q77_7  This condition causes me difficulty with other activities   -
Q77_8  This condition does not cause me difficulty with any of these   -
Q78  Are you male or female?   -
Q79  What was your year of birth?   -
Q80  What is your religion?   -
Q81  Which of the following best describes how you think of yourself?   -
Q82  What is your ethnic group?   -
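As an illustrative sketch only (all names invented, and only a handful of the table entries shown), the lookup above can be applied mechanically by recoding each listed non-specific response option to missing so it is excluded before scoring:

```python
# Illustrative recode of 'non-specific responses' to missing, using a few
# entries from the table above (question -> set of non-specific codes).
NON_SPECIFIC = {
    "Q3": {5},
    "Q4": {4},
    "Q14": {4, 5},
    "Q42": {6},
    "Q72": {98},
}

def drop_non_specific(record, lookup=NON_SPECIFIC):
    """Set any non-specific response code to None so it can be excluded
    from scoring, leaving all other answers unchanged."""
    return {q: (None if v in lookup.get(q, set()) else v)
            for q, v in record.items()}

record = {"Q3": 5, "Q4": 2, "Q72": 98}
print(drop_non_specific(record))  # {'Q3': None, 'Q4': 2, 'Q72': None}
```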


NHS National Patient Survey Programme: data weighting issues

1. Introduction
The following key outputs are produced for most of the surveys carried out on the NHS National Patient Survey Programme each year:

- A Statistical release report that summarises the key findings at national level.
- Trust level tables presenting the percentage of responses for all questions on the survey, plus national response totals for England.
- Benchmark reports that compare the results of each NHS trust with the results for other trusts.
- Performance indicators for use in the annual NHS performance rating.

Weighted data have been used to produce the key findings report and the national totals displayed in the trust level tables since 2003/4. The benchmark reports and performance indicators have always been derived from weighted data. This document describes the approach taken to weighting the data presented in the Statistical release report/Open data and the national totals displayed in the trust level tables for the surveys listed below:

- Acute trust inpatient survey
- Acute trust outpatient surveys
- Acute trust emergency department surveys
- Acute trust children and young people survey
- Primary Care Trust (PCT) patient surveys
- Ambulance trust survey
- Mental health trust service user surveys

The weighting method used to derive performance indicators is described in a separate document specific to each survey. The documents describing the derivation of performance indicators have been included in the survey documentation deposited with the UK Data Archive.

2. Samples
In each of these surveys, the vast majority of trusts sampled 850 patients.¹ Different sampling methods were chosen for different surveys because of the particular constraints of the sampling frame to be used in each case; the sampling methods used are summarised in Table 1.

¹ In a few exceptional cases trusts were unable to sample 850 recent patients because of their low throughput of patients. Where this occurred, trusts were requested to contact the NHS Surveys Advice Centre and smaller sample sizes were agreed.

Page 53: Care Quality Commission (CQC) Technical details patient ...doc.ukdataservice.ac.uk/...ip15_technical_document.pdf · This document outlines the methods used by the Care Quality Commission

Table 1  Summary of sampling methods

Survey                      Sampling method
Inpatients                  1250 (850 in all years before 2015) consecutively discharged patients aged 16+
Outpatients                 Systematic sample* of outpatient attendances during a reference month by those aged 16+
Emergency Department        Systematic sample* of emergency department attendances during a reference month by those aged 16+
Children and young people   850 consecutively discharged patients: overnight and day cases of those aged 8-15
PCT                         Systematic sample* of GP registered patients aged 16+
Ambulance trusts            Multi-stage sample involving systematic and simple random sampling of patients aged 16+ attended during a reference week
Mental health trusts        Simple random sample of service users aged 16-64 on CPA who were seen during a three-month reference period
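The asterisked ‘systematic sample’ used in several of these surveys (sort the frame on a critical dimension such as age, then select units at fixed intervals from a random starting point) can be sketched as follows. This is an illustrative sketch only: the patient frame, field names and sample size are invented.

```python
import random

def systematic_sample(frame, n, sort_key, seed=None):
    """Sort the sampling frame on a critical dimension, then select n units
    at a fixed interval starting from a random point."""
    rng = random.Random(seed)
    ordered = sorted(frame, key=sort_key)
    interval = len(ordered) / n          # fixed sampling interval
    start = rng.uniform(0, interval)     # random starting point
    return [ordered[int(start + i * interval)] for i in range(n)]

# Hypothetical frame of patients, sorted by age before selection.
frame = [{"id": i, "age": 16 + (i * 7) % 70} for i in range(1000)]
sample = systematic_sample(frame, n=50, sort_key=lambda p: p["age"], seed=1)
print(len(sample))  # 50
```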

Further details of survey populations and sampling methods can be found in the guidance notes for individual NHS patient surveys at www.nhssurveys.org.

It is worth noting that the sampling method used determines the population about which generalisations can be made. Different approaches were taken in the different surveys, meaning that results generalise to correspondingly different types of population. For the surveys of inpatients and young inpatients, the survey populations comprised flows of patients attending over particular time periods (ie the population is one of people attending), whereas for the outpatient, mental health service user, ambulance trust and Emergency Department surveys the survey populations comprised attendances over particular time periods. The PCT survey population comprised the stock of all GP registered patients. Below we point out some of the implications of these differences.

Patients v. attendances: the difference between attendances and patients as used here may be understood by comparing two hypothetical equal-sized groups of patients: group 1 patients attended once during the reference period and group 2 patients attended twice. In such a situation, a sample based on patients will represent the two groups equally, whereas a sample based on attendances will deliver twice as many from group 2 as from group 1.² In other words, frequently attending patients will have a greater impact on results where samples are based on attendances than where they are based on unique patients.

Stock v. flow: for a stock sample, attendance frequency will have no bearing on the results. For a flow sample, the make-up of the survey population will depend upon the length of the reference period used, such that relatively infrequent attendees will make up larger proportions of the sample (and hence survey population) with longer reference periods. In other words, if a survey uses a flow sample with a short reference period, its results will be less influenced by the experiences of infrequent attendees than they would have been had a longer reference period been used.³

* This involves sorting the sample frame based on some critical dimension(s) – eg age – and selecting units at fixed intervals from each other, starting from a random point. For more detailed information, see the survey guidance documents for individual surveys.
² This is a slight simplification as it assumes a with-replacement sampling method. This does not, however, affect the essential point.
³ It is worth noting that, conceptually, a stock sample can be regarded as a flow sample with an infinite reference period, so long as all registered patients have a non-zero probability of attending.

3. Weighting the results

Weighting to trust and patient populations
In the Statistical release report/Open data and the national totals displayed in the trust level tables of surveys in the NHS National Patient Survey Programme, patient data were weighted to ensure that results related to the national population of trusts. The aim of this was to give all trusts exactly the same degree of influence when calculating means, proportions and other survey estimates. National estimates produced after weighting in this way can usefully be regarded as estimates for the average trust: this was felt to be the most appropriate way to present results at a national level. However, it is worth noting that an alternative approach could have been taken, namely to weight to the national population of patients. This would be the appropriate approach to take if the primary interest had been to analyse characteristics of patients rather than characteristics of trusts.

Weighting to the population of trusts ensures that each trust has the same influence as every other trust over the value of national estimates. If unweighted data were used to produce national estimates, then trusts with higher response rates to the survey would have a greater degree of influence than those which received fewer responses. Had we weighted to the national population of patients, a trust’s influence on the value of a national estimate would have been in proportion to the size of its eligible patient population.⁴

⁴ For example, for the ambulance survey this would be the number of attendances of eligible patients aged 16+ during the reference week.

4. Illustrative example
To illustrate the difference between the two approaches, we have devised a simple fictitious example concerning the prevalence of smoking in three universities, A, B and C, situated in a single region. This is shown in Table 2.

Table 2  Students and smoking

University       No. students   Proportion smoking
A                10,000         0.2
B                 8,000         0.3
C                 1,000         0.6
Regional total   19,000
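The two estimates derived from Table 2 can be reproduced in a few lines; a minimal sketch of the arithmetic (variable names are illustrative):

```python
# Table 2: (number of students, proportion smoking) for universities A, B, C.
universities = [(10_000, 0.2), (8_000, 0.3), (1_000, 0.6)]

# Approach 1: prevalence in the 'average university' - simple mean of proportions.
avg_university = sum(p for _, p in universities) / len(universities)

# Approach 2: regional prevalence - student-population-weighted mean.
regional = sum(n * p for n, p in universities) / sum(n for n, _ in universities)

print(round(avg_university, 4))  # 0.3667
print(round(regional, 4))        # 0.2632
```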


If we were interested in knowing the smoking prevalence of the average university, we would take the simple mean of the three proportions:

1... prevalence in average university = (0.2 + 0.3 + 0.6)/3 = 0.3667

If, on the other hand, we were interested in knowing what proportion of students smoked in the region, we would multiply each university’s proportion of smokers by its student population to give an estimate of total smokers in each university, sum these totals across universities, and divide by the regional student total:

2... regional prevalence = ((0.2*10000) + (0.3*8000) + (0.6*1000))/19000 = 0.2632

5. Weighting for national level patient survey estimates
As stated above, for estimates from the NHS National Patient Survey Programme, we were interested in taking the equivalent of approach 1 rather than approach 2. This could have been done in one of two ways:

a. analyse a dataset of trusts and apply no weight – this would entail calculating estimates for each trust and then taking means of these estimates.

b. analyse a dataset of patients after weighting each case – weights must be calculated to ensure that each trust has the same (weighted) number of responses for each item.

These two approaches produce identical estimates, but the latter method is the one used on the 2004/5 national patient surveys (the former approach was used on the 2003/04 surveys). In order to use weights to eliminate the influence of variable response rates, it is necessary to base them on the inverse of the number of responses for each trust, such that the weight for each trust is equal to k/n_iq, where:

k is a constant
n_iq is the number of responses to question q within trust i

Although k may take any value, in practice it is set to the mean number of respondents answering the relevant question across all trusts, because this equalises the weighted and unweighted sample sizes for the national level results. Thus, the formula used to calculate weights can be expressed as:

w_iq = (mean value of n_iq across trusts) / n_iq

Example of weighting to the trust population
By way of example, in Table 3 we have three trusts, X, Y and Z, in a particular area: in each trust a different number of patients responded, and in each a different estimate of the proportion of patients who didn’t like the food they were given was obtained.


Note first that if these data were held in a trust level dataset (ie with one record per trust) we would have calculated the simple unweighted trust-based mean as:

trust mean = (0.2 + 0.23 + 0.3) / 3 = 0.2433

Table 3  Weighting to trust population

Trust   (1) No. responders to food   (2) Proportion of respondents   (3) Weight   (1)*(2)*(3)   (1)*(3)
        question in trust (n_iq)     disliking the food
X       600                          0.2                             0.7778       93.3333       466.6667
Y       500                          0.23                            0.9333       107.3333      466.6667
Z       300                          0.3                             1.5556       140           466.6667
All     1400
Mean                                                                                            466.6667

However, in practice we often apply a weight in a patient level dataset instead. In Table 3 above, we have calculated the weight as:

trust weight = (mean value of n_iq) / n_iq

For example, the weight for trust X is calculated as 466.6667/600 = 0.7778. By applying these weights (eg by using the SPSS “weight by” command) when running tables showing the proportion of patients disliking the food, we obtain the simple trust-based means. The way this works when calculating the proportion can be seen below:

numerator for proportion = (600 * 0.2 * 0.7778) + (500 * 0.23 * 0.9333) + (300 * 0.3 * 1.5556) = 340.6667
denominator for proportion = (600 * 0.7778) + (500 * 0.9333) + (300 * 1.5556) = 1400
Estimate = 340.6667 / 1400 = 0.2433

As can be seen, this is the same as the simple mean calculated using a trust-level dataset shown above. If we did not weight, our estimate would be 325 / 1400 = 0.2321. In other words, the overall estimate would be dragged towards the estimates for those trusts with larger numbers of respondents.


Dealing with missing data and filtered questions
The weighting method outlined above involves the calculation of weights for each combination of trust and question. An alternative might have been simply to calculate a single weight per trust, where:

trust weight = (mean value of n_icases) / n_icases

(where n_icases is the total number of responding cases in trust i). This would be a simpler approach to implement, as it would involve substantially fewer calculations and different weights would not have to be applied for each question. In spite of this, it was considered inappropriate to use this simpler method because the number of responses varies between questions.

Numbers of responses for different questions vary because not every respondent will answer every question. The largest source of variance is filtering – the surveys frequently include ‘filter’ questions that direct patients to answer only the parts of the questionnaire which are relevant to them. For example, a patient may be prompted to skip questions on medicines if they have not used any in the past year. Patients may also fail to answer a particular question in error, because they refused, or because they were unsure how to answer. Similarly, responses may be missing because a patient has given multiple responses to a question. For these reasons we often find that, in practice, the number of respondents answering a particular question in trust i (n_iq) is less than n_icases.

If the proportion of respondents answering a particular question varies across trusts, then applying the trust weight as defined in the last paragraph will not give each trust exactly the same level of influence on the survey estimate. Generally, this variation should be trivial for well constructed and well laid out unfiltered questions, because the great majority of respondents will answer them in all trusts. However, the variation may in some cases become too great to ignore, particularly where questions are filtered.

This is a particular issue where the numbers of people within a trust responding in certain ways to a ‘filter’ question are likely to be related to the type of trust – for instance, some specialist acute hospitals might have a very high proportion of patients responding to questions about elective admissions, but few or none responding to questions about emergency admission. Clearly, in such cases, using a single set of weights for all questions would be insufficient.

For other applications users may be content to calculate a weight based upon n_icases. If there is no substantial variation across trusts in the proportion of respondents answering the questions of interest, this approach will deliver very similar results to those obtained using n_iq. Likewise, if the number of people being filtered past or skipping questions is of interest, it is possible to include these outcomes as ‘dummy’ responses for each question and thereby analyse data from different questions whilst retaining a constant base, thus ensuring all trusts have an equal degree of input.

What weight should be used?
Weighting to the trust population provides the most appropriate national estimates for trust comparisons. It is, however, not the most appropriate approach for many other purposes. If the main area of interest relates to patients rather than trusts, it will be necessary to weight data to the national population of patients. This will require the calculation of new weights. Examples of what we mean by areas of interest are shown below:


Patients:
- What proportion of patients nationally felt that the toilets and bathrooms were not very or not at all clean?
- Were males or females more likely to say that toilets and bathrooms were not very or not at all clean?

Trusts:
- What proportion of patients in the average trust felt that the toilets and bathrooms were not very or not at all clean?
- Were small acute trusts more or less likely than medium / large acute trusts to have patients who said that toilets and bathrooms were not very or not at all clean?

Calculating patient population weights
Although patient population weights have not been calculated, users may well need them for some of their analyses. These should be calculated as:

patient population weight = (k * N_i) / n_icases

where:
n_icases is the number of respondents in trust i,⁵
N_i is the number of eligible patients in the survey population in trust i,
k is a constant, usually set so as to equalise the overall weighted and unweighted sample sizes.

Probably the main difficulty in calculating this weight will be obtaining a reliable figure for N_i, the population to which each trust’s results are to be generalised. Ideally this should be the size of the population from which the sample was actually selected. For example, for ambulance trusts, N_i would ideally be the total number of attendances during the exact reference week (ie the number of cases from which the sample of 850 was actually drawn). However, we acknowledge that this information is unlikely to be available, and it will therefore be necessary to substitute an estimate instead. In doing this it should be borne in mind that the definition of the population from which the estimate of N_i is derived should be as close as possible to the definition of the population from which the sample was actually selected. For example, the trust population figures used to calculate N_i for the PCT surveys should relate to the stock of patients and not the flow of patients or attendances; a flow sample should, ideally, be weighted to a population using the same reference period (eg the Emergency Department data should be weighted to monthly throughput). Furthermore, the population figures used for weighting should, of course, relate to the same year (at least!) as that in which the survey was conducted.

Of course, if there is a dearth of available population information, non-ideal population data have to be used. If this is the case, it is worthwhile spelling out the additional assumptions that will, by implication, have to be made. For example, if inpatient data are weighted to inpatient attendance figures instead of patient flows, an implicit assumption is being made that the proportion of patients making n attendances over the reference period is constant across trusts.⁶

Use of unweighted data
If a user decides simply to analyse unweighted data, the implications of doing so need to be understood. Given the sampling methods used, an unweighted sample would deliver approximately equal numbers of responses from each trust if response rates did not vary widely between trusts. In effect, the sample would be approximately equivalent to one weighted by:

trust weight = (mean value of n_icases) / n_icases

As such, it could be regarded as crudely representing the population of trusts (crudely, because in practice response rates did vary, and as a result trusts with good response rates would have greater influence on the results than would trusts with poor response rates). It would, however, be wholly inappropriate for analyses of patients, because unweighted data will substantially under-represent patients in trusts with large numbers of patients, and substantially over-represent patients in trusts with small numbers of patients. To the extent that large and small trusts differ systematically from one another on survey variables, the use of unweighted data will introduce systematic bias into the results.

⁵ In principle it would be possible to use n_iq in this formula for unfiltered questions (it could not be done for filtered questions because this would require us to substitute the number in the population eligible for the filter question – an unknown value – for N_i). To our knowledge, in practice this approach is never taken.
⁶ An added (but, in practice, trivial) complication is that for the inpatient and young patient surveys there is no “perfect” definition for a population data reference period. This is because the sampling method itself used a variable reference period: trusts with large patient throughputs used shorter reference periods than trusts with smaller throughputs.
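The patient population weight defined above, (k * N_i) / n_icases with k chosen to equalise the overall weighted and unweighted sample sizes, can be sketched as follows. The trust figures and names here are invented; only the formula comes from the text.

```python
# Hypothetical trusts: (respondents n_icases, eligible population N_i).
trusts = {"A": (400, 20_000), "B": (450, 5_000), "C": (350, 60_000)}

total_resp = sum(n for n, _ in trusts.values())
total_pop = sum(N for _, N in trusts.values())

# k chosen so that the overall weighted and unweighted sample sizes agree.
k = total_resp / total_pop

# patient population weight = (k * N_i) / n_icases
weights = {t: k * N / n for t, (n, N) in trusts.items()}

# Check: the weighted respondent total equals the unweighted total.
weighted_total = sum(n * weights[t] for t, (n, _) in trusts.items())
print(round(weighted_total, 6) == total_resp)  # True
```

Note how trust C, with the largest eligible population but the fewest respondents, receives the largest weight: its patients would otherwise be under-represented in patient-level analyses.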


I:\dcmagd\8062\noissue\original\mrdoc\word\20160628_ip15_non_survey_variable_definitions.doc Created on 6/28/2016 3:33:00 PM 1

Non-survey variable definitions: 2015 Inpatients Survey Data

Variable name   Description                                                                Values
Trustcode       Department of Health trust code (please see 20160602 IP15 Trust participating in IP15 for the name of trusts)   -
LOS             Length of stay (in days)                                                   -
Age4group       Age grouped: based on respondent-provided information or, where missing, taken from the sample file   16-35=1; 36-50=2; 51-65=3; 66+=4
Sex-full        Based on respondent-provided information or, where missing, taken from the sample file   male=1; female=2
PSAweight       Age / Gender / Route of Admission standardisation                          -
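The Age4group banding above can be expressed as a simple recode; an illustrative sketch (the function name is invented, the bands are taken from the table):

```python
def age4group(age):
    """Recode age into the four bands used in the IP15 data:
    16-35 -> 1, 36-50 -> 2, 51-65 -> 3, 66+ -> 4."""
    if age <= 35:
        return 1
    if age <= 50:
        return 2
    if age <= 65:
        return 3
    return 4

print([age4group(a) for a in (16, 40, 65, 80)])  # [1, 2, 3, 4]
```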