
Anna Schmidt, RN, Carol Deets, RN

Responsibility for audit criteria

The first three articles in this series on nursing audit criteria have shown how to develop measurable process and outcome criteria based on American Nurses' Association (ANA) and AORN standards of nursing practice.1 Information presented in those articles should enable nurses to begin developing criteria to evaluate nursing care. In this final article, we examine some issues that influence professional audits, in particular nursing audits.

Is it important to determine which professional group is primarily responsible for the achievement of each criterion?

The answer is, of course, yes. Each profession is legally and ethically accountable for its actions, for the results of those actions, and, therefore, for the corresponding criteria. If a structure, process, or outcome criterion is not successfully achieved, then the reason must be identified so that the problem can be corrected. The professional group whose members should have achieved the criterion or assisted a patient to achieve a criterion should be the group to identify the problem and take corrective action.

Anna Schmidt, RN, PhD, is an assistant professor at Indiana University in Indianapolis. She received her BSN, MSN, and PhD in nursing from the University of Texas School of Nursing at Austin.

Carol Deets, RN, EdD, was associate professor and director of the Center for Health Care Research and Evaluation at the University of Texas at Austin when this article was written. A diploma graduate of Presbyterian Hospital School of Nursing, Charlotte, NC, she received her BS from Queens College in Charlotte and her MS and EdD from Indiana University in Bloomington. Currently, she is an associate professor at Indiana University School of Nursing in Indianapolis.

How do you determine which profession is primarily responsible for the achievement of a criterion?

Some criteria may be the responsibility of one profession, whereas other criteria may represent a shared responsibility of two or more professions. Since the problem of shared responsibility seems to occur primarily with outcome criteria, it is best to consider structure, process, and outcome criteria separately.

It is usually easy to determine which profession is primarily responsible for the achievement of a structure criterion. Agency or institutional policies and state practice acts provide guidelines for defining each profession's area of responsibility. In developing structure criteria, each profession needs to develop realistic criteria that are within the existing agency policies. For example, on the basis of an agency's policy to provide personnel for a specialty unit, the following structure criterion could be generated: "There are two or more nurse clinicians for each surgical suite." Sometimes several criteria can be generated from one policy, whereas some policies may serve as structure criteria without restatement. For instance, when the hospital has a policy that all patients must have a chest x-ray upon admission, this policy, without restatement, could be a structure criterion outlining expectations of the medical staff.

Process criteria, as statements of practice, will be achieved by the group whose practice they describe. If nurses feel that a process criterion is their responsibility, they will include that process criterion in their set of criteria. The same would be true for the medical staff. For example, nurses do not prescribe medications. Therefore, they would not include a criterion related to prescription of medications in the process criteria they use for evaluation. Medical criteria would cover this function. If the dietician is responsible for teaching the patient about his diet, then the dietary staff would develop a criterion of that nature. The nurses, knowing that dietary staff teach patients about diet, may not duplicate that criterion. However, the nurses could develop a process criterion such as "The nurse notifies the dietician when the patient is ready for dietary instruction."

Nurses, physicians, and other health care personnel should be aware of each other's activities and criteria in order to provide the most efficient patient care. As in the above example, nurses should know that the dietician teaches the patient about diet so that they will not duplicate the effort but will arrange for the dietician to meet with the patient.

Since each profession develops its own criteria, each has the opportunity to develop criteria that measure the activities they perform. For example, if nurses believe that turning the patient every two hours is an activity for which they are responsible and accountable, then they will probably include an appropriate criterion in their audit tool. Similarly, a process criterion that the physicians feel is their responsibility will probably be used in the medical audit. The different health team members should be knowledgeable of each other's process criteria, paying particular attention to any criteria duplicated among professions. Any duplicated criteria should be discussed by the professionals involved to determine whether the criterion represents an activity that is performed routinely by two or more professions.

An agency's policies also directly influence process criteria. What is the responsibility of the nurse in one agency may be someone else's responsibility in another agency. For instance, in some institutions the nurse is responsible for admitting patients, whereas in other institutions the clerical staff has this responsibility. The defining characteristics of what a nurse does change slightly from institution to institution; nevertheless, the major components that make up the process of nursing are easily recognized.

It is often more difficult to determine which profession is primarily responsible for the patient's achievement of an outcome. Few professions have validated the results of the process(es) they use. It is also difficult to determine primary responsibility for achieving outcome criteria because, when outcome criteria are used, only the end product is observed. There is seldom any reference to the means by which the end product was achieved. It is possible that several professional groups could advance persuasive arguments as to why they are primarily responsible for the patient's achievement of an outcome criterion. For example, the physician orders the drugs that counteract the bacteria; the inhalation therapist administers treatments that improve the patient's ability to expectorate; and the nurse gives the drugs and forces fluids, which helps to liquefy secretions. In this case, which profession has primary responsibility for helping the patient meet the criterion "Lungs free of congestion"?

The direct actions that are or could be taken by a professional group may be the best means for determining responsibility. For example, consider the following patient population and the corresponding sample outcome criterion:

Diagnosis: bronchopneumonia
male, age 25 to 45
no history of kidney disease

Outcome criterion: The patient voids at least 720 cc per 24 hours.

There are many ways in which the nurse can influence the achievement of this outcome. Unless contraindicated, the nurse may force fluids. If the patient is bedridden, the nurse will probably offer the patient the bedpan or urinal at frequent intervals. She may even run water from the tap to "stimulate" the patient to void. On the other hand, the physician may prescribe a diuretic. In this instance, even though the nurse administers the medication, there is no doubt that the physician is primarily responsible for whether the outcome is met since he or she prescribed the medication. In addition, the patient himself can greatly influence whether the outcome is achieved. He may refuse to drink liquids or even refuse to attempt to void. In this example, assuming that a diuretic is not ordered, the nurse has primary responsibility for achieving the desired outcome. After all, it is the nurse who has the interpersonal skills to teach the patient the importance of forcing fluids or attempting to void.

How many criteria should one use to evaluate nursing care?

The best answer is enough to measure nursing care reliably and validly. The number needed depends on the patient population and the quality of the criteria used to evaluate the care. For some patient populations, four or five screening criteria may be sufficient. For others, 20 or 30 criteria may be needed to determine whether quality nursing care was given. ANA has defined a "screening criterion" as one that, if not met, may indicate a severe deficiency in the nursing care that was provided.2 Hegyvary, Gortner, and Haussmann stated that a criterion is of little value in measuring quality of care if it is routinely achieved by all patients.3 Neither is it of any value if it is never met by any patients. Such a criterion does not successfully discriminate between those patients who have received good care and those who have not. A criterion that functions as a screening criterion can provide evidence of good nursing care. Therefore, four or five valid, reliable screening criteria may be sufficient for accurately determining when good nursing care has been provided.

To answer the question of how many criteria are needed, the quality of the criteria must first be determined. If your criteria have been demonstrated to be reliable and valid, you will need only a few.

How are reliability and validity determined?

Reliability and validity are important considerations whenever anyone is using a measurement tool. The checklist of criteria that is used when an audit is conducted is, in reality, a tool. Knowing the reliability of this tool and the validity of the criteria that make up the tool will allow you to answer the question, "Are these quality criteria?"

Basically, the concept of reliability has to do with consistency. Does the use of the tool result in consistent data? Do several people using the same tool, or one person using the tool several times, consistently obtain the same measures or ratings of the same stimuli (in this case, the same audit results from the same set of charts)? While these questions differ slightly, the idea of consistency is basic to both.

When one is concerned with several individuals' measures of the same stimuli or chart, the concept is known as "rater reliability." Rater reliability, the degree to which several people obtain the same results, is an important consideration. If rater reliability is not high, then one must be concerned about the accuracy of the audit results. In other words, which auditor is most accurate?

Any time more than one person is involved in data collection, an estimation of rater reliability should be obtained. The following steps outline a simple procedure that may be followed to determine rater reliability using one's audit criteria for a specific population.

1. Create a sample chart representative of a real chart of a patient who fits the population characteristics. If one is conducting a nursing audit using this procedure, then one need not create a complete sample chart. Only the nurse's notes are necessary. The physician's orders, progress notes, and lab reports are not necessary for this procedure.

2. Throughout the chart, place nursing notes that clearly state that the outcome has been met, clearly state that the outcome has not been met, or do not give a clear indication either way.

3. Have several people “audit” this chart using the criteria that have been created.

The data collected from this sample audit can be analyzed statistically using Hoyt's analysis of variance4 or the coefficient of agreement.5 The more people who take part, the more accurate the estimate of reliability will be.
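To make the idea of rater agreement concrete, the sketch below computes a simple average pairwise percent agreement among auditors. It is an illustration only, not Hoyt's analysis of variance or Kendall's coefficient of agreement cited above, and the rater names and met/not met scores are hypothetical.

```python
# Illustration only (not Hoyt's ANOVA or Kendall's coefficient of agreement):
# average pairwise percent agreement among auditors who scored the same sample
# chart against the same audit criteria. All data below are hypothetical.
from itertools import combinations

# Each rater records 1 if the criterion was judged met, 0 if not met.
ratings = {
    "rater_A": [1, 1, 0, 1, 0],
    "rater_B": [1, 1, 0, 1, 1],
    "rater_C": [1, 0, 0, 1, 0],
}

def average_pairwise_agreement(scores):
    """Mean proportion of criteria on which each pair of raters agrees."""
    pairs = list(combinations(scores.values(), 2))
    per_pair = [
        sum(1 for x, y in zip(a, b) if x == y) / len(a)
        for a, b in pairs
    ]
    return sum(per_pair) / len(pairs)

print(f"Average pairwise agreement: {average_pairwise_agreement(ratings):.2f}")
```

A value near 1.0 suggests the auditors are applying the criteria consistently; a low value signals that the criteria, or the instructions for applying them, need revision before audit results can be trusted.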

As stated above, criteria must be valid as well as reliable. Validity is the degree to which the tool, in this case the audit instrument, measures what it is supposed to measure. There are two ways to demonstrate validity for audit instruments: by content and by criterion reference.

Content validity is judgment by a group of experts, and it is relatively easy to determine. Once the criteria for a population are written in measurable terms, they are submitted to a group of nurses considered to be experts in that area. Since the question being asked is "Do these criteria measure quality nursing care for this population?" it is essential that the nurses be highly qualified and routinely care for patients in that particular population. The experts are asked to indicate whether, in their judgment, each criterion measures quality nursing care for the specified population. Responses can be a simple yes or no or scores on a continuum ranging from strongly agree to strongly disagree. One then calculates the degree of agreement for each criterion. The final task is to determine what level of agreement is needed before a criterion is judged to measure quality nursing care. A score of 80% or 90% for each criterion is usually considered an acceptable level of agreement.
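As a concrete illustration of this agreement calculation, the sketch below scores hypothetical yes/no responses from five experts against an 80% cutoff. The criteria, responses, and cutoff shown here are examples only, not data from an actual audit.

```python
# Illustration only: computing the level of expert agreement for each proposed
# criterion. Each expert answers 1 (yes) or 0 (no) to the question "Does this
# criterion measure quality nursing care for this population?"
# All criteria and responses below are hypothetical.
expert_responses = {
    "Patient voids at least 720 cc per 24 hours": [1, 1, 1, 1, 0],
    "Lungs free of congestion": [1, 1, 1, 1, 1],
    "Nurse notifies dietician when patient is ready for instruction": [1, 0, 1, 0, 1],
}

ACCEPTABLE_AGREEMENT = 0.80  # the 80% level mentioned in the text

for criterion, votes in expert_responses.items():
    agreement = sum(votes) / len(votes)
    decision = "accept" if agreement >= ACCEPTABLE_AGREEMENT else "revise or drop"
    print(f"{criterion}: {agreement:.0%} agreement -> {decision}")
```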

Criterion-referenced validity is more difficult to obtain but is a much better indicator. In this kind of judgment, external criteria are used to measure success. The important factor is to identify an appropriate indicator for the concept being measured, in this case quality nursing care. This indicator is different from the audit criteria. Examples of such indicators are length of stay in the hospital, experts' ratings of care based on the chart's documentation, ratings of care by the patient or physician, return to the hospital as a result of complications, and quality-of-care ratings based on process or structure criteria. As can be seen by this list, one indicator seldom produces a direct, one-to-one relationship between the indicator and concept. One may wish to use two or three indicators in an effort to measure the concept accurately. For instance, to measure the concept of quality care, indicators such as length of stay and patient's ratings of care may need to be combined to be sure that quality care was given.

While there are several statistical means to arrive at an estimate of criterion-referenced validity, the easiest method for validating an audit instrument is to use an expectancy table. First, select an indicator of care such as those listed above. Next, determine the number of criteria that were met and a measure of the indicator for each chart. Then place this information in an expectancy table. To do this, make a scale for the percentage of criteria met on the left margin of the table, with the scale for your indicator across the top of the table (see Table 1). After the data are filled in, see if there is an observable trend. Often, the dividing points on the percentage of criteria scale are arbitrarily determined. For instance, if a chart has a large number of criteria that were met, one would expect a high rating of care by the patient (see Table 1). Charts achieving a 100% rating at audit should have been the charts of patients who rated their care as "very good" or "good." As the number of criteria met decreased, the patient's rating of his care should also decrease.

Table 1. Comparison of audit scores and client's rating of care

% of criteria met    Very good    Good    OK    Not very good
100                      20         10      0        0
90 to 99                 18         12      2        0
80 to 89                 14         10     10        0
70 to 79                  8         10     14        2
60 to 69                  0          8     20        4
59 or less                0          2     20       10
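For illustration only, an expectancy table of this kind can be produced by cross-tabulating each chart's percentage of criteria met against the patient's rating of care, as in the sketch below. The per-chart percentages and ratings are hypothetical and are not the data behind Table 1.

```python
# Illustration only: cross-tabulating the percentage of audit criteria met per
# chart against the patient's rating of care to form an expectancy table.
# All per-chart data below are hypothetical.
from collections import Counter

charts = [
    # (percent of criteria met, patient's rating of care)
    (100, "Very good"), (95, "Very good"), (92, "Good"),
    (85, "Good"), (82, "OK"), (74, "OK"),
    (68, "OK"), (61, "Not very good"), (55, "Not very good"),
]

BANDS = [(100, 100), (90, 99), (80, 89), (70, 79), (60, 69), (0, 59)]
RATINGS = ["Very good", "Good", "OK", "Not very good"]

def band_label(low, high):
    return "100" if low == high else f"{low} to {high}"

def band_for(percent):
    for low, high in BANDS:
        if low <= percent <= high:
            return band_label(low, high)
    raise ValueError(f"percentage out of range: {percent}")

# Count how many charts fall into each (criteria-met band, rating) cell.
cells = Counter((band_for(pct), rating) for pct, rating in charts)

print("criteria met".ljust(14) + "".join(r.ljust(16) for r in RATINGS))
for low, high in BANDS:
    label = band_label(low, high)
    row = "".join(str(cells[(label, r)]).ljust(16) for r in RATINGS)
    print(label.ljust(14) + row)
```

If the counts concentrate along the diagonal, from high percentages with high ratings down to low percentages with low ratings, the audit criteria and the external indicator are telling a consistent story about quality of care.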

On the basis of the data in Table 1, it seems safe to conclude that there is a high relationship between the ratings of care made by the patient and the audit results. However, before concluding that the criteria were valid, a number of other comparisons using data from another set of charts should be made. There are several more sophisticated techniques for estimating criterion-referenced validity, but since this material is available in other sources, it will not be discussed here.6

Summary. Screening criteria represent critical factors in nursing care. Only when criteria have been demonstrated to be both reliable and valid can they be considered true screening criteria. If criteria are demonstrated to be reliable and valid indicators of quality nursing care, they can be used to define the components of nursing practice. This can only be accomplished through rigorous testing and retesting by a variety of procedures.

Notes
1. Anna Schmidt, Carol Deets, "Writing measurable nursing audit criteria," AORN Journal 26 (September 1977) 495-499; Carol Deets, Anna Schmidt, "Process criteria based on standards," AORN Journal 26 (October 1977) 685-691; Carol Deets, Anna Schmidt, "Outcome criteria based on standards," AORN Journal 27 (February 1978).
2. American Nurses' Association, Guidelines for Review of Nursing Care at the Local Level, 1975.
3. S Hegyvary, S Gortner, R K Haussmann, "Development of criterion measures for quality of care: The Rush-Medicus experience," paper presented at the ANA National Invitational Conference, "Issues in Evaluation Research," Tucson, Ariz, 1975.
4. C J Hoyt, "Test reliability estimated by analyses of variance," in Principles of Educational and Psychological Measurement, W A Mehrens, R L Ebel, eds. (Chicago: Rand McNally & Co, 1967).
5. M G Kendall, "Rank correlation methods," in Fundamental Statistics in Psychology and Education, J P Guilford, Benjamin Fruchter, eds. (New York: McGraw-Hill, 1973).
6. N E Gronlund, Measurement and Evaluation in Teaching (New York: Macmillan Publishing Co, 1976).

Medical school enrollment increases

Enrollment in the 116 medical schools in the United States increased by 2,022 during 1976-1977, bringing total enrollment to 58,226, according to the American Medical Association's (AMA) 77th annual report on medical education published in the Journal of the American Medical Association.

First-year enrollment increased from 15,351 in 1975-1976 to 15,667 in 1976-1977, the AMA reports. The number of graduates increased from 13,561 to 13,607. The total number of women in medical schools in 1976-1977 was 13,059, an increase of 1,532 over the previous year.

In 1976-1977, full-time faculty members totaled 41,394, for a ratio of one teacher for each 1.4 students. In addition, more than 80,000 physicians and others taught part time.

The total enrollment of 15,667 first-year students was selected from a total of 42,155 applicants. For the second time in two years, the number of applicants declined slightly from the peak of 42,624 in 1974-1975. Each applicant applied to an average of nine schools at the same time, hoping for acceptance by at least one. By 1981-1982, the 116 medical schools project a first-year class of more than 16,000, with more than 16,000 graduates each year. Some additional medical schools will be in operation by that time.

Ethnic minorities enrolled in US medical schools in 1976-1977 totaled 4,841, or 8.2%. A total of 494 US students in foreign medical schools transferred to American schools with advanced standing at various levels. In graduate medical education, there was a decrease in the number of foreign graduates serving in house staff positions in US hospitals. The total at the start of 1977 was 15,097. There were 42,903 graduates of US medical schools serving as interns or residents.
