Observational Methods for Studying Medication Administration Errors
The validity and reliability of observational methods for studying medication administration errors (MAEs) were studied.
Between January and June 1998, two pharmacists observed consecutive drug administration rounds by nurses on two wards in a U.K. hospital and recorded all MAEs identified. The observers intervened in cases of potentially harmful errors. MAE records were audited to determine the percentage of omitted doses for which a corresponding reason was documented, for both observation and nonobservation periods. Error rates were analyzed according to whether each drug administration round was the nurse's first, second, third, or subsequent observed round. For nurses for whom an intervention was made, error rates were calculated before and after the first intervention. Observer reliability was assessed by comparing the rates of errors identified by the two observers.
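The before-and-after comparisons described above amount to comparing two error proportions. The paper does not state which statistical test was used; as an illustration only, the sketch below applies a standard two-proportion z-test to invented counts (the error and dose numbers are hypothetical, not from the study).

```python
# Hedged sketch: comparing MAE rates before and after an observer's first
# intervention with a two-proportion z-test. All counts are illustrative.
from math import sqrt, erf

def mae_rate(errors, opportunities):
    """MAE rate = observed errors / opportunities for error."""
    return errors / opportunities

def two_proportion_z(e1, n1, e2, n2):
    """Two-sided z-test for the difference between two error proportions."""
    p1, p2 = e1 / n1, e2 / n2
    pooled = (e1 + e2) / (n1 + n2)                      # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

# Hypothetical counts: 12 errors in 300 doses before, 10 in 280 after
z, p = two_proportion_z(12, 300, 10, 280)
print(f"before={mae_rate(12, 300):.3f} after={mae_rate(10, 280):.3f} "
      f"z={z:.2f} p={p:.3f}")
```

With these invented counts the difference is far from significant, mirroring the pattern of null findings the study reports.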
There was no difference between the observation and nonobservation periods in the percentage of omitted doses for which a reason was documented, and there was no change in the error rate with repeated observations. There was no difference in error rates before and after the first intervention for each nurse. There was also no difference in error detection between the two observers and no change with increasing duration of observation.
Observation of nurses during drug administration at a U.K. hospital did not significantly affect the MAE rate; nor did tactful interventions by the observers. Observer reliability was high. Concerns about the validity and reliability of observational methods for identifying MAEs may be unfounded.
An estimated 1-2% of patients admitted to U.S. hospitals are harmed as a result of medication errors, and each error results in an additional $5000 in costs, excluding legal costs. Less is known about the impact of medication errors in other parts of the world, but research suggests that there is no reason to be complacent. Medication errors are of two main types: prescribing errors and medication administration errors (MAEs).
Although a range of methods have been used to study MAEs, the observation-based method developed 40 years ago by Barker and McConnell is generally accepted as the most reliable. In this approach, a researcher accompanies nurses preparing and administering drugs, records details of all doses administered, and compares this information with the doses prescribed. The earliest observational studies took place in the United States; such studies have also been carried out in the United Kingdom and elsewhere. However, some key issues of validity (are we measuring what we think we are measuring?) and reliability (are the measurements reproducible?) have not been adequately addressed.
A major concern about the validity of observational data is the potential effect of the research on the individuals observed. People may behave differently when they know they are being studied than at other times, a phenomenon sometimes referred to as the Hawthorne effect. In the case of MAE research, the error rate could increase if the researcher causes distractions or makes nurses nervous; conversely, if nurses are more careful in the researcher's presence, the MAE rate may decrease. In many situations covert observation, in which the people being studied are unaware of the observation, is considered more likely to capture what really happens. In MAE research, however, the required proximity of the researcher to the nurse makes covert observation impossible. There has also been considerable debate over the ethics of covert research. Therefore, most observational studies of MAEs have employed disguised-observation techniques; nurses have been aware of the observation but unaware of its true purpose. For example, nurses have been told that the observation is for a work study or a study of problems associated with the drug distribution system.
Even when all precautions have been taken to minimize the effects of observation, it is still possible that the presence of a researcher may affect nurses' behavior. Several attempts have been made to identify such an effect. In the United States, Barker et al. interviewed 28 nurses, each of whom had been observed for five days, and concluded that the observation was unlikely to have biased the results. In the same study, the researchers also examined the MAE rates for each observed day for each nurse; they hypothesized that if the observer affected nurses' practice, this effect was likely to be greatest at the beginning of the observation period. Although no significant differences in the mean daily error rate were identified, the analysis of variance used may not have detected gradual changes in error rates over time. In the United Kingdom, Ridge calculated the MAE rate for each day in a seven-day observation period and found it to be slightly lower on the first day; the difference was not statistically significant. However, because of nursing-shift patterns, study days are unlikely to correspond to observation days for each nurse, and variables such as admission patterns may confound data analyzed according to day of the week. The questions about the effect of the observer are therefore only partially resolved.
A related issue is whether the observer should intervene, since intervention could have an additional effect on the observed error rate. In U.S. studies, the observer has generally not had access to the original medication order at the time of administration; he or she was therefore unaware of many errors as they occurred and was unable to prevent them. Indeed, it has been suggested that, to avoid liability, the researcher should actively avoid gaining familiarity with the original orders before observation.
However, in hospitals using the ward pharmacy system, such as those in the United Kingdom, the physician's original orders are kept with the patient and used to record administration; it is almost impossible for the researcher to avoid seeing the original medication order at the time of administration. Researchers are aware of errors as they occur and have an ethical dilemma regarding intervention; should they ignore the error to maximize the study's validity or should they intervene to protect the patient?
From a research point of view, intervention has three disadvantages. First, the nurse may not be given every chance to correct an error before the observer intervenes -- resulting in an artificially inflated error rate. Second, intervention could have an educational effect and prevent subsequent errors from occurring. Finally, unless carried out tactfully, intervention could introduce a judgmental dimension to the observation, resulting in distress to nurses and patients. However, the principles of duty-based ethics suggest that a health care professional has an obligation to intervene, and few would feel comfortable with the knowledge that they could have acted to prevent patient harm but did not do so. Many U.K. researchers have therefore intervened to prevent some or all errors from reaching the patient, but the effects of such interventions on the MAE rate are unknown.
A major disadvantage of observational research is that it is tiring, which could reduce observer reliability; observers may also process what they see or hear differently. In an early MAE study, Barker et al. attempted to assess interobserver reliability by asking two researchers to observe the same nurse. However, the two observers found it difficult to position themselves so that each could see the medication administered, and it was concluded that the assessment of interobserver reliability was impractical. No other researchers have attempted to address this issue, and little is known about observer reliability in this context.
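Where two observers can classify the same doses, agreement beyond chance is commonly quantified with Cohen's kappa. The study above compared the observers' overall error rates rather than per-dose judgments, so the sketch below is a complementary illustration with invented per-dose data, not a reconstruction of the study's analysis.

```python
# Hedged sketch: Cohen's kappa for interobserver agreement on binary
# per-dose error judgments. The data below are invented for illustration.

def cohens_kappa(obs_a, obs_b):
    """Agreement beyond chance between two observers' binary judgments."""
    assert len(obs_a) == len(obs_b)
    n = len(obs_a)
    p_o = sum(a == b for a, b in zip(obs_a, obs_b)) / n  # observed agreement
    pa1 = sum(obs_a) / n                                 # observer A's "error" rate
    pb1 = sum(obs_b) / n                                 # observer B's "error" rate
    p_e = pa1 * pb1 + (1 - pa1) * (1 - pb1)              # chance agreement
    return (p_o - p_e) / (1 - p_e)

# 1 = dose judged an error, 0 = judged correct (illustrative data)
a = [0, 0, 1, 0, 1, 0, 0, 0, 1, 0]
b = [0, 0, 1, 0, 0, 0, 0, 0, 1, 0]
print(f"kappa = {cohens_kappa(a, b):.2f}")
```

Kappa near 1 indicates near-perfect agreement; values near 0 indicate agreement no better than chance.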
The aims of this study were to explore the potential effects of observation on MAEs, to investigate the impact of the observers' interventions, and to assess observer reliability.