Harvard Study Continues to Distort Healthcare Quality Debate
by Richard E. Anderson, MD, FACP
The Harvard Medical Practice Study is often cited in discussions of healthcare quality and medical malpractice reform. Missing from this discussion is any mention of the study's critical flaws, which render its data virtually useless for the public policy debate and incapable of supporting the authors' own conclusions about the medical-legal system.
The study’s principal conclusions are: (1) Medical malpractice is common. (2) Relatively few injured patients actually sue. (3) One hundred and fifty thousand patients die annually because of their medical treatment. (4) There is no correlation whatsoever between medical negligence and the outcome of malpractice litigation. The Harvard authors deduced from these data that there is not too much malpractice litigation, but too little. Even though they admit the rate of meritless suits against physicians is high, they still view tolerating those suits as preferable to reforming the tort system.
Unfortunately, the Harvard imprimatur and publication in the New England Journal of Medicine have given the study an unwarranted credibility. The “findings” of the study have become a baseline, a kind of “conventional wisdom” in debates over healthcare quality, ERISA, and medical liability tort reform. “Consumer advocates” such as Consumers Union have cited the Harvard study to support their claim that doctors kill 80,000 patients a year, and the media consistently refer to the Harvard study findings as fact.
Predictably, this fundamentally flawed study has achieved near-cult status among plaintiffs’ lawyers.
The Harvard study was first published in 1991 and is based on a chart review of more than 30,000 hospitalizations in New York State in 1984. Screening personnel reviewed medical records searching for at least one of 18 criteria that would suggest an adverse event (an injury lasting more than 24 hours caused by medical management). Records that met any of the screening criteria were then referred to two physician reviewers who independently evaluated the cause of injury. If an adverse event was corroborated, it was then judged negligent if the care fell below the community standard.
Adverse events were found in 3.7% of the records, and 27% of these were judged to be due to negligence (1% overall). Marked variation was found among individual hospitals (adverse event rates ranging from 0.2% to 7.9%) and among medical specialties (from 1.5% for obstetrics to 16.1% for vascular surgery).
The study found approximately 180 deaths to be associated with adverse events. From this, the authors extrapolated that over 150,000 iatrogenic fatalities occur annually, more than half of which are due to negligence.
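The scale of that leap is easy to see with back-of-envelope arithmetic. The sketch below uses a round sample size and an assumed national admission count chosen purely for illustration; neither figure is taken from the study itself.

```python
# Back-of-envelope sketch of the kind of extrapolation the authors made.
# The study associated roughly 180 deaths with adverse events in a sample
# of about 30,000 hospitalizations.
deaths_in_sample = 180
sample_size = 30_000

death_rate = deaths_in_sample / sample_size  # 0.6% of sampled admissions

# ASSUMPTION for illustration only: about 25 million U.S. hospital
# admissions per year. This number is not drawn from the study.
annual_admissions = 25_000_000

national_estimate = death_rate * annual_admissions
print(f"projected deaths per year: {national_estimate:,.0f}")  # 150,000
```

The point is not the arithmetic but the leverage: every death in the small sample becomes more than 800 projected deaths nationally, so even modest misclassification in the chart review is magnified enormously.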
Fifty-one actual malpractice claims were found to have arisen from the records reviewed. The Harvard reviewers found no evidence of medical injury in 26 of those claims. In only nine did they find any evidence that medical malpractice had caused the injury. In fact, the authors found no relation between the presence or absence of medical negligence and the outcome of a claim. Indeed, there was no relation between the outcome of a claim and the presence or absence of an adverse event. The sole variable associated with outcome was degree of disability.
The study identified a total of 280 negligent adverse events. Eight of these resulted in claims, leading the authors to conclude that medical malpractice is far more common than medical malpractice lawsuits.
The study methodology failed to accomplish its primary task: the reliable identification of adverse events and negligent adverse events.
To identify an event as adverse, the two physician reviewers merely had to make that judgment “more likely than not.” This minimal 51% standard was met only 10% of the time.
The remainder of the study’s adverse events were generated by a mechanical methodology that averaged the two physician scores to create a discrete but obviously artificial data point. As if this benchmark were not sufficiently dismal, the authors did even worse in identifying negligent adverse events.
It gets worse. The authors performed a duplicate review of a subsample of 318 charts using a second set of reviewers. The second team failed to identify the same group of adverse events as the first team, but they did find about the same incidence of adverse and negligent adverse events. On this basis, the authors declared their data reliable.
This is roughly equivalent to saying it does not matter whether we convict the innocent or the guilty as long as the overall number of convictions matches the crime rate.
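The statistical point behind that analogy, that two review teams can report identical aggregate rates while agreeing on almost none of the individual cases, can be made concrete with a toy example. All chart IDs and counts below are hypothetical:

```python
# Toy illustration: matching aggregate RATES does not imply per-case
# agreement. Two hypothetical review teams each flag 10 of 100 charts,
# yet flag entirely different charts.
charts = set(range(1, 101))        # 100 hypothetical chart IDs

team_1_flags = set(range(1, 11))   # team 1 flags charts 1-10
team_2_flags = set(range(11, 21))  # team 2 flags charts 11-20

rate_1 = len(team_1_flags) / len(charts)  # 10%
rate_2 = len(team_2_flags) / len(charts)  # 10%
overlap = team_1_flags & team_2_flags     # charts flagged by BOTH teams

print(f"team 1 rate: {rate_1:.0%}, team 2 rate: {rate_2:.0%}")
print(f"charts both teams flagged: {len(overlap)}")  # 0
```

Identical rates, zero agreement on cases: equal incidence figures alone say nothing about whether the right charts were identified, which is precisely the reliability the study's conclusions require.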
Even granting the authors all their own assumptions, the data are simply not reliable, and should not be extrapolated to the real world of malpractice litigation. Moreover, there is no reason to grant the authors their own assumptions. The study lumped together adverse events both grave and minor, whether caused by doctors or simply occurring anywhere in a hospital. A slip and fall in a hospital corridor, for example, was grouped indistinguishably with surgical error and misdiagnosis.
Furthermore, the definition of an adverse event required only 24 hours of impact. The vast majority of adverse events identified and analyzed in the study are minor and never would have been, nor should they ever be, the subject of litigation. Since the study is based exclusively on an inpatient sample, by definition it deals only with the most seriously ill, where the interaction of therapy and disease will inevitably produce a significant fraction of adverse events.
Additionally, the physician reviewers used for the study were not chosen as specialists in the medical areas they reviewed. They would not have qualified as experts in most courtrooms.
Finally, the authors themselves found no correlation between their identification of adverse and negligent adverse events, and the outcome of malpractice litigation.
The Harvard study may be about many things, but medical malpractice is not among them.
The study sought to draw conclusions about the legal system by correlating malpractice claims with negligent adverse events. Matching the Harvard authors’ flawed notion of negligent adverse events against actual medical malpractice cases is a facile “apples and oranges” comparison.
The extrapolation from very small numbers to national epidemics (180 becomes 150,000) is not only unfounded, it is reckless. Even though the authors themselves cautioned against “too quick a comparison” of the fatalities caused by malpractice in their data with any presumed national toll, this did not keep them from offering just such an extrapolation. Worse, they remained silent while opponents of tort reform such as Consumers Union claimed the Harvard study showed negligent doctors kill more people each year than firearms or automobile accidents.
The most remarkable findings of the study are rarely aired in public discourse.
It found that the majority of malpractice suits do not even involve identifiable medical injury. Further, it found that neither adverse events nor medical negligence affects the outcome of malpractice litigation.
These conclusions are truly extraordinary.
To ignore them and state that the “real problem” is not too much malpractice litigation but too little, and that the admittedly high rate of meritless suits against physicians is preferable to reform of the tort system, is stunning.
High profile studies, like computer viruses, are difficult to eradicate. They appear in newspaper articles, self-serving press releases, and not surprisingly, in congressional testimony.
In October 1997, the National Coalition on Health Care released a Rand study of healthcare quality.
The 57-page document includes only a one-line reference to the Harvard study. Yet the press release contained the line “180,000 people die each year as a result of medically induced injury or negligence.”
The story was carried nationally under headlines like “U.S. Healthcare Can Kill, Study Says.” Reuters said the Rand group “cited a separate Harvard University study that estimated 180,000 people died each year because of medically induced injury or negligence.”
The President’s Advisory Commission on Consumer Protection and Quality in the Health Care Industry included the 180,000 death figure in its draft report to the President in March. Although the erroneous figure was removed in the final report, it was too late to undo the damage to public perception. The draft had already been released to the press, including the AP wire service and USA Today.
Quick action by PIAA President Lawrence Smarr and The Doctors Company’s Washington representatives produced this retraction from USA Today: “The statistics do not appear in the panel’s final report after their accuracy was questioned by some medical groups.”
The Harvard study is too widely cited to be ignored, yet it is too flawed to be credible.
The continued misuse of these data to justify attacks on physicians and insurers, and to resist change in the legal system, has contributed to a widening fissure in the foundation of our healthcare system: public faith in physicians and hospitals.
Cynical press reports of inflated fatality “statistics” add to the toll exacted by unfounded malpractice suits: unjustly accused physicians and the enormous societal costs of defensive medicine.
The healthcare community must respond critically to continued press references to the Harvard study and its projections.
If reporters, legislators, and activists are allowed to allege with impunity that hospitals and physicians kill hundreds of thousands annually “according to the Harvard study,” then outrageous extrapolations will remain the common denominator in healthcare reporting.
We must persist in educating policy makers, journalists, and the public in order to replace junk science with responsible discourse.
The allegation that doctors kill 80,000 patients a year is junk science.
Dr. Anderson is a medical oncologist, clinical professor of medicine at the University of California at San Diego, and chairman of the Board of Governors of The Doctors Company. He is the author of An Epidemic of Medical Malpractice? A Commentary on the Harvard Medical Practice Study, published by The Manhattan Institute.