[quote]Gambit_Lost wrote:
DrSkeptix wrote:
Bill Roberts wrote:
DrSkeptix wrote:
The origin of these inflated “estimates” of unnecessary deaths is a notoriously inaccurate NEJM article of the early 1990’s, in which an epidemiologic estimate of population risk was extrapolated to the population as a whole. In short, these deaths did not happen. (Compare all this to a similarly poor estimate of Iraqi deaths in the Lancet.)
Further, the estimates include “errors”–whether grave or not–in patients who die of their disease or irremediable problems. (Dana Carvey had a grave error, but did not die. Many patients have minor mistakes which do not contribute to a lethal diagnosis.)
I do not condone errors, but perfection does not exist anywhere. The IOM seems to have compounded its own error through exaggeration.
Good point. I have heard and read similar numbers before – perhaps ultimately coming from the same source, I don’t know – and the claims always seemed bizarre to me.
Not that I know it can’t be right, but it doesn’t seem plausibly right.
If you are interested, here is a critique of methodologic errors in the IOM report, and in the influential Harvard claptrap of 1991 (see ref. 6).
Of course, this is off topic, and does not detract in the least from snipeout’s point.
Perhaps I, too, would prefer my chances with the police rather than with doctors. I have often said that with a shortage of prisons, we should sentence criminals to medical care…but that would probably constitute cruel and unusual punishment.
I’m interested in learning more about this. I ran a quick google scholar search and it seems most of the articles I found referenced the 1999 IOM report you’re speaking of. However, most of them agreed with the numbers and some even suggested they may be low. Your article makes a good critique of the stats, however.
Do you happen to know of any other, similar studies that you can quickly reference here?
Cheers
[/quote]
OK. Here is a link to the IOM whitepaper, but let’s start with Methodology:
You will notice that it is a grandiose accumulation of nonsense: a survey vetted by 19 “experts.” In other words, this isn’t good social science; it isn’t even, if you’ll excuse the expression, good epidemiology.
Those citing this article, I propose, do so for political reasons.
I cannot download the 1991 Harvard paper (HPM). It has been roundly criticized, but thoughtful criticism may not show up in Google searches (not popular enough).
If I were to devise a model for estimating unnecessary deaths due to medical errors I would look to one high-risk specialty: anesthesia. The estimate of deaths due to anesthesia in Classes 1-4 (patients not expected to die) is in the neighborhood of 1/10,000. This includes deaths not due to errors.
The HPM paper estimated a 3.7% risk of errors leading to a disabling condition (not death). In many studies, the chance that any error leads to or contributes to death is estimated at 7%. So, even at the lower boundary of these estimates, shouldn’t the HPM study have found roughly 3 to 5 deaths in their sample of 30,000? But none were reported (in the abstract).
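The arithmetic behind this objection is easy to check. Here is a minimal sketch using only the figures quoted in this thread (the 1/10,000 anesthesia benchmark, the 3.7% disabling-error rate, and the 7% error-to-death figure); the variable names are mine, not from any cited study:

```python
# Back-of-envelope check of expected deaths in a 30,000-record sample,
# using the two benchmarks quoted in the thread.

sample_size = 30_000  # records reviewed (per the post)

# Benchmark 1: anesthesia mortality, ~1 in 10,000 (Classes 1-4)
anesthesia_rate = 1 / 10_000
expected_low = sample_size * anesthesia_rate  # -> 3.0

# Benchmark 2: HPM's own figures, 3.7% disabling-error rate combined
# with a ~7% chance that an error leads to or contributes to death
error_rate = 0.037
death_given_error = 0.07
expected_high = sample_size * error_rate * death_given_error  # -> 77.7

print(f"Expected deaths, anesthesia benchmark: {expected_low:.1f}")
print(f"Expected deaths, HPM error figures:    {expected_high:.1f}")
```

Either way you run the numbers, a sample of 30,000 should have contained at least a handful of deaths attributable to error; a report of zero sits below even the most conservative benchmark.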
Not only are the estimates of 90,000 unnecessary deaths implausible (the cemeteries would be overflowing, and the courts would be packed with suits), but the HPM researchers could not even agree on the pre-study definitions of “error.” (To my memory, there was no concordance study to verify internal agreement.)
As in computers, so in social or medical policy: garbage in, garbage out.
