We seem to be accumulating a large tongue-in-cheek evidence base (mostly thanks to the BMJ Christmas issue) around the effectiveness of parachutes in preventing injury upon falling out of an airplane.
As far as I know, it started with a legend about “The Army Parachute Study” (naturally, I learned about it while I was in the Navy): an N-of-2 RCT of parachute use for jumping from a perfectly good airplane at 10,000 ft. When the predictable (but imaginary) result data were analyzed by chi-square testing, using the appropriate corrections for small cell size, there was no statistically significant difference between the parachute group and the no-parachute group. The lesson, naturally, was about the importance of sample size determination to avoid a type 2 error (failing to reject a false null hypothesis).
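The arithmetic behind that null result is easy to reproduce. Here is a minimal sketch, using only the Python standard library (the `fisher_exact_p` helper and the one-jumper-per-arm table are my own illustration, not anything from the legend), of a two-sided Fisher's exact test on the imagined 2x2 outcome table:

```python
from math import comb

def fisher_exact_p(table):
    """Two-sided Fisher's exact test p-value for a 2x2 table
    [[a, b], [c, d]], by enumerating every table with the same margins."""
    (a, b), (c, d) = table
    r1, r2 = a + b, c + d      # row totals (group sizes)
    c1 = a + c                 # first column total (e.g., survivors)
    n = r1 + r2

    def prob(x):
        # hypergeometric probability of x in the top-left cell
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)

    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(c1, r1)
    # sum the probabilities of all tables as or more extreme than observed
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

# One jumper per arm: the parachute jumper survives, the other does not.
table = [[1, 0],   # parachute:    1 survived, 0 died
         [0, 1]]   # no parachute: 0 survived, 1 died
print(fisher_exact_p(table))  # 1.0
```

With one participant per arm, even the most extreme possible table yields p = 1.0, so no result could ever reach significance; that is exactly the type 2 error trap the legend is meant to teach.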
Then we had the systematic review of parachute use studies. Finding no evidence from their literature search to support parachute use, the authors could not recommend it. Lessons here: 1) the absence of proof is not proof of absence and 2) recommendations in guidelines should clearly distinguish between insufficient evidence and a recommendation against an intervention.
Those lessons were about study design and validity. Now we have a lesson about the applicability/generalizability of studies. It’s critical to examine a study through a “real-world” lens. As EBM has gained traction, most studies’ validity is pretty good (citation needed). The main issues now center on ensuring that a study examines the appropriate patients, has meaningful comparators, and is done in clinically realistic circumstances. Physicians must critically evaluate the inclusion and exclusion criteria and consider whether their own patients would generally meet them. Would you use the control intervention as the study did, or does that comparator dose seem a little low? In this Christmas issue study, the reader must ask – what sort of fall was used to test the parachute intervention?
As the authors put it: “However, the trial was only able to enroll participants on small stationary aircraft on the ground, suggesting cautious extrapolation to high altitude jumps. When beliefs regarding the effectiveness of an intervention exist in the community, randomized trials might selectively enroll individuals with a lower perceived likelihood of benefit, thus diminishing the applicability of the results to clinical practice.” (Parachute use to prevent death and major trauma when jumping from aircraft: randomized controlled trial)
“Cautious extrapolation” indeed!
Other parachute references:
- A questionably serious attempt to prove that parachutes do work. The investigators used rag dolls, since “in accordance to the Declaration of Helsinki, the participation of human beings in this trial was impossible”. They analyzed the rag doll injuries and concluded that parachutes were likely effective in humans.
- An interesting article taking exception to the use of parachutes as an analogy for medical interventions. The authors looked at articles that cited the BMJ 2003 parachute systematic review and analyzed how the authors of those citing studies used the parachute analogy. They were not impressed, stating “Most parachute analogies in medicine are inappropriate, incorrect or misused.”
I’ve used both the Army Parachute Study and the 2003 systematic review in my teaching. With these additions, I think I could fill an entire EBM talk with parachute studies. I wonder whether that’s a good thing.