NNT and person-year data: a journey

In 1999, I partnered with two residents to write a letter to the editor of the BMJ. The topic: why our calculated NNTs for the UKPDS 38 outcomes were different from the authors’ reported NNTs.

This letter to the editor was a wonderful teaching idea from a mentor of mine, Bob Kiser (@KODB on Twitter) at USNH Jacksonville, FL, where I was junior faculty at the time. The UKPDS 38 authors had analyzed their outcome data using life table analyses, Cox proportional hazards modeling, etc. – all perfectly reasonable – but had not really explained how they calculated their NNTs. To be fair, we certainly didn’t have a better idea of how to calculate the NNTs ourselves, but we asked for clarification. The authors responded in the journal, explaining their calculations, and I thought, “Well, that was fun.”

Apparently, there was more to the story…a GP researcher in the UK, Kev Hopayian, was concerned that this confusion was causing “false debates” about statistics and worried about how NNTs were being interpreted and used by clinicians. He wrote a response to the BMJ and published an article based on the issues raised in our letter! He also did some informal data collection that indicated there was real disagreement and confusion about this amongst GPs.

I was semi-blissfully unaware of all this until just this week. I was asked to help a colleague figure out an NNT from a longitudinal follow-up of the metformin arm of the Diabetes Prevention Program study. In researching the best answer – I knew this was still a topic of some disagreement – I found all this. Remember, this happened in the earlier internet – think MySpace rather than Twitter – so I didn’t have the luxury of auto alerts or a good MEDLINE interface. It’s fun to think that someone else took our relative naivete about EBM seriously enough to do their own work on it.

Back to the issue, though: if you’re dealing with person-year data for your outcomes, it seems to boil down to a few options (there’s a rough code sketch after the list):
1. Simply calculating an NNT based on the final tally of outcome vs. no outcome (if you can find this data). This can be wrong because of variable length of exposure to the intervention. For survival analyses, this method would be moot if you had long enough follow-up to see everyone achieve the outcome.
2. Using the person-year outcome data to calculate a per-year NNT (this website is helpful), then dividing that by the length of follow-up in the study. This can be wrong because of censoring of data as well as variable length of intervention exposure.
3. A more complicated approach uses the hazard ratio as an exponent in part of the calculation. This is mainly for time-to-event outcomes, and statisticians seem to think it helps, but I would need to sit with it for a while to understand it.
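
For my own sanity, here’s a rough Python sketch of how I understand the three approaches. The function names and all the numbers are mine, invented purely for illustration (they’re not from UKPDS 38 or the DPP), and the third function is my reading of the usual survival-based formula for time-to-event outcomes (the one often attributed to Altman and Andersen), so treat this as a sketch rather than gospel.

```python
# Rough sketch of the three approaches above. All numbers are invented,
# internally consistent illustrations -- not data from any real trial.

def nnt_from_final_tally(events_control, n_control, events_treated, n_treated):
    """Approach 1: NNT from the final event counts, ignoring follow-up time."""
    arr = events_control / n_control - events_treated / n_treated
    return 1 / arr

def nnt_from_person_years(rate_control, rate_treated, years_of_followup):
    """Approach 2: per-year NNT from event rates per 100 person-years,
    then divided by the length of follow-up in the study."""
    arr_per_year = (rate_control - rate_treated) / 100
    per_year_nnt = 1 / arr_per_year
    return per_year_nnt / years_of_followup

def nnt_from_hazard_ratio(control_event_free_at_t, hazard_ratio):
    """Approach 3: time-to-event NNT at time t, using the hazard ratio as an
    exponent on the control group's event-free proportion at that time."""
    treated_event_free_at_t = control_event_free_at_t ** hazard_ratio
    arr = treated_event_free_at_t - control_event_free_at_t
    return 1 / arr

# Hypothetical 10-year trial: 20% vs. 15% cumulative events,
# ~2.2 vs. ~1.6 events per 100 person-years, hazard ratio ~0.73.
print(round(nnt_from_final_tally(200, 1000, 150, 1000), 1))  # ~20.0
print(round(nnt_from_person_years(2.2, 1.6, 10), 1))         # ~16.7
print(round(nnt_from_hazard_ratio(0.80, 0.73), 1))           # ~20.1
```

With those toy numbers, all three approaches land in the same clinical ballpark (roughly 17 to 20).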

It’s worth noting that approaches 1 and 2 give similar, though not equal, results. Closer than just the same order of magnitude, at least. And really, the point of NNTs is to be a clinically useful measure for figuring out a ballpark magnitude of effect for a given intervention. Attempts at increasing precision fly in the face of this “clinical utility.” So maybe it doesn’t really matter…

I haven’t found a more definitive answer to this – so if anyone has one, I’d like to hear about it. Hit me up on Twitter! (@jwemd)