
Mathematical Validity of CVM Risk Assessment

Dr. Tony Cox

DR. COX: Thank you. I am pleased and surprised to discover that I am the lunchtime speaker.

(Laughter.)

(Slide.)

Whenever you see a model with several dozen input parameters, you are entitled to wonder: does the whole thing hang together; do the outputs follow from the inputs; is this thing valid? And I guess I could get us to lunch pretty quickly by saying yes and stepping down.

I thought I should give a little bit more detail. But I will move quickly. To say the model has been validated, or to address the mathematical validity of the model, is going to come down to two things: Is it sound, meaning that the calculations are correct given its assumptions? And is it useful, meaning that the assumptions are ones that we can live with?

And you will notice that the big assumption is that the incidence of bad outcomes that we don't want is proportional to the volume of outgoing chicken. That proportionality constant, K, is the key assumption. And then there are a lot of little assumptions.
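
To make that concrete, here is a minimal numerical sketch of the proportionality assumption; the value of K and the volume figure below are purely illustrative placeholders, not numbers from the assessment.

```python
# Illustrative only: the central structural assumption is that expected adverse
# outcomes scale linearly with the volume of chicken consumed.
K = 1.2e-7        # assumed risk per unit of chicken volume (made-up value)
volume = 5.0e9    # assumed annual chicken volume, in consistent units (made-up value)

expected_cases = K * volume
print(expected_cases)   # 600.0 with these invented numbers
```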

And so I want to spend the next few minutes, fewer than ten, fewer than eight, the next few minutes just looking at the key assumptions and then saying why I think that this is a pretty good approach. It is a pretty sensible study. It does hang together.

It has to make a few baroque assumptions to get across big data gaps. But it is very explicit about that. So all in all, I think it is a job well done. I want to invite you to critically examine a few assumptions and see if you share that conclusion.

The strength of the model is its listing of all the parameters, most of the assumptions, and the key uncertainties about those things, so that any one of us can reproduce at least the calculations. That, of course, is attractive.

(Slide.)

Among the explicitly listed assumptions are things like attribution of fluoroquinolone resistance to chicken, stability of risk estimates over time and across populations, assumptions about care-seeking behavior. Of course, these are areas where there is a lot of uncertainty. There is probably a lot of variability.

But the narrow validation question is: do the conclusions follow from the premises? Do the assumptions correctly propagate through to give the risk values? Within that narrow context, we can make any sort of assumptions we want and just ask, well, is the calculation accurate? And the calculation should be pretty accurate. I will come back to that to suggest how we can quantify the accuracy.

But it is also, I think, fair to say a model is more than a set of assumptions and a set of conclusions. What it is is a way of calculating outputs, calculating conclusions from inputs. So if you don't like the assumptions, change them. I mean, that is why it is a model instead of just a statement of what someone believes to be true.

But in addition to the explicit assumptions which I think are well handled, there are some implicit assumptions. By the way, I think those are pretty appropriately handled, too. But I want to pull some of those out.

(Slide.)

And in the interest of hunger, I am going to focus on just the ones of these that are most interesting. The first is independence. One assumption made throughout is that we can take a lot of input parameters and treat them as if they are statistically independent.

So I want to say a few words about that. I think extrapolation between populations we are going to pretty much skip over. It is obviously important. There is always room for refinement. But I think that beyond saying those things, there is a bunch of technical details. Right now, the truth is we don't know how well the FoodNet population represents the larger U.S. population.

I myself, having grown up in Virginia, think, you know, people who live in the South eat more chicken. So to what extent is the geographic balance there? The answer is we don't know. So let's acknowledge that uncertainty, say it is worth looking at in more detail eventually, and move on. You are using a simple ratio, which is probably an appropriate starting place.

Similarly, for folks who are interested in modeling, there is a lot of interesting stuff to be said about aggregation of event sequences. Something that I see as a very strong part of this model is the calculation of one big probability by careful examination and eduction of data from a whole bunch of little probabilities that multiply into it.

We could talk, and it would be fun if you are interested in modeling, about, well, do you do better by estimating the whole, big probability. I am talking here about the product of what's the likelihood that you get sick, that you go to see a doctor, that he prescribes a drug, that your tests are positive and so forth.

There is a statistical issue, which is: do you do better by trying to model the product of all those things, or by trying to model each piece and then multiplying them together, or by doing both and realizing that you need to get the same answer whichever way you do it? And those are interesting technical details.

You might be able to slightly reduce your uncertainty about the results if you exploit the fact that there is more than one way to calculate the same answer. Now, that is a little abstract, the kind of stuff that we could talk about under aggregation of event sequences. But I plan to skip it because I don't think it makes much difference in this analysis.
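
As a rough sketch of that point about aggregation, the following compares estimating the one big probability end to end against estimating each stage and multiplying the pieces; the stage probabilities and sample size are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stage probabilities: get sick, see a doctor, get a prescription, test positive.
stages = [0.01, 0.3, 0.5, 0.2]
p_true = float(np.prod(stages))     # the "whole, big probability"

n = 100_000                         # simulated number of people observed (illustrative)

# Approach 1: estimate the aggregate probability directly from end-to-end outcomes.
end_to_end_estimate = rng.binomial(1, p_true, size=n).mean()

# Approach 2: estimate each stage separately, then multiply the stage estimates.
per_stage_estimate = float(np.prod([rng.binomial(1, p, size=n).mean() for p in stages]))

# Either route should recover p_true; exploiting both can shrink the uncertainty slightly.
print(p_true, end_to_end_estimate, per_stage_estimate)
```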

And, finally, I will say a little bit about modeling of input uncertainties and suggest some things that might be done to further boost the comfort in this model which I think starts pretty high. Okay.

(Slide.)

So the independence assumption I do think is worth noting. And the question here is: should inputs be modeled as statistically independent, which is how they are modeled right now? For example, the probabilities of care-seeking behavior among those with bloody and non-bloody diarrhea are each modeled separately as being drawn from some appropriate gamma distribution.

My question would be: if you learned that one of those is much higher than expected -- suddenly people are all hypochondriacs and they are rushing to the doctors, you know, immediately -- might that affect your beliefs about the other of these two parameters? Is it only people with bloody diarrhea who are hypochondriacs? I mean, I wouldn't blame them.

(Laughter.)

But if it is a social phenomenon, being surprised on one might indicate that you might be surprised on the other. So all the formulas in the model can be generalized immediately by conditioning each component of the product on all the things that have preceded it.

And I will simply note that that is one area for exploration, where we could look more carefully at possible dependencies among inputs. The expected impact of that generalization is small provided that independence is a reasonable approximation. And now suddenly I am talking about the real world, what is going on physically. Is this a reasonable approximation? And I don't know the answer to that.

So I will say mathematically it would make sense to allow for the study of dependencies among inputs. I am inclined to think that it wouldn't change the answer a whole lot. But I don't know that for a fact.
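
One way to picture what dependence would change is sketched below: two care-seeking probabilities drawn independently from gamma distributions, versus driven by a shared "willingness to seek care" factor, so that a surprise on one implies a surprise on the other. The shapes, scales, and shared factor are invented for illustration and are not the model's actual priors.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Independent draws, in the spirit of separate gamma priors (parameters invented).
p_bloody_ind = rng.gamma(shape=20.0, scale=0.02, size=n)      # mean ~0.4
p_nonbloody_ind = rng.gamma(shape=10.0, scale=0.02, size=n)   # mean ~0.2

# Dependent draws: a common latent factor scales both care-seeking probabilities.
shared = rng.gamma(shape=5.0, scale=0.2, size=n)              # invented shared factor, mean ~1
p_bloody_dep = 0.4 * shared
p_nonbloody_dep = 0.2 * shared

print(np.corrcoef(p_bloody_ind, p_nonbloody_ind)[0, 1])   # near 0: learning one tells you nothing
print(np.corrcoef(p_bloody_dep, p_nonbloody_dep)[0, 1])   # near 1: surprise on one carries over
```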

(Slide.)

Okay. Extrapolation, I promised you I would skip over this. So I will. Aggregation of events, I already spent more time introducing it than I had intended to spend talking about. So I am going to skip past that.

(Slide.)

Modeling input uncertainties: the middle point, that uncertainty about joint distributions and dependence among inputs ought to be analyzed further, I would make that the recommendation.

If it turns out that the community generally, for political or other reasons, wants to push on this initial analysis and say we have got to be more comfortable before accepting the calculation of outputs from inputs, then I think making these, I suspect, minor refinements would be worthwhile.

In the same vein, there are a number of technical options for estimating joint distributions of inputs, including the Bayesian approach that David has taken and including the frequentist approach, which looks an awful lot like it.

There are other approaches that could be explored. And if one wanted to push hard on building comfort in the input-output calculator, I would recommend looking at some additional technical approaches.

Again, probably the details aren't that important. But I will be delighted to share them with you after lunch.

(Slide.)

Okay. Model formula uncertainty. One of the biggest problems in most models with a few dozen input parameters is that you are not only uncertain about the inputs that go into this thing, but you are very uncertain about the formulas for combining them.

An admirable attempt has been made in this piece of work to make all the formulas just logical identities. There are supposed to be no empirical dose-response relations or anything that might be complicated.

Despite that fact, David said I might mention -- and, in fact, I am going to mention -- the fact that whenever you have even a ratio of uncertain quantities, you have to be quite careful: the ratio of means is not the mean of the ratios.

There may be biases, although they should be small, that arise from uncertainties about formulas and from the fact that there may be multiple numerators, multiple denominators that are getting munged together, munged being a technical term. The less technical term is mixture distribution.

In any case, there may be some slight biases there. I don't think they would invalidate the main conclusions of the model.
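
That caution about ratios is easy to check numerically. The sketch below uses invented lognormal quantities simply to show that the mean of a ratio is not the ratio of the means; the distributions have nothing to do with the model's actual inputs.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# Two independent, positive, uncertain quantities (distributions invented for illustration).
numerator = rng.lognormal(mean=0.0, sigma=0.5, size=n)
denominator = rng.lognormal(mean=0.0, sigma=0.5, size=n)

mean_of_ratios = np.mean(numerator / denominator)
ratio_of_means = np.mean(numerator) / np.mean(denominator)

print(mean_of_ratios)   # roughly 1.28: E[X/Y] = E[X] * E[1/Y] > E[X]/E[Y] for independent X, Y
print(ratio_of_means)   # roughly 1.00
```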

Okay. Now, let me wind up. There is a concept at the very end of this slide, simulation calibration. And let me share that with you because this is a recommendation for something else that should be done.

Oh, one more major point. All statistics and all mathematics aside, I hope that many of you noticed that the spider diagrams show a range of uncertainty that is pretty darn small, typically a factor of two on the Y axis. Those of you who have been involved in other risk assessments might be used to a factor of 10^6 on the Y axis.

So from a certain standpoint, the sensitivity analyses to me build a lot of confidence in the range of results we are going to get out. And all this probabilistic tweaking is a small refinement inside a really narrow range by risk analysis standards.

So here is the thing that I think would be a good idea and that I would urge for consideration as a possible extension of this work and not necessarily a very difficult one. If we take the whole model, it is a big calculator. Let's look at it as a black box right now. And we want to know, well, how biased, if at all, are the outputs that it gives, how trustworthy are the outputs that it gives.

One option for doing that is to drive this model, exercise it, using a front-end simulator that says, look, we are going to make up a -- an expected nominal number of cases. We are going to make up a true value.

Then we will simulate what random sampling from a large population might yield given that true value. Are you with me so far? We are going to simulate what is going on. We are going to simulate the sampling process.

Then, by gosh, we take that simulated data from the sampling process and run it exactly through the model just the way the model is right now. The model is a big black box. You put stuff in, you can get stuff out.

What you get out is the estimate of the true but unknown quantity. But wait a minute. The quantity is known in the simulation context. You start knowing the right answer. You drive it through the process. You see what the model says, compare it to the right answer which you knew going in. I recommend that that be done.

I expect that the calibration curve will look like a 45-degree line meaning -- or will be close to a 45-degree line. I would be surprised if it were spot on. But my point here is that we don't have to conjecture about whether the logic of the model is so well developed that we are sure we are going to get the right answer.
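
In code, the calibration exercise could look roughly like the loop below, treating the risk assessment as a black box. The population size, the grid of "true" risks, and the stand-in estimator are all invented placeholders; the point is only the shape of the exercise: start from a known answer, simulate the sampling, run the model, and compare.

```python
import numpy as np

rng = np.random.default_rng(3)


def black_box_model(sampled_counts, population):
    """Stand-in for the risk model: turns simulated surveillance counts into a risk estimate.

    The real model is far more elaborate; this placeholder just scales counts back to a rate.
    """
    return sampled_counts.mean() / population


population = 1_000_000                       # invented surveillance population size
true_risks = np.linspace(1e-5, 1e-3, 20)     # invented "known" true risks driving the simulation

for truth in true_risks:
    # Simulate the sampling process many times at this known true risk...
    counts = rng.binomial(population, truth, size=500)
    # ...then push the simulated data through the model exactly as it stands today.
    estimate = black_box_model(counts, population)
    # A well-calibrated model traces the 45-degree line: estimate == known truth.
    print(f"truth={truth:.2e}  model estimate={estimate:.2e}")
```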

We can find out by being much stupider about it, not trying to reason our way through it. Just say, well, here is the right answer; sample from it, exercise the model, do we recover the right answer? So I would recommend that. I love the sensitivity analyses. We could do more on things like sensitivity to population heterogeneity.

Since I am a mathematician, I have no problem saying things like, well, if one person ate all the chicken that was produced, that would limit the number of cases you would see. All right?

(Laughter.)

In conclusion, the model structure and calculations are well documented and logical. I think the model has good face validity. The model-based risk projections are credible in the sense that the logic isn't unsound; given the assumptions, the conclusions, I expect, do follow.

Uncertainties in input quantities are explicitly and I think by and large appropriately modeled, although one can quibble about technical details. I recommend doing the calibration exercise that I have just mentioned. Thank you.

(Applause.)

DR. BEAULIEU: Thank you, Dr. Cox. We apologize for cramping your style, which is considerable in any event. In the interest of appetites, we are going to --

DR. COX: Is it chicken for lunch?

(Laughter.)

DR. BEAULIEU: That's up to those folks out there having heard this morning's presentation. Are there, in fact, any questions from the mathematicians in the audience for Dr. Cox? One. David Vose.

(Away from microphone.)

DR. VOSE: Yes, one question. I would just say I think it is a great idea doing that calibration --

DR. BEAULIEU: Thanks, David. I have done a terrible job of keeping us on time this morning as you have noticed. I would try to get everybody back in here by 1:30. At that point, folks are going to be up here talking I would anticipate. So try to be back here by 1:30.

DR. SUNDLOF: I have one other announcement. I said earlier this morning that we would be out of here by 5:30 sharp. Since the last two presentations, I have done an uncertainty analysis. And with 95 percent confidence now, we will finish somewhere between 5:00 and 6:00. Okay.

(Whereupon, a luncheon recess was taken.)