Numbers!

Ah! I found the source of the not-a-number problem. Great success! It turns out that you cannot permit parameters to range too widely. In particular, one part of my function is an exponential convolved with a Gaussian resolution. To normalise this, you calculate an integral over the range of the time variable, which in my case is -5 to 10 picoseconds. However, the range is expressed in lifetimes, and the fit was trying to calculate this integral with the lifetime equal to 5 femtoseconds, so the lower bound comes out to roughly 1000 lifetimes. The exponential of 1000 is infinity, or rather, positive overflow. Multiply that by 0 and you get NaN, because floating-point arithmetic treats infinity times zero as indeterminate. (Although the infinity itself does seem to be a case where the computer science disagrees with the math convention: division by zero ought either to crash or to give NaN in the first place, not infinity.) So, by limiting the permitted range of lifetimes to something much closer to the measured value of 411 femtoseconds, I can avoid this particular problem.
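
For the morbidly curious, here is the failure in miniature. This is not my actual fit code, just a self-contained C++ sketch with made-up variable names and the numbers from above, showing how an overflowed exponential times an underflowed Gaussian tail becomes NaN:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Toy version of one term in the normalisation integral: the analytic
    // exponential-times-Gaussian-tail form blows up when the lifetime gets tiny.
    double t_low = -5.0;    // lower edge of the time window, in picoseconds
    double tau   = 0.005;   // 5 femtoseconds -- the value the fit wandered off to
    double expo  = std::exp(-t_low / tau);  // exp(+1000): overflows to +inf
    double tail  = 0.0;                     // Gaussian tail factor, underflowed to zero
    double term  = expo * tail;             // inf * 0 gives NaN
    std::printf("exp = %g, term = %g\n", expo, term);
    return 0;
}
```

Clamp the lifetime near the measured 411 femtoseconds and the exponent is about 12 rather than 1000, safely inside what a double can represent.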

I’m not out of the woods yet, though, because there is a reason I thought I might safely leave the lifetime range so wide open: to wit, the fit should not have had to go anywhere near these limits! And indeed, it turns out its reasons for doing so are bad news. In particular, the likelihood (which the fit tries to maximise), viewed as a function of that mixing parameter rejoicing in the designation ‘x’, has a minimum at x=0. Now this is Monte Carlo, that is, simulated events, so I know perfectly well what the correct answer is: to wit, x=0. (It’s not that I’ve made a sign error, either. I checked.) Well, the MINUIT algorithm hates maxima. (For historical reasons, MINUIT takes the negative logarithm of the likelihood and minimises that, which is equivalent to maximising the likelihood. So my likelihood minimum appears to MINUIT as a maximum.) So it pushes x well away from zero, to get into a region where the second derivative is positive and it can do its thing. (Of course this is very wrong, but the poor fit doesn’t know that.) But with x=-0.2, things get very strange; the physical value is known to be less than one percent. So now other variables, including the lifetime, start to have negative second derivatives as well; the fit pushes them far away from their roughly-correct starting values; the integral calculation tries to take the exponential of 1000 lifetimes; and boom, two weeks of debugging.
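
To make the ‘MINUIT hates maxima’ business concrete, here is a toy stand-in for the negative log-likelihood as a function of x. It is nothing like the real four-dimensional beast except in the one relevant pathology: a local maximum at the true value x=0.

```cpp
#include <cmath>
#include <cstdio>

// Toy negative log-likelihood in the mixing parameter x, everything else held fixed.
// It has a local maximum at x = 0 (the true value) and minima out at x = +/-1,
// which is exactly the shape that sends a minimiser running away from the answer.
double toyNLL(double x) {
    return -x * x + 0.5 * std::pow(x, 4);
}

int main() {
    // Finite-difference second derivative at x = 0: it comes out negative
    // (about -2), so a descent-based minimiser steps away from zero until
    // the curvature turns positive again.
    const double h = 1e-3;
    double d2 = (toyNLL(h) - 2.0 * toyNLL(0.0) + toyNLL(-h)) / (h * h);
    std::printf("d2(NLL)/dx2 at x = 0: %g\n", d2);
    return 0;
}
```

In this toy the curvature turns positive out near x = ±1; in the real fit it happened around x = -0.2, which is where the trouble with everything else began.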

Still, this is great progress. I am now in the situation of merely having a very bad fit to the data. This is highly preferable to having an actual bug! There are any number of reasons why the fit might go bad in this way. Perhaps I’ve got the efficiency description wrong. Perhaps the efficiency depends on the decay time, which my function assumes it doesn’t. Perhaps my Dalitz-plot model is wrong – in fact this is my best guess at the moment, because the model I’ve got is based on a study made by a fellow UC grad student (now gone on to pastures postdoctoral) a year ago, and the simulation program might well be using something older, or perhaps something K-matrix based as opposed to my Breit-Wigner description. Or both. Opportunities for merely bad fits abound! There are an infinite number of ways of describing four-dimensional data badly, and I have clearly hit on one of them. But I know how to search for a good one; searching for inexplicable NaNs was much more frustrating.
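
Since I mentioned the Breit-Wigner versus K-matrix question: the building block of my Dalitz-plot description is, schematically, the usual Breit-Wigner resonance amplitude. The sketch below uses a placeholder mass and width rather than anything from the actual model, and leaves out the form factors and angular terms the real thing carries.

```cpp
#include <complex>
#include <cstdio>

// Schematic fixed-width Breit-Wigner amplitude: 1 / (m0^2 - s - i*m0*Gamma),
// with s the squared invariant mass of a pair of daughters. The real model
// sums several of these (with form factors and angular terms); a K-matrix
// description would replace that sum with a unitarity-respecting construction.
std::complex<double> breitWigner(double s, double m0, double gamma) {
    return 1.0 / std::complex<double>(m0 * m0 - s, -m0 * gamma);
}

int main() {
    // Placeholder resonance: mass 0.77 GeV, width 0.15 GeV (illustrative only).
    double m0 = 0.77, gamma = 0.15;
    std::printf("|A| on resonance:  %g\n", std::abs(breitWigner(m0 * m0, m0, gamma)));
    std::printf("|A| off resonance: %g\n", std::abs(breitWigner(1.2 * 1.2, m0, gamma)));
    return 0;
}
```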
