Ruler
Liaison Newsletter

Rick Danielson Jr. (rickedanielson@gmail.com), Halifax, Nova Scotia, Canada

A statistician and physicist were discussing an experiment over a pot of tea, when this hypothetical exchange occurred: “I know we are working in the right units, but do you think that some of our measurements are nonlinear?” Puzzled, the physicist took a sip before responding, “Are you saying that individual measurements could be nonlinear? Is that a thing?” The statistician also paused to take a sip, “Well, Kruskal (1988) wrote about Mahalanobis (1947) using rulers with nonlinear increments. They were worried about people making correlated errors, but I guess that doesn’t apply to our instruments.” “No,” the physicist agreed, “not if we’re talking about repeated errors,” and the statistician added, “We do check our instruments.” Then the two finished each other’s thoughts again, “But if we were completely familiar with this experiment …” “ … we wouldn’t be doing it!”

One purpose of this hypothetical exchange is to acknowledge that nonlinear measurements are unfamiliar. However, this exchange also illustrates a kind of model hierarchy. First, there is confirmation that a new idea exists (“is that a thing?”), then a decision about whether the idea is relevant (“No.”), and finally, an allowance of the unknown. In a sense, measures and models of the experiment are provided, respectively, by the statistician and physicist in different parts of the exchange. Together, they infer what actions to take, if any. Perhaps the only notable thing is that this happens in just a few words.

Our other purpose is to question whether an admittedly ill-defined notion of nonlinear measurement might fit within a statistical model hierarchy. In addressing a wide audience, Salsburg (2017) emphasizes that the central tenet of statistical inquiry is that measurements = truth + error. In other words, measurements are linearly related to truth. Does this imply that our ruler has linear increments, or that we are already using the most appropriate unit? Instead, measurements taken to be equal to linear + nonlinear + unassociated would seem to confound the central tenet, at least if truth = linear and error = unassociated, because the nonlinear part fits in neither category. While that might sound critical, what about our hypothetical exchange, which results in an agreement to adapt the experiment, but no action based on what is unknown or unfamiliar?
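To make the distinction concrete, here is a minimal, hypothetical Python sketch (not from Salsburg 2017 or Mahalanobis 1947): it simulates readings from a ruler whose increments are slightly nonlinear, with the quadratic distortion, noise level, and variable names all being illustrative assumptions only.

import numpy as np

rng = np.random.default_rng(0)

truth = np.linspace(1.0, 10.0, 200)        # true lengths being measured
noise = rng.normal(0.0, 0.05, truth.size)  # unassociated (random) error

# Hypothetical nonlinear ruler: its increments stretch smoothly along its
# length, so the distortion is systematic rather than random.
nonlinear = 0.02 * (truth - truth.mean()) ** 2

measurement = truth + nonlinear + noise    # linear + nonlinear + unassociated

# Under "measurement = truth + error", everything besides truth is lumped into error.
lumped_error = measurement - truth
print("mean of lumped error:      ", lumped_error.mean())  # pulled away from zero
print("mean of unassociated error:", noise.mean())         # near zero by construction

In this toy setting, the lumped error inherits the systematic part of the distortion, which is the sense in which the nonlinear term fits in neither the truth nor the error category.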

Perhaps we need to consider the central tenet as part of a hierarchy of models of varying complexity and familiarity. Such an idea is not exactly novel in biology and geophysics (Held 2005). Our hypothetical exchange can be offered as another accessible analogy. In other words, the decision to act is based on the central tenet (a truth + error model), but the conversation certainly doesn’t end there. Nor should it end here. For instance, Mahalanobis (1947) predicts that as measurements become more precise, systematic bias should be easier to detect. It would be interesting to see whether this prediction has held over the years since it was made, with ongoing advances in metrology, climate measurements, and their units (Feistel et al. 2016).
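As a rough illustration of that prediction (our own sketch, not Mahalanobis’s procedure), the following Python snippet holds a small systematic bias fixed while the random error shrinks; the bias, sample size, and crude detectability score are assumptions for the example only.

import numpy as np

rng = np.random.default_rng(1)
bias = 0.1   # fixed systematic offset in the instrument
n = 50       # repeated measurements of the same reference standard

for sigma in (1.0, 0.3, 0.1, 0.03):            # increasingly precise instrument
    errors = bias + rng.normal(0.0, sigma, n)  # measurement minus reference value
    # Crude detectability: sample mean of the errors in units of its standard error.
    score = errors.mean() / (errors.std(ddof=1) / np.sqrt(n))
    print(f"sigma = {sigma:4.2f}   bias-to-noise score = {score:6.1f}")

As the random error shrinks, the same fixed bias stands out more clearly against the noise, consistent with the prediction described above.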

References

Feistel, R., R. Wielgosz, S. A. Bell, M. F. Camões, J. R. Cooper, P. Dexter, A. G. Dickson, P. Fisicaro, A. H. Harvey, M. Heinonen, O. Hellmuth, H.-J. Kretzschmar, J. W. Lovell-Smith, T. J. McDougall, R. Pawlowicz, P. Ridout, S. Seitz, P. Spitzer, D. Stoica, and H. Wolf. 2016. “Metrological Challenges for Measurements of Key Climatological Observables: Oceanic Salinity and pH, and Atmospheric Humidity. Part 1: Overview.” Metrologia 53, R1–R11, doi:10.1088/0026-1394/53/1/R1.

Held, I. M. 2005. “The Gap Between Simulation and Understanding in Climate Modeling.” Bulletin of the American Meteorological Society 86, 1609–1614.

Kruskal, W. 1988. “Miracles and Statistics: The Casual Assumption of Independence.” Journal of the American Statistical Association 83, 929–940.

Mahalanobis, P. C. 1947. “Summary of a Lecture on the Combination of Data from Tests Conducted at Different Laboratories (Reported by J. Tucker Jr.).” American Society for Testing Materials Bulletin 144, 64–66.

Salsburg, D. S. 2017. Errors, Blunders, and Lies: How to Tell the Difference. CRC Press, Boca Raton, Florida, 154 pp.

This article was co-published with COSN and the CMOS Bulletin.

 
