A Conversation with Jim Zidek

Friday, March 12, 2010


Editor's Note: Liaison is pleased to reinitiate the publication of interviews with some of our statistics stalwarts. Jim Zidek kindly agreed to provide our first interview; it was designed, conducted and edited by Nancy Heckman. They sat and chatted for quite a few hours in Vancouver this past spring. We are grateful to Nancy and Jim for making it all possible.
 

Jim Zidek - photo by Peter Macdonald
 

 

Jim in his bantam league days in Red Deer, prior to playing for a semipro team in Didsbury, Alberta.
 


In 1974, Jim served on the Local Arrangements Committee for the International Congress of Mathematicians, held at UBC.
 

 

Jim Zidek, at the reception marking the end of his second term as head of the UBC Statistics Department.
 

Jim Zidek is currently Professor in the Department of Statistics at the University of British Columbia, a department he was instrumental in founding in 1983. His research ranges from his foundational work in decision theory and group Bayes analysis to his applied work in environmetrics. He is a Fellow of the Institute of Mathematical Statistics and the American Statistical Association and a member of the International Statistical Institute. He has received numerous awards, including the Ninth Eugene Lukacs Symposium Distinguished Service Award, the Statistical Society of Canada's Gold Medal, the Distinguished Achievement Medal of the Environmental Statistics Section of the ASA, and the Izaak Walton Killam Research Prize. He has served the statistical community in many ways, including as NSERC Group Chair of the Mathematical Sciences, President of the Statistical Society of Canada, member of numerous editorial boards, and currently as Advisory Editor for CRC.
 

Jim was born in Acme, Alberta, and lived in many small towns throughout that province. His high school principal encouraged him to pursue mathematics and to attend university. He received his B.Sc. at the University of Alberta in 1961 and his M.Sc. in 1963. He then went on to Stanford, receiving his Ph.D. in 1967. Afterwards, he moved to UBC where he has remained, except for study leaves and a year in which UBC almost lost him to the fledgling University of Washington Department of Statistics.
 

When did you first discover an interest in statistics?
 

As an undergraduate at the University of Alberta. I had a big argument with my friend Glen. He described a confidence interval to me. I thought this was nonsense. I could not understand what he was on about, so we had this tremendous argument that stretched out, and I decided to take statistics, actually, because of that. I thought I needed to understand this stuff better. Anyway, I didn't do very well, but it led me to think in the next year that I should take another statistics course. I found statistics a difficult subject.
 

Looking back, do you have any sense of what you were struggling with?
 

I was really struggling a lot with conceptual issues. For example, the instructor Ernest Keeping had a question on his exam where he said that a deck of cards was shuffled and one card was dealt face down on the table, and what was the probability it was an ace. By this time I had managed to learn the mantra that goes with confidence intervals, that you mustn't say that there is a probability of .95 that μ lies between one and seven, say - it's either one or zero. It's just that you don't know which it is. So I took up this same mantra and applied it back again to this card, which is now face down on the table. There is nothing random about that card. I argued in my little essay that the probability was either zero or one. Of course I got zero on the question.
 

But you figured things out enough to continue studying, going on to Stanford. With whom did you work there?
 

With Charles Stein. I took a course that he gave on decision theory. In his classes, there were huge numbers of open questions, basically questions that revolved around his intuition, thoughts that he had and crude approximations that he put on the board that weren't rigorous in any sense. I was completely carried away by this subject, and actually by Charles himself. He was a very impressive individual in a whole variety of ways.
 

I also worked with Herman Rubin for a year. That was a totally different type of experience. Herman has an electrifyingly quick mind, a huge IQ, very quick to solve things. Charles has an entirely different sort of intelligence. You don't notice him quickly jumping to analytical results but he does things in a more intuitive heuristic sort of way. I learned a lot from Herman, a lot of technical stuff. Applications of special functions, asymptotics and so on. It was quite interesting to be working with the two of them at the same time.
 

What papers came out of your thesis work?
 

There was a paper on the extreme quantiles of the normal, which was an incomplete solution to a problem that Charles suggested [1]. In 1971 [2] I published a complete solution. It was quite an intriguing problem because Stein had shown that if you were estimating a variance for a normal distribution with an unknown mean then the usual estimator is inadmissible. And of course the admissibility of the sample mean had been known for a long time. Now the question about the quantiles took on a special significance, because a quantile is really just μ plus a constant times σ. So, when you take a linear combination of those two things, what happens - inadmissible or admissible?
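
In symbols (a minimal sketch using notation that is not from the interview): the p-th quantile of a normal law is a linear combination of the mean and the standard deviation,
\[ q_p = \mu + z_p\,\sigma, \qquad z_p = \Phi^{-1}(p), \]
so a natural estimator has the form \( \bar{X} + c\,S \) for some constant \( c \) depending on p and the sample size, and the admissibility question asks whether, under squared error loss, such an estimator can be beaten uniformly when p is extreme.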
 

As well, I had gotten into group theory and equivariance and all that, so that led to the Haar measure paper [3], which was something on the Bayesian side that paralleled something Jack Kiefer had done much earlier for minimaxity results. He had shown in the '50s that, under certain conditions, something that was minimax within the class of equivariant estimators was minimax overall. I was basically showing that the same kind of thing was Bayes.
 

Then there is the work on sufficient conditions for admissibility [4], which was actually stimulated by Stein's work on the central limit theorem. In fact, it uses very much the same kind of argument.
 

Who or what was influential in shaping your Bayesian statistical thinking?
 

I had my first leave at University College London in around 1970. I thought at that stage it might be interesting to go there and study some more, from a different perspective. Dennis Lindley was the head of the department at that time and Phil Dawid and Mervyn Stone were there. Here's this frequentist wandering, quite innocently, into this lions' den, and in there I found these three and others who helped me to see statistics in a different way. It was a tremendous experience.
 

It started off innocently enough with lunch hour conversations; somebody might ask me - why would anybody be interested in the mean squared error, for example. I thought that was a silly question. Of course, this is the average squared error you would see if you had a large number of replications of your experiment. They would say - why would you want to calculate that? You are only doing the experiment once. I was usually tormented by these seemingly dumb questions, but as time went on I began to realize that they were actually very sophisticated questions. By the time I left London, I was beginning to see a lot of the difficulties attached to the frequency theory and I was beginning to understand Bayesian theory.
 

Of course, everybody understands Bayesian theory at some level - that is, they understand there is a prior, you turn this crank and you get a posterior, and so on. That is a trivial level of understanding, but there is a deeper level you need to get to before you can begin to understand the subject of statistics. It's not that you would necessarily end up embracing the Bayesian paradigm by any means, but I think that is the level at which the discussion has to take place. What is probability? Does probability exist as a real thing in some sense? It was a lively year, I must say, at an intellectual level. It generated the work I did with Phil and Mervyn that eventually became a read paper on the marginalization paradox [5]. We had a tremendous amount of fun with that work.
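
To make the "crank" concrete, here is a minimal conjugate-updating sketch (a standard textbook example rather than anything from the interview; the prior and the data are made up):

```python
# Turn the crank: Beta prior + binomial data -> Beta posterior.
# The prior parameters and the counts below are illustrative assumptions.
from scipy import stats

a, b = 2.0, 2.0              # Beta(2, 2) prior on an unknown success probability
successes, failures = 7, 3   # hypothetical observed data

# By conjugacy, the posterior is Beta(a + successes, b + failures).
posterior = stats.beta(a + successes, b + failures)
print(posterior.mean())            # posterior mean, here 9/14
print(posterior.interval(0.95))    # central 95% posterior interval
```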
 

Do you consider yourself a Bayesian?
 

I am not absolutely sure what that word means. Certainly there would not appear to be, from my perspective, any other satisfactory foundation on which to try and build statistical theory - but having said that I think there are lots and lots of problems associated with trying to build on that foundation. The short answer I guess is, yes, I am a Bayesian in that sense. One thing I did learn early on is that the joint distribution where you include parameters as well as random variables does provide a powerful tool for thinking about problems. I found this to be a really useful tool even when I was thinking about consulting problems for example, in really complicated situations where there were zillions of things around. This provided a wonderful framework for trying to integrate my thinking process.
 

A Bayesian approach involves eliciting opinions. Group Bayesianity in some sense struggles with that. What is group Bayesianity about?
 

That is a very underdeveloped topic that I first worked on with Sam Weerahandi ([6], [7]). Of course it has quite a lot of importance because commonly nowadays groups of experts are convened to make joint decisions, and if you take a normative approach, you might view each of these experts as being Bayesian or being rational enough to behave approximately as if they were Bayesian. Then the question arises: how would they normatively arrive at a good decision? So at one level it is an operational question of considerable importance. I worked with Christian Genest [8] on aggregating prior probability distributions. We had a large number of derivations of different types, which pointed to this one way of doing it - namely, taking a geometric average of the priors.
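
As a rough sketch of what a geometric (logarithmic) pool looks like in practice (the grid, the two priors and the equal weights below are illustrative assumptions, not anything prescribed in [8]):

```python
import numpy as np

# Two experts' prior densities for a scalar parameter, tabulated on a common grid.
theta = np.linspace(-5.0, 5.0, 1001)
dt = theta[1] - theta[0]

p1 = np.exp(-0.5 * (theta - 1.0) ** 2)          # expert 1: centred at 1, sd 1
p2 = np.exp(-0.5 * ((theta + 0.5) / 2.0) ** 2)  # expert 2: centred at -0.5, sd 2
p1 /= p1.sum() * dt                             # normalize to (approximate) densities
p2 /= p2.sum() * dt

# Geometric (logarithmic) pool with equal weights, renormalized to integrate to 1.
w = 0.5
pooled = p1 ** w * p2 ** (1.0 - w)
pooled /= pooled.sum() * dt
```

Unequal weights would simply change the exponents; the renormalization step is what makes the pooled function a proper density.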
 

But there is also a conceptual level that Sam and I considered, a level at which you might even imagine infinite populations of Bayesians - that is the part of this that I really got into. It's kind of a replacement for another conceptual device called the sample space in ordinary frequency theory. This does not exist either except in the mind of the person who is trying to assess a statistical procedure.
 

The basic idea is that you can imagine every one of this infinite population bringing a set of prior beliefs based on some experience of the world, and then it is up to you as to how you choose that conceptual group as your test environment. But each of them will have, roughly speaking, a set of hyperparameters which characterize their view. You now look at the Bayes risk, which becomes a function of those hyperparameters, as a kind of summary of that individual's views. You can then vary the hyperparameters across the range of these Bayesians that you have elected to use as your hypothetical experimental base and examine the risk function if you like, and you have what looks pretty much like a classical Wald theory, except that it is Bayes risks and not classical risks that we are talking about. The same mathematical theorems of course apply. That is the way that I got to looking at this conceptual foundation.
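
In symbols (a minimal sketch, with notation that is not from the interview): if each hypothetical Bayesian is indexed by a hyperparameter \( \eta \) specifying a prior \( \pi_\eta \), the Bayes risk of a procedure \( \delta \) is
\[
r(\eta, \delta) \;=\; \int \left[ \int L\big(\theta, \delta(x)\big)\, f(x \mid \theta)\, dx \right] \pi_\eta(d\theta),
\]
and it plays the role here that the risk function \( R(\theta, \delta) \) plays in Wald's theory, with \( \eta \) ranging over the chosen population of Bayesians instead of \( \theta \) ranging over the parameter space.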
 

How did your work with Constance van Eeden come about?
 

I approached Constance van Eeden because of her work in order-restricted inference and because she had become a colleague of mine. When trying to bring Wald theory into the group Bayes context [9], I realized you get into order-restricted parameter spaces quite naturally. When you create this infinite population of Bayesians, it doesn't always make sense to think that the whole real line is representing all the relevant Bayesians. After all, we share a common experience of the world; you would guess that our θ's would somehow be related. The papers I have written with Constance look pretty much like the papers you would see in the Wald literature. But in my own thinking, they have certainly also been about the group Bayesian problem.
 

One of your first applied works was on bridge design - was that a consulting project?
 

Yes. There was an interest in replacing the Lions Gate Bridge. Although it wasn't quite up to modern standards at the time, was it nonetheless sufficient to bear the traffic for which it was currently being used? I think that was my first really major project of an applied nature and it was very enlightening. It really taught me how superficially I had understood statistics - what do you mean by independence of two sections of bridge deck loads? I had a gloss that came straight out of probability theory, but at that time I really didn't appreciate things like conditional independence - conditioning on parameters made the data independent, but if you didn't condition on those parameters then the data were not independent. For those engineers, the parameters didn't exist in their own minds, so those deck sections were very closely related. From that project came a strong philosophical belief that consulting is quite essential to really coming to grips with the meaning of some of the statistical things we think about all the time.
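
The conditional-independence point can be put in one line (an illustrative sketch; the notation is not from the bridge project): if the section loads \( L_1, \dots, L_n \) are independent given a common parameter \( \theta \), with \( E[L_i \mid \theta] = \theta \), then marginally
\[
\operatorname{Cov}(L_i, L_j) \;=\; \operatorname{Var}\big(E[L_i \mid \theta]\big) \;=\; \operatorname{Var}(\theta) \;>\; 0, \qquad i \neq j,
\]
provided \( \theta \) is itself uncertain, so once the parameter is integrated out the loads are positively correlated, which is exactly the dependence the engineers saw.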
 

Apart from that, it was an extremely interesting project.
 

There were no standards for the construction of long span bridges like the Lions Gate Bridge. With a long span bridge it would be completely unrealistic to assume the heaviest possible loading, as that would entail a huge cost of construction. So there was quite a lot of interesting theory on extreme values that I got into [10]. We eventually published what I think was the first code for construction of long span bridges at the time - it was adopted by the American Society of Civil Engineers.
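
As a rough illustration of the kind of extreme-value calculation involved (the Gumbel model, the simulated loads and the 100-year return period below are illustrative assumptions, not the method of [10]):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Pretend annual-maximum deck loads (arbitrary units), one value per year for 50 years.
annual_max = stats.gumbel_r.rvs(loc=100.0, scale=10.0, size=50, random_state=rng)

# Fit a Gumbel (type I extreme-value) distribution to the annual maxima.
loc, scale = stats.gumbel_r.fit(annual_max)

# 100-year return level: the load exceeded in any given year with probability 1/100.
return_level_100 = stats.gumbel_r.ppf(1.0 - 1.0 / 100.0, loc=loc, scale=scale)
print(return_level_100)
```

Designing for a chosen return level, rather than for the heaviest conceivable loading, is what keeps the construction cost realistic.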
 

How did you become involved in environmetrics?
 

It started in the 1980s with a consulting project about the start-up of drilling in the Beaufort Sea and Harrison Bay. They needed a design for before- and after-drilling measurements. In particular, the National Oceanic and Atmospheric Administration was concerned that there might be some deleterious effect on the small organisms in the marine environment.
 

After this, Don Thompson, the President of SIMS (formerly the SIAM Institute of Mathematical Sciences, then the Societal Institute for Mathematical Sciences), approached me about getting involved in a US federal government program to assess the trends in acid deposition, which became a multi-centre study with Jim Ware and company at Harvard and Paul Switzer and company at Stanford. John Petkau eventually became a co-investigator with me at UBC. The Seattle group was later spun off from the UBC group as a separate centre.
 

There were a lot of gems that came out of that. Particularly important from my perspective was the work that has now gone on with Nhu Le in a long string of papers. The first one was with Bill Caselton on design [11], very much in the same spirit as the 1992 spatial prediction paper I wrote with Nhu Le [12]. Those two papers together then formed the basis of a lot of subsequent research. Questions keep coming up. For instance, a basic problem is that these monitoring sites start up at different times, so you get the data coming to you in the form of a staircase. The lowest step corresponds to the newest station to start measuring these things, while the tallest step is the one from the oldest. You want some sort of efficient way of accommodating that staircase [13].
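
The staircase is easy to picture with a toy data matrix (the station names, start times and values below are made up for illustration):

```python
import numpy as np

# Rows are months, columns are monitoring stations; NaN marks months before a
# station came online. Older stations give taller columns of data, hence the staircase.
n_months = 8
start_month = {"oldest_station": 0, "middle_station": 3, "newest_station": 6}

rng = np.random.default_rng(1)
data = np.full((n_months, len(start_month)), np.nan)
for j, start in enumerate(start_month.values()):
    data[start:, j] = rng.normal(size=n_months - start)

print(data)  # each column is fully observed from its start month onward (a monotone pattern)
```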
 

If at some point in the future you could look back on your life and say this was my greatest accomplishment, what would that be?
 

I couldn't really point to any one thing. I got into this business trying to understand statistics. I don't really consider that I have actually managed to do that, but I think I have gotten some way along, and that I consider quite an accomplishment. I think we all are in a collective grope, still trying to find the foundations and also the practical applications of our subject in other fields. They are all kind of tied together, so I think that is an accomplishment we all need to work towards in future.
 

References

  1. Zidek, J.V. "Inadmissibility of the best invariant estimator of extreme quantiles of the normal law under squared error loss." Annals of Mathematical Statistics 40: 1801-1808 (1969).
  2. Zidek, J.V. "Inadmissibility of a class of estimators of a normal quantile." Annals of Mathematical Statistics 42: 1444-1477 (1971).
  3. Zidek, J.V. "A representation of Bayes invariant procedures in terms of Haar measure." Annals of the Institute of Statistical Mathematics 21: 291-308 (1969).
  4. Zidek, J.V. "Sufficient conditions for the admissibility under squared error loss of formal Bayes estimators." Annals of Mathematical Statistics 41: 446-456 (1970).
  5. Dawid, A.P., Stone, M. and Zidek, J.V. "Marginalization paradoxes in Bayesian and structural inference." J. Royal Statistical Society Ser. B 35: 189-233 (1973).
  6. Weerahandi, S. and Zidek, J.V. "Multi-Bayesian statistical decision theory." J. Royal Statistical Society Ser. A 144: 85-93 (1981).
  7. Weerahandi, S. and Zidek, J.V. "Elements of multi-Bayesian decision theory." Annals of Statistics 11: 1032-1046 (1983).
  8. Genest, C. and Zidek, J.V. "Combining probability distributions: a critique and an annotated bibliography." Statistical Science 1: 114-148 (1986).
  9. van Eeden, C. and Zidek, J.V. "Group Bayes estimation of the exponential mean: a retrospective view of the Wald theory." Proceedings Fifth Purdue International Symposium on Statistical Decision Theory and Related Topics (eds. S.S. Gupta and J.O. Berger), 35-49 (1994).
  10. Zidek, J.V., Navin, F.D.P., and Lockhart, R. "Statistics of extremes: an alternate method with application to bridge design codes." Technometrics 21: 185-191 (1979).
  11. Caselton, W.F. and Zidek, J.V. "Optimal monitoring network designs." Statistics and Probability Letters 2: 223-227 (1984).
  12. Le, N.D. and Zidek, J.V. "Interpolation with uncertain spatial covariances: a Bayesian alternative to Kriging." Journal of Multivariate Analysis 43: 351-374 (1992).
  13. Le, N.D., Sun, L. and Zidek, J.V. "Spatial prediction and temporal backcasting for environmental fields having monotone data patterns." Canadian Journal of Statistics 29: 529-554 (2001).


About the Interviewer
 

Photo of Nancy Heckman
 

Nancy Heckman is a Professor and a colleague of Jim's at UBC. She received her Ph.D. from the University of Michigan at Ann Arbor in 1982 and spent a few years in New York before joining the UBC Department of Statistics in 1984. Her research involves smoothing methods and functional data analysis.