Monte Carlo Methods and Inference for Stochastic Processes
Chair: Rob Deardon (University of Guelph)
- CHRISTIANE LEMIEUX, University of Waterloo
A Study of Quasi-Monte Carlo Methods Via Dependence Concepts
Quasi-Monte Carlo methods are multidimensional numerical integration methods that often provide more accurate estimators than the naive Monte Carlo method. The constructions underpinning these methods are designed to provide a form of structured sampling that can exploit certain characteristics of the integrand under study. These methods are typically studied using function decompositions, for example based on Fourier, Walsh or Haar series. In this talk, we propose an alternative way of studying these methods based on dependence concepts such as those introduced by Lehmann in 1966. This allows us to gain some new insight into these methods.
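For intuition, the contrast the abstract draws between naive Monte Carlo and structured quasi-Monte Carlo sampling can be sketched as follows; the 2-d Halton point set, the test integrand, and the sample size are illustrative choices, not the constructions or the dependence-based analysis of the talk.

```python
import numpy as np

def halton(n, base):
    """First n points of the van der Corput sequence in the given base."""
    seq = np.zeros(n)
    for i in range(n):
        f, k = 1.0, i + 1
        while k > 0:
            f /= base
            seq[i] += f * (k % base)
            k //= base
    return seq

# Integrate f(x, y) = exp(-(x^2 + y^2)) over the unit square;
# the true value is (int_0^1 exp(-x^2) dx)^2, about 0.5577.
f = lambda x, y: np.exp(-(x**2 + y**2))
n = 1024

# Naive Monte Carlo: i.i.d. uniform points.
rng = np.random.default_rng(0)
mc = f(rng.random(n), rng.random(n)).mean()

# Quasi-Monte Carlo: a 2-d Halton point set (bases 2 and 3),
# a low-discrepancy construction that spreads points more evenly.
qmc = f(halton(n, 2), halton(n, 3)).mean()

print(mc, qmc)
```

On a smooth integrand like this one, the Halton estimate typically lands much closer to the true value than the i.i.d. estimate at the same sample size.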
- JOURDAN GOLD, University of Guelph
An Investigation of Multi-stage MCMC Approaches for Sampling from Correlated, Discrete Target Distributions
We investigate the effectiveness of various MCMC algorithms for sampling from highly correlated, discrete target distributions. The relative effectiveness of various multi-stage MCMC approaches, including algorithms based upon random-walk and independence samplers, as well as hybrid combinations of these, will be considered. Kullback-Leibler divergence and effective sample size will be used to assess algorithm quality in this comparison.
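The two diagnostics named above can be computed as follows; the autocorrelation-based ESS estimator and the example chains are illustrative sketches, not the talk's specific samplers or targets.

```python
import numpy as np

def effective_sample_size(chain):
    """ESS of a 1-d chain from its initial positive autocorrelations."""
    x = np.asarray(chain, dtype=float)
    n = len(x)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[n - 1:] / (x.var() * n)
    # Sum autocorrelations until the first non-positive lag.
    tau = 1.0
    for r in acf[1:]:
        if r <= 0:
            break
        tau += 2 * r
    return n / tau

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions on a common support."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# An i.i.d. chain versus a strongly autocorrelated AR(1) chain.
rng = np.random.default_rng(1)
iid = rng.standard_normal(2000)
ar = np.empty(2000)
ar[0] = 0.0
for t in range(1, 2000):
    ar[t] = 0.9 * ar[t - 1] + rng.standard_normal()
```

The correlated chain yields a far smaller ESS than the i.i.d. one of the same length, which is exactly the loss the comparison in the talk quantifies.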
- LOTFI KHRIBI, HEC Montréal
The Poisson Maximum Entropy Model for Homogeneous Poisson Processes
We suggest a Bayesian model with a maximum entropy prior distribution to predict the number of future events for subjects already under observation. The intensity function used to model these events is that of a homogeneous Poisson process with unknown rate parameters. The prior distribution for these unknown rates is obtained by maximizing the entropy subject to the condition that the first two moments equal the empirical ones. We find from a simulation study and from a warranty dataset from the automobile industry that the maximum entropy prior is preferable to the noninformative Jeffreys prior.
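The maximum entropy prior itself is not specified in the abstract; as a hedged sketch of the same prediction task, the following substitutes the conjugate gamma prior, under which the posterior predictive count over a future window is negative binomial. All numbers (event count, window lengths, prior hyperparameters) are hypothetical.

```python
from math import lgamma, exp, log

# Hypothetical data: k events observed over a window of length t
# for one subject under observation.
k, t = 4, 2.0

# Gamma(a, b) prior on the Poisson rate (rate parametrization);
# the posterior is then Gamma(a + k, b + t).
a, b = 2.0, 1.0
a_post, b_post = a + k, b + t

# Future window of length s.
s = 1.0

def predictive_pmf(m):
    """P(m future events in the window) under the gamma-Poisson model:
    a negative binomial with size a_post and success prob b_post/(b_post+s)."""
    p = b_post / (b_post + s)
    return exp(lgamma(a_post + m) - lgamma(a_post) - lgamma(m + 1)
               + a_post * log(p) + m * log(1 - p))

probs = [predictive_pmf(m) for m in range(20)]
```

The predictive mean works out to a_post * s / b_post (here 2.0), so the prediction shrinks the observed rate toward the prior, as any of the priors compared in the talk would.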
- LARISSA VALMY, Université des Antilles et de la Guyane
Statistical Inference for Point Processes Associated with a Dirichlet Process
We consider events of a spatio-temporal point process whose intensity is associated with a hidden process. Two situations are studied. In the first, the data consist of maps of the spatial positions of new occurrences between two consecutive observation dates. In the second, we have count data in spatial units from a systematic random sampling. MCMC techniques are developed for statistical inference on a Cox process partially directed by a Dirichlet process.
- LIWEN ZOU, North Carolina State University
Fitting Nonstationary General-time-reversible Models to Obtain Edge-lengths and Frequencies for the Barry-Hartigan Model
The Barry and Hartigan (BH) model is very flexible, but due to an identifiability problem its parameters cannot be expected to consistently estimate the actual pairwise frequencies. We define a nonstationary general-time-reversible (NSGTR) model for each edge and fit it by minimizing the distance between the transition-probability estimates under the NSGTR and BH models. The best-fitting NSGTR estimates make the internal-node frequency vectors interpretable and yield edge-length estimates that the BH model does not otherwise provide; these edge lengths are interpretable as the expected number of substitutions along an edge.
- ANNALIZA MCGILLIVRAY, McGill University
A Penalized Quasi-Likelihood Approach for Estimating the Number of States in a Hidden Markov Model
In applications of hidden Markov models (HMMs), one may have no knowledge of the number of states (or order) of the model needed to represent the underlying process of the data. We present a penalized quasi-likelihood method for order estimation in HMMs which utilizes the fact that the marginal distribution of the HMM observations is a finite mixture. The method starts with an HMM with a large number of states and obtains a model of lower order by clustering and merging states through two penalty functions. Its performance is assessed theoretically, via simulation, and with the help of a real data application.
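The fact the method rests on, that the marginal of a stationary HMM observation is a finite mixture weighted by the stationary distribution of the hidden chain, can be illustrated directly; the 3-state Gaussian HMM below is a hypothetical example, not the paper's estimator.

```python
import numpy as np

# Hypothetical 3-state HMM: transition matrix for the hidden chain,
# with Gaussian state-specific emission distributions.
P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
means, sd = np.array([-2.0, 0.0, 2.0]), 1.0

# Stationary distribution pi solves pi P = pi: the left eigenvector of P
# (eigenvector of P^T) associated with eigenvalue 1, normalized to sum to 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

def marginal_density(y):
    """Marginal density of one HMM observation: a finite Gaussian mixture
    with mixing weights given by the stationary distribution pi."""
    comps = np.exp(-0.5 * ((y - means) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    return float(pi @ comps)
```

Because this marginal is a finite mixture, mixture-based machinery (here, the paper's penalized quasi-likelihood with merging penalties) can be brought to bear on choosing the number of states.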