Adaption and Approximation in Markov Chain Monte Carlo (2016)
Organizer and Chair: Aaron Smith (University of Ottawa)
- MYLÈNE BÉDARD, Université de Montréal
Hierarchical Models: Local Proposal Variances for the RWM-within-Gibbs [PDF]
- We study the performance of RWM-within-Gibbs algorithms for sampling from hierarchical models. Using existing scaling analyses, we develop asymptotically optimal tunings for this sampler. This leads to locally optimal proposal variances that depend on the mixing components of the hierarchical model and that correspond to the classical asymptotically optimal acceptance rate of 0.234. Ignoring the local character of the optimal scaling leads to an optimal proposal variance that remains fixed for the duration of the algorithm; the corresponding asymptotically optimal acceptance rate is then lower than 0.234. We provide results for location and scale hierarchies, and illustrate the findings through numerical studies. We compare these local and constant approaches to the RWM algorithm with a diagonal proposal covariance matrix and to Adaptive Metropolis samplers.
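As a rough illustration of the idea of per-component ("local") proposal variances, the following sketch runs an RWM-within-Gibbs sweep on a toy Gaussian location hierarchy. The model, the tuning values, and all function names here are illustrative assumptions, not the authors' actual setup or results:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy location hierarchy (illustrative only):
#   mu ~ N(0, 1),  x_i | mu ~ N(mu, 1),  y_i | x_i ~ N(x_i, 1)
d = 5
data = rng.normal(1.0, 1.0, size=d)  # observed y_i

def log_post(mu, x):
    # Log posterior density up to an additive constant
    return (-0.5 * mu**2
            - 0.5 * np.sum((x - mu)**2)
            - 0.5 * np.sum((data - x)**2))

def rwm_within_gibbs(n_iter=5000, local_sd=None):
    """One RWM update per coordinate per sweep; local_sd holds the
    per-coordinate ("local") proposal standard deviations."""
    if local_sd is None:
        local_sd = np.full(d + 1, 1.0)
    mu, x = 0.0, np.zeros(d)
    accept = np.zeros(d + 1)
    for _ in range(n_iter):
        # RWM update for mu with its own proposal scale
        prop = mu + local_sd[0] * rng.normal()
        if np.log(rng.uniform()) < log_post(prop, x) - log_post(mu, x):
            mu = prop
            accept[0] += 1
        # RWM update for each x_i with its own proposal scale
        for i in range(d):
            x_prop = x.copy()
            x_prop[i] += local_sd[1 + i] * rng.normal()
            if np.log(rng.uniform()) < log_post(mu, x_prop) - log_post(mu, x):
                x = x_prop
                accept[1 + i] += 1
    return mu, x, accept / n_iter

mu, x, rates = rwm_within_gibbs(local_sd=np.full(d + 1, 2.4))
print("per-coordinate acceptance rates:", np.round(rates, 3))
```

In a locally tuned version, each entry of `local_sd` would be chosen so that the corresponding acceptance rate approaches 0.234; a constant-variance version would instead fix one scale for all coordinates.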
- DANIEL JERISON, Cornell University
Honest MCMC Convergence Guarantees [PDF]
- Is MCMC estimation as trustworthy as sampling directly from the target probability distribution (if that were feasible)? Usually not: Monte Carlo standard errors, which purport to measure the uncertainty introduced by the Markov chain, are asymptotically valid but provide no finite-time guarantees. The few nonasymptotic results are difficult to apply in practice. I will discuss new MCMC estimation theorems for Markov chains with a regenerative structure. These theorems give accuracy guarantees of the same type that sampling directly from the target distribution would provide. I will illustrate the results using a Gibbs sampler for a Bayesian hierarchical model.
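A minimal sketch of the regenerative structure such guarantees rely on, using a toy discrete chain in which every return to a fixed state is a regeneration time (the chain, transition matrix, and test function below are illustrative assumptions; the talk's setting, a Gibbs sampler for a Bayesian hierarchical model, is more elaborate):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 3-state Markov chain (illustrative); every visit to state 0 is a
# regeneration time, so the path splits into i.i.d. "tours".
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
f = np.array([0.0, 1.0, 2.0])  # function whose stationary mean we estimate

def simulate_tours(n_tours):
    """Run the chain from state 0, cutting the path at returns to 0."""
    sums, lens = [], []
    state = 0
    for _ in range(n_tours):
        s, n = 0.0, 0
        while True:
            s += f[state]
            n += 1
            state = rng.choice(3, p=P[state])
            if state == 0:  # regeneration: a fresh i.i.d. tour begins
                break
        sums.append(s)
        lens.append(n)
    return np.array(sums), np.array(lens)

sums, lens = simulate_tours(2000)
# Ratio estimator of E_pi[f]: because tours are i.i.d., classical
# i.i.d.-sampling error analysis applies to the tour-level quantities.
est = sums.sum() / lens.sum()
print("regenerative estimate:", est)
```

The point of the regenerative decomposition is exactly this reduction to i.i.d. blocks, which is what makes guarantees "of the same type as direct sampling" possible.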
- MATTI VIHOLA, University of Jyväskylä
Unbiased Estimators and Multilevel Monte Carlo [PDF]
- Multilevel Monte Carlo (MLMC) and recently proposed debiasing schemes are closely related methods which can be applied in scenarios where exact simulation is difficult to implement but biased estimators are easily available. An important example of such a scenario is inference with continuous-time diffusion processes, where the process is difficult to simulate exactly but time-discretized approximations are available. I will present a new general class of unbiased estimators which admits earlier debiasing schemes as special cases, together with new lower-variance estimators which behave asymptotically like MLMC, both in terms of variance and cost, under general conditions. This suggests that bias can often be eliminated entirely at arbitrarily small extra cost. (arXiv:1512.01022)
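A minimal sketch of a single-term debiasing estimator in the Rhee–Glynn style, one of the earlier schemes that such general classes contain as special cases. To keep it self-contained, the "biased approximations" here are the deterministic partial sums converging to e; this toy target and the geometric level distribution are illustrative assumptions, not the paper's construction:

```python
import math

import numpy as np

rng = np.random.default_rng(2)

def Y(level):
    # Biased approximation at a given level: the partial sum of 1/k!,
    # which converges to e as level -> infinity.
    return sum(1.0 / math.factorial(k) for k in range(level + 1))

def single_term_estimator(r=0.5):
    """Single-term debiasing: draw a random level N with
    P(N = n) = (1 - r) * r**n and return the level-N increment
    divided by its probability; the result is unbiased for lim Y."""
    n = rng.geometric(1 - r) - 1          # support 0, 1, 2, ...
    p_n = (1 - r) * r**n
    delta = Y(n) - (Y(n - 1) if n > 0 else 0.0)
    return delta / p_n

draws = np.array([single_term_estimator() for _ in range(20000)])
print("unbiased estimate of e:", draws.mean())
```

Unbiasedness follows because the expectation telescopes: summing the increments over all levels, weighted by their inverse selection probabilities times those probabilities, recovers the limit of the sequence. The design question, which the abstract's variance/cost trade-off addresses, is how to choose the level distribution so the estimator has both finite variance and finite expected cost.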