Abstracts
This page contains abstracts for talks from our speakers.
Tempered Gibbs sampling
Speaker:
Gareth Roberts (University of Warwick)
Abstract:
The talk will introduce the tempered Gibbs sampler, a Monte Carlo algorithm for sampling from high-dimensional probability distributions that combines Markov chain Monte Carlo (MCMC) and importance sampling.
Some theory will be described to support its use and explain why it is able to break down correlations in a robust and computationally efficient way. An application to Bayesian variable selection problems will be given in which the algorithm is orders of magnitude more efficient than available alternative sampling schemes and allows fast and reliable fully Bayesian inference with tens of thousands of regressors.
This is joint work with Giacomo Zanella (Bocconi).
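As a generic illustration of the MCMC-plus-importance-sampling structure (our notation, not necessarily the construction used in the talk): if the chain targets a modified distribution ν rather than the target π, expectations under π can be recovered by importance weighting the chain's output,

\[
\mathbb{E}_{\pi}[h(X)] \;\approx\; \frac{\sum_{t=1}^{T} w(X_t)\, h(X_t)}{\sum_{t=1}^{T} w(X_t)}, \qquad w(x) = \frac{\pi(x)}{\nu(x)},
\]

where X_1, ..., X_T is the output of an MCMC chain with invariant distribution ν.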
Stein's method for computation with an intractable likelihood
Speaker:
Chris Oates (Newcastle University)
Abstract:
In this talk I will introduce Stein's method for the numerical approximation of Bayesian expected quantities of interest.
Then I will explain how techniques built on Stein's method can be used for variance reduction in Monte Carlo computation in the context of Bayesian inference with an intractable likelihood.
It will be argued that this is a rich area for methodological development.
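As a toy illustration of the variance-reduction idea (ours, with a standard Gaussian target; not the speaker's method): for the Langevin-Stein operator \(\mathcal{A}[g](x) = g'(x) + g(x)\,\nabla \log \pi(x)\), we have \(\mathbb{E}_\pi[\mathcal{A}[g](X)] = 0\) under mild conditions, so \(\mathcal{A}[g]\) can serve as a control variate.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)   # samples from pi = N(0, 1)
h = x**2                           # estimate E[h(X)] = 1

# Stein control variate with g(x) = x and score(x) = d/dx log pi(x) = -x:
# A[g](x) = g'(x) + g(x) * score(x) = 1 - x**2, which has mean zero under pi.
cv = 1.0 - x**2

# Least-squares coefficient minimising the variance of h + c * cv.
c = -np.cov(h, cv)[0, 1] / np.var(cv)

print(np.mean(h + c * cv))               # ~1.0, the true expectation
print(np.var(h), np.var(h + c * cv))     # variance collapses in this toy case
```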
Geometric adaptive Monte Carlo in random environment
Speaker:
Theodore Papamarkou (University of Glasgow)
Abstract:
Manifold Markov chain Monte Carlo algorithms have been introduced to sample more effectively from challenging target densities exhibiting multiple modes or strong correlations.
Such algorithms exploit the local geometry of the parameter space, thus enabling chains to achieve a faster convergence rate when measured in number of steps. However, acquiring local geometric information can often increase computational complexity per step to the extent that sampling from high-dimensional targets becomes inefficient in terms of total computational time.
This paper analyses the computational complexity of manifold Langevin Monte Carlo and proposes a geometric adaptive Monte Carlo sampler that balances the benefits of exploiting local geometry against its computational cost, with the aim of achieving a high effective sample size for a given computational budget.
The suggested sampler is a discrete-time stochastic process in a random environment. The random environment allows the sampler to switch between local geometric and adaptive transition kernels according to a schedule. An exponential schedule is put forward that enables more frequent use of geometric information in the early transient phases of the chain, while saving computational time in the late stationary phases. The average complexity can be set manually depending on the degree of geometric exploitation required by the underlying model.
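A minimal sketch of such a schedule (names and the decay rate are ours, purely illustrative): at iteration t the costly geometric kernel is used with exponentially decaying probability, and the cheap adaptive kernel otherwise.

```python
import math, random

def choose_kernel(t, p0=1.0, rate=1e-3):
    """Pick the geometric kernel with exponentially decaying probability."""
    return "geometric" if random.random() < p0 * math.exp(-rate * t) else "adaptive"

# Early (transient) iterations mostly exploit geometry; late (stationary)
# iterations mostly use the cheaper adaptive kernel.
counts = {"geometric": 0, "adaptive": 0}
for t in range(10_000):
    counts[choose_kernel(t)] += 1
print(counts)
```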
Enhanced sampling of high dimensional probability distributions using stochastic differential equations
Speaker:
Benedict Leimkuhler (University of Edinburgh)
Abstract:
Stochastic differential equations (SDEs) such as Langevin dynamics offer a versatile approach to sampling high dimensional systems.
In this talk I will describe several SDE systems and efficient numerical methods which can:
- provide high accuracy averages with respect to a target invariant distribution, or
- accelerate diffusion in metastable (multimodal) systems.
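As a concrete illustration (ours, not from the talk) of discretising a Langevin SDE, here is a minimal BAOAB-style splitting step for a one-dimensional system; the step size, friction, and quadratic potential are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def grad_U(q):                 # toy potential U(q) = q**2 / 2
    return q

def baoab_step(q, p, h=0.1, gamma=1.0, beta=1.0, m=1.0):
    """One step of a BAOAB-type splitting for Langevin dynamics."""
    p -= 0.5 * h * grad_U(q)                       # B: half kick
    q += 0.5 * h * p / m                           # A: half drift
    c = np.exp(-gamma * h)                         # O: exact OU solve
    p = c * p + np.sqrt((1 - c**2) * m / beta) * rng.standard_normal()
    q += 0.5 * h * p / m                           # A: half drift
    p -= 0.5 * h * grad_U(q)                       # B: half kick
    return q, p

q, p, samples = 0.0, 0.0, []
for _ in range(50_000):
    q, p = baoab_step(q, p)
    samples.append(q)
print(np.var(samples))   # close to 1/beta for U(q) = q**2 / 2
```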
Monte Carlo fusion: unifying distributed analysis
Speaker:
Murray Pollock (University of Warwick)
Abstract:
This talk outlines a new theory and methodology for unifying distributed analyses and inferences on shared parameters from multiple sources into a single coherent inference. This surprisingly challenging problem arises in many settings (for instance, expert elicitation, multi-view learning, and distributed ‘big data’ problems), but to date Monte Carlo Fusion is the first general approach that avoids any form of approximation error in obtaining the unified inference.
In this paper we focus on the key theoretical underpinnings of this new methodology, and simple (direct) Monte Carlo interpretations of the theory. There is considerable scope to tailor this theory to particular application settings (such as the big data setting), construct efficient parallelised schemes, understand the approximation and computational efficiencies of other such unification paradigms, and explore new theoretical and methodological directions.
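In symbols (our notation): given C sources, each contributing a sub-posterior f_c for a shared parameter x, the fusion problem is to sample exactly from the product-form target

\[
f(x) \;\propto\; \prod_{c=1}^{C} f_c(x),
\]

where each f_c may be accessible only through its own Monte Carlo sampler.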
Some adaptive MCMC schemes for variable selection problems
Speaker:
Jim Griffin (University of Kent)
Abstract:
Data sets with many variables (often in the hundreds, thousands, or more) are routinely collected in many disciplines. This has led to interest in variable selection for regression models with a large number of variables.
A standard Bayesian approach defines a prior on the model space and uses Markov chain Monte Carlo methods to sample the posterior. Unfortunately, the size of the space and the use of simple proposals in Metropolis-Hastings steps have led to samplers that mix poorly over models.
In this talk, I will describe an adaptive Metropolis-Hastings scheme which adapts an independence proposal to the posterior distribution. This leads to substantial improvements in mixing over standard algorithms on large data sets. The methods will be illustrated on simulated and real data with thousands of possible variables.
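A hedged sketch of the general idea (toy posterior and adaptation rule are ours, a simplification rather than the speaker's scheme): each variable is proposed for inclusion independently, with proposal probabilities adapted towards the inclusion frequencies seen so far and clipped away from 0 and 1.

```python
import numpy as np

rng = np.random.default_rng(2)
p_dim, eps = 20, 0.01

def log_post(gamma):
    """Toy log-posterior over models: favours including the first 3 variables."""
    signal = np.zeros(p_dim); signal[:3] = 3.0
    return float(gamma @ signal - 0.5 * gamma.sum())

def log_q(gamma, probs):
    return float(np.sum(gamma * np.log(probs) + (1 - gamma) * np.log(1 - probs)))

gamma = rng.integers(0, 2, p_dim).astype(float)
probs = np.full(p_dim, 0.5)          # adaptive inclusion proposal probabilities
running = np.zeros(p_dim)

for t in range(1, 20_000):
    prop = (rng.random(p_dim) < probs).astype(float)
    log_alpha = (log_post(prop) - log_post(gamma)
                 + log_q(gamma, probs) - log_q(prop, probs))
    if np.log(rng.random()) < log_alpha:
        gamma = prop
    running += gamma
    # Diminishing adaptation towards the empirical inclusion frequencies.
    probs = np.clip(running / t, eps, 1 - eps)

print(np.round(running / t, 2))      # high for the first 3 variables
```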
Approximate Bayesian computation reveals the importance of repeated measurements for parameterising cell-based models of growing tissues
Speaker:
Jochen Kursawe (University of Manchester)
Abstract:
The growth and dynamics of epithelial tissues govern many morphogenetic processes in embryonic development. A recent quantitative transition in data acquisition, facilitated by advances in genetic and live-imaging techniques, is paving the way for new insights into these processes.
Computational models can help us understand and interpret observations, and then make predictions for future experiments that can distinguish between hypothesised mechanisms. Increasingly, cell-based modelling approaches such as vertex models are being used to help understand the mechanics underlying epithelial morphogenesis. These models typically seek to reproduce qualitative phenomena, such as cell sorting or tissue buckling. However, it remains unclear to what extent quantitative data can be used to constrain these models so that they can then be used to make quantitative, experimentally testable predictions.
To address this issue, we perform an in silico study to investigate whether vertex model parameters can be inferred from imaging data, and explore methods to quantify the uncertainty of such estimates. Our approach requires the use of summary statistics to estimate parameters. Here, we focus on summary statistics of cellular packing and of laser ablation experiments, as are commonly reported from imaging studies. We find that including data from repeated experiments is necessary to generate reliable parameter estimates that can facilitate quantitative model predictions.
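A minimal ABC rejection sketch (toy simulator and tolerance are ours, standing in for the vertex model) showing the role of summary statistics and of averaging over repeated experiments:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_summary(theta, n_repeats=5):
    """Stand-in for a vertex-model run: a summary statistic averaged
    over repeated simulated 'experiments'."""
    return np.mean([theta + rng.normal(0, 1.0) for _ in range(n_repeats)])

s_obs = simulate_summary(theta=2.0)      # pretend this came from imaging data

accepted = []
for _ in range(50_000):
    theta = rng.uniform(0, 5)            # draw a candidate from the prior
    if abs(simulate_summary(theta) - s_obs) < 0.1:   # distance < tolerance
        accepted.append(theta)

print(np.mean(accepted), np.std(accepted))   # approximate posterior for theta
```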
Bayesian synthetic likelihood: a parametric alternative to standard ABC
Speaker:
Leah South (Queensland University of Technology)
Abstract:
Having the ability to work with complex models can be highly beneficial. However, complex models often have intractable likelihoods, so methods that involve evaluation of the likelihood function are infeasible. In these situations, the benefits of working with likelihood-free methods become apparent.
Likelihood-free methods, such as parametric Bayesian indirect likelihood, which uses the likelihood of an alternative parametric auxiliary model, have been explored throughout the literature as a viable alternative when the model of interest is complex. One such method is the synthetic likelihood (SL), which uses a multivariate normal approximation of the distribution of a set of summary statistics.
In this talk, I will explore the accuracy and computational efficiency of the Bayesian version of the synthetic likelihood (BSL) approach in comparison with a competitor known as approximate Bayesian computation (ABC), together with its sensitivity to tuning parameters and assumptions. BSL is accelerated by using a sparse estimate of the precision matrix. A novel, unbiased estimator of the SL for the case where the summary statistics have a multivariate normal distribution will also be explored. The findings will be illustrated through several applications, including a non-linear state space model and a cell biology model.
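In a minimal sketch (our notation and toy simulator), the synthetic likelihood replaces the intractable likelihood of the observed summary s_obs with a Gaussian whose moments are estimated from m model simulations at θ:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(4)

def simulate(theta):
    """Toy simulator returning a 2-dimensional summary statistic."""
    x = rng.normal(theta, 1.0, size=100)
    return np.array([x.mean(), x.std()])

def synthetic_loglik(theta, s_obs, m=200):
    """Gaussian synthetic log-likelihood: moments estimated from m simulations."""
    sims = np.array([simulate(theta) for _ in range(m)])
    mu, Sigma = sims.mean(axis=0), np.cov(sims, rowvar=False)
    return multivariate_normal.logpdf(s_obs, mean=mu, cov=Sigma)

s_obs = simulate(2.0)                       # pretend observed summaries
for theta in (1.0, 2.0, 3.0):
    print(theta, synthetic_loglik(theta, s_obs))   # peaks near theta = 2
```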
Adaptive tuning of Hamiltonian Monte Carlo within sequential Monte Carlo
Speaker:
Alexander Buchholz (ENSAE-CREST)
Abstract:
Since the works of Chopin, Del Moral, Doucet and Jasra, Sequential Monte Carlo (SMC) samplers have become a widely adopted tool for Bayesian inference. However, the quality of the generated approximation depends strongly on the choice of Markov chain Monte Carlo kernels used to rejuvenate particles.
In this paper we explore the tuning of Hamiltonian Monte Carlo kernels within SMC. We build upon the methodology developed by Fearnhead and Taylor (2013) and suggest alternative methods. We illustrate how adaptive tuning within SMC leads to Hamiltonian Monte Carlo kernels whose performance is comparable to state-of-the-art adaptive Hamiltonian samplers, such as the No-U-Turn Sampler of Hoffman & Gelman (2014). The advantages of using such kernels within an SMC sampler are demonstrated on multi-modal target distributions and on the calculation of normalising constants, with examples including a log-Gaussian Cox process and a Bayesian binary regression.
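A much-simplified skeleton (ours; toy bimodal target, crude tuning rule) of the reweight-resample-move pattern, with the HMC kernel re-tuned from the particle population at each temperature — one simple instance of the kind of adaptation the talk studies:

```python
import numpy as np

rng = np.random.default_rng(5)

def log_prior(x):                 # initial distribution: N(0, 5^2 I)
    return -0.5 * (x @ x) / 25.0

def log_target(x):                # toy bimodal target
    return np.logaddexp(-0.5 * (x - 2) @ (x - 2), -0.5 * (x + 2) @ (x + 2))

def log_pi(x, t):                 # geometric bridge between prior and target
    return (1 - t) * log_prior(x) + t * log_target(x)

def grad(x, t, eps=1e-5):         # finite-difference gradient (sketch only)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (log_pi(x + e, t) - log_pi(x - e, t)) / (2 * eps)
    return g

def hmc_move(x, t, step, n_leap=10):
    """One leapfrog HMC update targeting log_pi(., t)."""
    q, p = x.copy(), rng.standard_normal(x.shape)
    p0 = p.copy()
    p = p + 0.5 * step * grad(q, t)
    for i in range(n_leap):
        q = q + step * p
        p = p + (0.5 if i == n_leap - 1 else 1.0) * step * grad(q, t)
    log_acc = log_pi(q, t) - 0.5 * (p @ p) - log_pi(x, t) + 0.5 * (p0 @ p0)
    return q if np.log(rng.random()) < log_acc else x

# SMC loop: reweight, resample, move; the HMC step size is re-tuned at each
# temperature from the spread of the current particle population.
n, d, temps = 500, 2, np.linspace(0, 1, 11)
particles = 5.0 * rng.standard_normal((n, d))
logw = np.zeros(n)
for t0, t1 in zip(temps[:-1], temps[1:]):
    logw += (t1 - t0) * np.array([log_target(x) - log_prior(x) for x in particles])
    w = np.exp(logw - logw.max()); w /= w.sum()
    particles, logw = particles[rng.choice(n, n, p=w)], np.zeros(n)
    step = 0.3 * particles.std(axis=0).mean()   # crude population-based tuning
    particles = np.array([hmc_move(x, t1, step) for x in particles])
print(particles.mean(axis=0), particles.std(axis=0))
```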
Bayesian model comparison avoiding reversible jump
Speaker:
Felipe Medina Aguayo (University of Reading)
Abstract:
Making use of the full flexibility of the SMC samplers framework, we introduce deterministic transformations to move particles effectively between target distributions of different dimensions. This approach, combined with adaptive methods, provides an extremely flexible and general algorithm for Bayesian model comparison that may suit cases where the acceptance rate in reversible jump MCMC is low. Applications and possible extensions to mixture models and the coalescent are presented.
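For context (standard identities in our notation, not necessarily the construction used in this work): SMC samplers provide unbiased estimates \(\widehat{Z}_k\) of each model's evidence \(p(y \mid M_k)\), so models can in principle be compared without trans-dimensional moves via

\[
p(M_k \mid y) \;\propto\; p(M_k)\, \widehat{Z}_k .
\]

The approach described above goes further, transporting a single particle system between model spaces of different dimensions using deterministic transformations.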
Variational inference for stochastic differential equations
Speaker:
Dennis Prangle (Newcastle University)
Abstract:
Parameter inference for stochastic differential equations (SDEs) is challenging due to the presence of a latent diffusion process. Working with discretised SDEs, we use variational inference to jointly learn the parameters and latent states. That is, we introduce a flexible family of approximations to the posterior distribution and use optimisation to select the member closest to the true posterior. We introduce a recurrent neural network to approximate the posterior for the latent states conditional on the parameters. This neural network learns how to provide Gaussian state transitions which bridge between observations as the conditioned diffusion process does.
The talk will describe this method and illustrate it on a Lotka-Volterra population dynamics model and an epidemic model.
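In standard VI notation (ours): writing x for the discretised latent path and θ for the parameters, the method maximises the evidence lower bound

\[
\mathcal{L}(\phi) \;=\; \mathbb{E}_{q_\phi(\theta, x)}\big[\log p(y, x, \theta) - \log q_\phi(\theta, x)\big] \;\le\; \log p(y),
\]

over the variational parameters φ, with the recurrent neural network parameterising the factor of \(q_\phi\) for the latent states given the parameters.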
Bayesian static parameter estimation for partially observed diffusions using multi-level Monte Carlo
Speaker:
Kody Law (University of Manchester)
Abstract:
This talk will consider joint parameter and state estimation for partially observed SDEs, which is known to be a challenging problem.
A popular class of methods for solving this problem are particle MCMC (pMCMC) methods, such as particle marginal Metropolis-Hastings. Such methods leverage a non-negative unbiased estimator of the un-normalised target arising from a particle filter within a pseudo-marginal MCMC algorithm in order to obtain an asymptotically exact algorithm without ever evaluating the un-normalised target exactly.
Here we assume furthermore that the SDE giving rise to the hidden process cannot be solved exactly, and must be approximated at finite resolution. It is well known that in such contexts the multi-level Monte Carlo (MLMC) method can be used to substantially reduce the cost to achieve a given level of error. The idea is to represent the target expectation as a telescopic sum of increments of increasing cost, and estimate the increments using targets which are coupled in such a way that the increments have decreasing variance. A schedule of decreasing sample numbers can then be carefully constructed based upon the relationship between the variance and the cost, resulting in a substantial speed up. In the context of interest here it is not clear how to construct an exact coupling, and we instead appeal to a carefully constructed approximate coupling of the pairs of particle filters. It will be shown how to construct a consistent estimator with optimal speed up via the approximate coupling.
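The telescoping representation referred to above, in standard MLMC notation (ours):

\[
\mathbb{E}[\varphi_L] \;=\; \mathbb{E}[\varphi_0] \;+\; \sum_{\ell=1}^{L} \mathbb{E}[\varphi_\ell - \varphi_{\ell-1}],
\]

where \(\varphi_\ell\) denotes the quantity of interest at discretisation level \(\ell\). If the pairs \((\varphi_\ell, \varphi_{\ell-1})\) are coupled so that \(\mathrm{Var}(\varphi_\ell - \varphi_{\ell-1})\) decays with \(\ell\), most of the sampling effort can be allocated to the cheap coarse levels.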
A semi-complete data likelihood approach for intractable likelihoods
Speaker:
Ruth King (University of Edinburgh)
Abstract:
Intractable likelihoods arise in many situations where we cannot write down an explicit closed form for the likelihood function; alternatively the likelihood may be very computationally expensive to calculate. A common technique that can be applied in many situations is to use a Bayesian data augmentation approach where the parameter space is expanded via the specification of auxiliary variables.
This is done in such a way that the “complete data likelihood” of the observed data and auxiliary variables given the parameters is straightforward and efficient to calculate. This is a very powerful technique; however, the associated (standard) MCMC algorithms can perform very poorly in these situations due to highly correlated parameters (and auxiliary variables).
We propose a “semi-complete data likelihood” approach and, by implementing both approaches on real data examples, demonstrate the improved performance of standard “vanilla” MCMC algorithms under this approach compared with standard Bayesian data augmentation techniques.
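In symbols (our notation): with auxiliary variables z, data augmentation targets the joint posterior

\[
\pi(\theta, z \mid y) \;\propto\; p(y, z \mid \theta)\, p(\theta), \qquad p(y \mid \theta) = \int p(y, z \mid \theta)\, \mathrm{d}z,
\]

so that only the tractable complete-data likelihood \(p(y, z \mid \theta)\) is ever evaluated; loosely speaking, the semi-complete approach integrates out part of z so as to weaken the posterior correlation between θ and the remaining auxiliary variables.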
Explicit non-reversible contour-hugging MCMC
Speaker:
Chris Sherlock (Lancaster University)
Abstract:
Both the Bouncy Particle Sampler (BPS) and Discrete Bouncy Particle Samplers (DBPSs) are non-reversible Markov chain Monte Carlo algorithms whose action can be visualised in terms of a particle moving with a fixed-magnitude velocity.
Both algorithm types include an occasional step where the particle 'bounces' off a hyperplane which is tangent to the gradient of the target density, making the BPS rejection-free and allowing the DBPS to make relatively large jumps whilst maintaining a high acceptance rate. Analogously to the concatenation of leapfrog steps in HMC, we describe an algorithm which omits the straight-line movement of the BPS and DBPS; instead, at each iteration it concatenates several discrete 'bounces' to provide a proposal lying on almost the same target contour as the starting point, producing a large proposed move with a high acceptance probability. Combined with a separate (reversible) kernel designed for moving between contours, an explicit bouncing scheme which takes account of the local Hessian at each bounce point leads to an efficient, non-reversible MCMC algorithm.
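The 'bounce' is, in standard notation (ours), a specular reflection of the velocity v in the hyperplane orthogonal to the gradient of the log-target:

\[
v' \;=\; v \;-\; 2\, \frac{\langle v, \nabla \log \pi(x) \rangle}{\lVert \nabla \log \pi(x) \rVert^{2}}\, \nabla \log \pi(x),
\]

which preserves \(\lVert v \rVert\) and reverses the component of v along \(\nabla \log \pi(x)\).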
Adaptive MCMC for everyone
Speaker:
Jeff Rosenthal (University of Toronto)
Abstract:
Adaptive MCMC attempts to automatically modify an MCMC algorithm while it runs, to improve its performance on the fly. However, such adaptation often destroys the ergodicity properties necessary for the algorithm to be valid. In this talk, we first illustrate adaptive MCMC algorithms using simple graphical Java applets.
We then present examples and theorems concerning their ergodicity and efficiency. We close with some recent ideas which make adaptive MCMC more widely applicable in broader contexts.
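A minimal sketch (ours) of the kind of algorithm the talk considers: a random-walk Metropolis sampler whose proposal scale adapts on the fly, with diminishing adaptation so that ergodicity can be preserved.

```python
import numpy as np

rng = np.random.default_rng(6)

def log_target(x):
    return -0.5 * x * x          # standard normal target

x, log_s, acc_target = 0.0, 0.0, 0.44   # 0.44 ~ optimal 1-d acceptance rate
samples = []
for t in range(1, 50_000):
    prop = x + np.exp(log_s) * rng.standard_normal()
    acc = min(1.0, np.exp(log_target(prop) - log_target(x)))
    if rng.random() < acc:
        x = prop
    # Diminishing adaptation: step sizes shrink, so the kernel settles down.
    log_s += t ** -0.6 * (acc - acc_target)
    samples.append(x)
print(np.exp(log_s), np.mean(samples), np.var(samples))
```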