Latent variable models provide data-efficient and interpretable descriptions of data. By specifying a generative model, it is possible to achieve a compact representation by exploiting dependency structures in the observed data. Their probabilistic structure allows the model to be integrated as a component in a larger system and facilitates tasks such as data imputation and synthesis. Traditional approaches assume that the data lie on a single low-dimensional manifold embedded in the high-dimensional space. However, in many scenarios this assumption is too simplistic, as more intricate dependency structures are present in the data and variations are not always common to all dimensions. This thesis presents a non-parametric Bayesian latent variable model capable of learning dependency structures across dimensions in a multivariate setting. The approach is based on flexible Gaussian process priors for the generative mappings and interchangeable Dirichlet process priors to learn the structure. The introduction of the Dirichlet process as a specific structural prior allows the model to circumvent issues associated with previous Gaussian process latent variable models. Inference is performed by deriving an efficient variational bound on the marginal log-likelihood of the model. The efficacy of the approach is demonstrated via analysis of the discovered structure and superior quantitative performance on missing-data imputation.
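As an illustrative aside, the structural prior described above can be pictured via the stick-breaking construction of the Dirichlet process, under which each observed dimension is assigned to one of a countable set of latent groups. The sketch below is not the thesis's implementation; the truncation level, concentration parameter, and group-assignment step are illustrative assumptions only.

```python
import numpy as np

def stick_breaking(alpha, n_atoms, rng):
    """Truncated stick-breaking construction of Dirichlet process weights.

    Breaks a unit-length stick: each weight is a Beta(1, alpha) fraction
    of the mass remaining after the previous breaks.
    """
    betas = rng.beta(1.0, alpha, size=n_atoms)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    return betas * remaining

rng = np.random.default_rng(0)
# Hypothetical settings: concentration alpha=2.0, truncated at 20 atoms.
weights = stick_breaking(alpha=2.0, n_atoms=20, rng=rng)

# Illustrative structure learning step: each of 10 observed dimensions
# is assigned to a latent group by sampling from the (renormalised,
# truncated) DP weights, grouping dimensions that share variations.
n_dims = 10
assignments = rng.choice(len(weights), size=n_dims, p=weights / weights.sum())
print(assignments)
```

Dimensions that draw the same group index would then share a Gaussian-process-mapped latent subspace, while unrelated dimensions need not.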
Date of Award: 17 Nov 2021
Supervisors: Neill Campbell & Darren Cosker