Bayesian sparsity using spike-and-slab priors

I recently received some queries about our paper: S. Mohamed, K. Heller and Z. Ghahramani, Bayesian and L1 Approaches for Sparse Unsupervised Learning, International Conference on Machine Learning (ICML), June 2012 [cite key="mohamed2012sparse"]. The questions were very good, and I thought it would be useful to post them here for future reference. The paper looked at Bayesian and optimisation approaches for learning sparse models. For the Bayesian models, we advocated the use of spike-and-slab sparse priors and specified an adapted latent Gaussian model with an additional set of discrete latent variables that indicate whether each latent dimension is sparse or not.
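To make the prior itself concrete, here is a minimal sketch (my own illustration, not the model from the paper) of how a spike-and-slab prior combines a discrete on/off indicator with a continuous Gaussian slab; the parameter names `pi` and `slab_std` are mine:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_spike_and_slab(n_dims, pi=0.2, slab_std=1.0):
    """Draw one weight vector from a spike-and-slab prior.

    Each dimension is 'on' with probability pi, in which case it takes a
    Gaussian (slab) value; otherwise it is exactly zero (the spike).
    """
    z = rng.random(n_dims) < pi                # discrete on/off indicators
    slab = rng.normal(0.0, slab_std, n_dims)   # continuous slab values
    return z * slab, z

w, z = sample_spike_and_slab(10)
print(z.astype(int))   # which dimensions are active, e.g. [0 1 0 0 ...]
print(w)               # exact zeros wherever z == 0
```

It is this exact sparsity (true zeros from the point mass, rather than merely small values) that distinguishes the spike-and-slab prior from continuous shrinkage priors such as the Laplace prior underlying the L1 penalty.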

On Marginal Likelihoods and Widely Applicable BIC

The marginal likelihood of a model is one of the key quantities appearing throughout machine learning and statistics, since it provides an intuitive and natural objective function for model selection and parameter estimation. I recently read a new paper by Sumio Watanabe, A Widely Applicable Bayesian Information Criterion (WBIC) [cite key="watanabe2012widely"] (to appear in JMLR soon), which provides a new, theoretically grounded and easy-to-implement method for approximating the marginal likelihood. In this post I'll summarise some of the important aspects of the marginal likelihood and then briefly describe the WBIC, along with some thoughts and questions on its use.
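For a model with parameters $w$, prior $p(w)$ and data $X = (x_1, \dots, x_n)$, the quantity in question is

$$\log p(X) = \log \int \prod_{i=1}^{n} p(x_i \mid w)\, p(w)\, dw,$$

and the WBIC approximates its negative (the Bayes free energy) by averaging the log-likelihood over the posterior tempered to inverse temperature $\beta = 1/\log n$. Below is a minimal sketch of that recipe on a toy conjugate Gaussian model of my own choosing (not an example from the paper); because the model is conjugate, the tempered posterior can be sampled exactly and the true log marginal likelihood is available in closed form for comparison:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy conjugate model (illustrative only): x_i ~ N(mu, sigma2) with
# sigma2 known, and prior mu ~ N(0, tau2).
sigma2, tau2 = 1.0, 4.0
x = rng.normal(0.5, np.sqrt(sigma2), size=200)
n, S = x.size, x.sum()

def loglik(mu):
    """Total log-likelihood sum_i log N(x_i | mu, sigma2) for an array of mu."""
    mu = np.atleast_1d(mu)
    sq = ((x[None, :] - mu[:, None]) ** 2).sum(axis=1)
    return -0.5 * n * np.log(2 * np.pi * sigma2) - sq / (2 * sigma2)

# The WBIC recipe: average the log-likelihood over the posterior at
# inverse temperature beta = 1 / log n.
beta = 1.0 / np.log(n)

# Here the tempered posterior is Gaussian, so we sample it exactly;
# in general you would run MCMC on the tempered target.
prec = beta * n / sigma2 + 1.0 / tau2
mean = beta * S / sigma2 / prec
mu_samples = rng.normal(mean, np.sqrt(1.0 / prec), size=20_000)
wbic = -loglik(mu_samples).mean()   # estimates the free energy -log p(X)

# Exact -log p(X) for this conjugate model, for comparison.
A = n / sigma2 + 1.0 / tau2
log_evidence = (-0.5 * n * np.log(2 * np.pi * sigma2)
                - 0.5 * np.log(tau2 * A)
                - (x ** 2).sum() / (2 * sigma2)
                + S ** 2 / (2 * sigma2 ** 2 * A))
print(f"WBIC estimate of free energy: {wbic:.2f}")
print(f"Exact -log p(X):              {-log_evidence:.2f}")
```

Part of the appeal is that a single MCMC run at the one temperature $\beta = 1/\log n$ suffices, in contrast to thermodynamic integration, which requires runs across a whole ladder of temperatures.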
