Machine Learning Coffee seminar: "Learning Markov Equivalence Classes of Directed Acyclic Graphs: an Objective Bayes Approach" Guido Consonni, Università Cattolica del Sacro Cuore

Event page: http://old.cs.aalto.fi/en/midcom-permalink-1e7b2462225f27ab24611e7a257ab665fe0d3e9d3e9

Weekly seminars held jointly by Aalto University and the University of Helsinki.

30.10.2017 / 09:15 - 10:00
Seminar room Exactum D122, Gustaf Hällströmin katu 2B, 02150, Helsinki, FI

Machine learning researchers in the Helsinki region start the week with an exciting machine learning talk. The aim is to gather people from different fields of science who share an interest in machine learning. Porridge and coffee are served at 9:00, and the talk begins at 9:15. The venue for this talk is seminar room Exactum D122, Kumpula.

Subscribe to the mailing list where seminar topics are announced beforehand.

Learning Markov Equivalence Classes of Directed Acyclic Graphs: an Objective Bayes Approach

Guido Consonni
Professor of Statistics, Università Cattolica del Sacro Cuore

Abstract:

A Markov equivalence class contains all the Directed Acyclic Graphs (DAGs) encoding the same conditional independencies, and is represented by a Completed Partially Directed DAG (CPDAG), also named Essential Graph (EG). We approach the problem of model selection among noncausal sparse Gaussian DAGs by directly scoring EGs, using an objective Bayes method. Specifically, we construct objective priors for model selection based on the Fractional Bayes Factor, leading to a closed-form expression for the marginal likelihood of an EG. Next we propose an MCMC strategy to explore the space of EGs, possibly accounting for sparsity constraints, and illustrate the performance of our method on simulation studies, as well as on a real dataset. Our method is fully Bayesian and thus provides a coherent quantification of inferential uncertainty, requires minimal prior specification, and is shown to be competitive in learning the structure of the data-generating EG when compared to alternative state-of-the-art algorithms.
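To make the notion of a Markov equivalence class concrete, here is a minimal sketch (not part of the talk; all function names are illustrative) of the classical Verma–Pearl characterization: two DAGs are Markov equivalent if and only if they have the same skeleton and the same v-structures, which is exactly the information a CPDAG/EG retains.

```python
# Sketch of the Verma-Pearl criterion: two DAGs are Markov equivalent
# iff they share the same skeleton and the same v-structures
# (colliders a -> c <- b with a, b non-adjacent).
# All identifiers below are illustrative, not from the speaker's method.

def skeleton(edges):
    """Undirected edge set of a DAG given as (parent, child) pairs."""
    return {frozenset(e) for e in edges}

def v_structures(edges):
    """Colliders a -> c <- b where a and b are not adjacent."""
    skel = skeleton(edges)
    parents = {}
    for a, c in edges:
        parents.setdefault(c, set()).add(a)
    vs = set()
    for c, pa in parents.items():
        for a in pa:
            for b in pa:
                if a < b and frozenset((a, b)) not in skel:
                    vs.add((a, c, b))
    return vs

def markov_equivalent(e1, e2):
    """True iff the two DAGs encode the same conditional independencies."""
    return skeleton(e1) == skeleton(e2) and v_structures(e1) == v_structures(e2)

# X -> Y -> Z and X <- Y <- Z both encode "X independent of Z given Y",
# so they lie in the same equivalence class; the collider X -> Y <- Z
# is a v-structure and belongs to a different class.
chain1 = [("X", "Y"), ("Y", "Z")]
chain2 = [("Z", "Y"), ("Y", "X")]
collider = [("X", "Y"), ("Z", "Y")]

print(markov_equivalent(chain1, chain2))    # True
print(markov_equivalent(chain1, collider))  # False
```

Since any one DAG in a class can be scored in place of the others, scoring the EG directly, as the talk proposes, avoids redundantly visiting equivalent DAGs during model search.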


See the next talks at the seminar webpage.

Please spread the news and join us for our weekly habit of beginning the week with an interesting machine learning talk!

Welcome!