Speaker: Dr. Alexander Alemi
From: Google
Abstract
We are in the era of deep learning. The alchemy that is deep learning enables us to tractably search in the space of functions. Our challenge then is to provide meaningful objectives to guide that search. I will present a generalized framework for developing objectives. This approach requires two graphical models: one that respects the causal constraints of the world and one that is the generative model of your dreams. Minimizing the KL divergence between the two, powered by neural networks, can rederive a diverse set of successful objectives. Using this approach, I have derived novel objectives for supervised learning, representation learning, unsupervised learning, Bayesian inference, and prediction. This variational approach provides a framework to guide future improvements to machine learning and its application to problems in other fields.
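As one illustration of the recipe (the particular factorization and the symbols p_data, e, d, and q below are assumptions chosen for exposition, not taken from the talk): let the "world" model be the data distribution composed with a stochastic encoder, \( p(x, z) = p_{\mathrm{data}}(x)\, e(z \mid x) \), and let the "dream" generative model be \( q(x, z) = q(z)\, d(x \mid z) \). Then

\[
\mathrm{KL}\big(p \,\|\, q\big)
= \mathbb{E}_{p_{\mathrm{data}}(x)}\!\left[ \mathrm{KL}\big(e(z \mid x) \,\|\, q(z)\big) - \mathbb{E}_{e(z \mid x)}\big[\log d(x \mid z)\big] \right] - H\big(p_{\mathrm{data}}\big),
\]

so, up to the constant data entropy, minimizing this KL over the encoder, decoder, and prior is equivalent to maximizing the usual evidence lower bound (the VAE objective). Other choices for the two graphical models would recover other familiar objectives in the same way.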