Speaker: Dr. Christian Kümmerle
From: University of North Carolina
Abstract
It is well understood that, for a machine learning model to generalize well to unseen data, it must capture a hidden, low-dimensional data distribution within the high-dimensional feature space. In this talk, we discuss several ML problems where the optimal model can only be identified if the desired parsimonious structure is explicitly promoted by the optimization or training algorithm, and we present a suitable, scalable optimization framework based on graduated non-convexity and iteratively reweighted least squares (IRLS) that achieves state-of-the-art performance in outlier-robust regression and in learning problems with sparse and/or low-rank parameter matrices. We establish convergence guarantees and rates for this IRLS framework, and we further show how scalable non-convex optimization methods can improve the reliability and capital efficiency of payment channel networks, such as Bitcoin’s Lightning Network.
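To give a concrete flavor of the approach, below is a minimal Python sketch of IRLS with a graduated smoothing schedule for outlier-robust linear regression. It is an illustrative assumption of how such a method can look, not the speaker's actual algorithm; the function name, the loss exponent p, the initial smoothing eps0, and the 0.9 decay factor are all hypothetical choices.

```python
import numpy as np

def irls_robust_regression(X, y, p=1.0, eps0=1.0, n_iter=50):
    """Illustrative IRLS sketch (not the speaker's exact method).

    Approximately minimizes sum_i |y_i - x_i^T w|^p for p <= 2 via
    iteratively reweighted least squares, with a decreasing smoothing
    parameter eps that mimics a graduated non-convexity schedule.
    """
    w = np.linalg.lstsq(X, y, rcond=None)[0]  # least-squares initialization
    eps = eps0
    for _ in range(n_iter):
        r = y - X @ w                          # residuals
        # Smoothed weights: large residuals (outliers) are down-weighted.
        weights = (r**2 + eps**2) ** ((p - 2) / 2)
        # Weighted least-squares step: argmin_w sum_i weights_i * (y_i - x_i^T w)^2
        XtW = X.T * weights
        w = np.linalg.solve(XtW @ X, XtW @ y)
        # Graduated non-convexity: gradually tighten the smoothing.
        eps = max(eps * 0.9, 1e-8)
    return w

# Usage: fit on data contaminated by a few gross outliers.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
w_true = rng.standard_normal(5)
y = X @ w_true + 0.01 * rng.standard_normal(200)
y[:10] += 10.0                                  # inject outliers
w_hat = irls_robust_regression(X, y, p=1.0)
print(np.linalg.norm(w_hat - w_true))           # small error despite outliers
```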
This talk is based on joint work with various co-authors.