Sparsity in Large Scale Optimization and Machine Learning
Dr. Yury Maximov
Los Alamos National Laboratory
Theoretical Division T-4 and CNLS
Abstract: In this talk, I cover our recent results on the complexity of convex optimization methods for solving high-dimensional machine learning and statistical estimation problems. The primary focus of this research is hidden problem sparsity that numerical algorithms can exploit to achieve the linear dependence on dimension often required for large-scale problems such as LASSO and PageRank. Further, we investigate the sparsity of classification problems and its influence on the complexity and reliability of binary and multi-class classification methods. Finally, I present some motivating engineering applications, along with the results of numerical experiments demonstrating the efficiency of the proposed algorithms and comparing them with state-of-the-art algorithms and commercial solvers.
Location: Mathematical Sciences Building, Room 318