Dictionary learning: principles, algorithms, guarantees

Rémi Gribonval (INRIA Rennes)
Wednesday, May 18, 2016 - 10:00am
WIAS, Erhard-Schmidt-Saal, Mohrenstraße 39, 10117 Berlin

Sparse modeling has become highly popular in signal processing and machine learning, where many tasks can be expressed as under-determined linear inverse problems. Together with a growing family of low-dimensional signal models, sparse models expressed with signal dictionaries have given rise to a rich set of algorithmic principles combining provably good performance with bounded complexity. In practice, from denoising to inpainting and super-resolution, applications require choosing a “good” dictionary. This key step can be empirically addressed through data-driven principles known as dictionary learning. In this talk I will draw a panorama of dictionary learning for low-dimensional modeling. After reviewing the basic empirical principles of dictionary learning and related matrix factorizations such as PCA, K-means and NMF, we will discuss techniques to learn dictionaries with controlled computational efficiency, as well as a series of recent theoretical results establishing the statistical significance of learned dictionaries even in the presence of noise and outliers. (Based on joint work with F. Bach, R. Jenatton, L. Le Magoarou, M. Kleinsteuber, and M. Seibert.)
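To make the data-driven principle concrete, here is a minimal sketch of one classical dictionary learning scheme: alternating between sparse coding of the training signals and a least-squares dictionary update (a MOD-style update, not the specific algorithms of the talk). All function names, the greedy coding step, and the parameter choices are illustrative assumptions, not taken from the abstract.

```python
import numpy as np

def sparse_code(D, Y, k):
    """Crude greedy sparse coding: for each signal, keep the k atoms with
    the largest correlations, then least-squares fit on that support.
    (A simplified stand-in for OMP, to keep the sketch short.)"""
    X = np.zeros((D.shape[1], Y.shape[1]))
    for j in range(Y.shape[1]):
        support = np.argsort(-np.abs(D.T @ Y[:, j]))[:k]
        X[support, j] = np.linalg.lstsq(D[:, support], Y[:, j], rcond=None)[0]
    return X

def dictionary_learning(Y, n_atoms, k, n_iter=30, seed=0):
    """Alternate sparse coding and a MOD-style dictionary update.

    Y: (dim, n_signals) training data; returns (D, X) with Y ~ D @ X,
    X column-wise k-sparse and D with unit-norm atoms."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        X = sparse_code(D, Y, k)        # fix D, update sparse codes
        D = Y @ np.linalg.pinv(X)       # fix X, least-squares update of D
        D /= np.linalg.norm(D, axis=0) + 1e-12  # renormalize atoms
    return D, X
```

On synthetic data generated from a ground-truth dictionary with k-sparse codes, a few dozen such alternations typically reduce the relative reconstruction error substantially; replacing the greedy coding step with proper OMP or an l1 solver gives the more standard variants.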