On cross-validated lasso

Speaker(s): 
Denis Chetverikov (UCLA, USA)
Date: 
Wednesday, June 21, 2017 - 10:00am
Location: 
WIAS, Erhard-Schmidt-Saal, Mohrenstraße 39, 10117 Berlin

In this talk, we derive a rate of convergence for the Lasso estimator when the penalty parameter \lambda is chosen via K-fold cross-validation. In particular, we show that in the model with Gaussian noise, and under fairly general assumptions on the candidate set of values of \lambda, the prediction norm of the estimation error of the cross-validated Lasso estimator is, with high probability, bounded from above up to a constant by (s log p/n)^{1/2} log^{7/8}(pn), where n is the sample size, p is the number of covariates, and s is the number of non-zero coefficients in the model. Thus, the cross-validated Lasso estimator achieves the fastest possible rate of convergence up to the small logarithmic factor log^{7/8}(pn).

In addition, we derive a sparsity bound for the cross-validated Lasso estimator: under the same conditions as above, the number of non-zero coefficients of the estimator is, with high probability, bounded from above up to a constant by s log^5(pn).

Finally, we show that our proof technique yields non-trivial bounds on the prediction norm of the estimation error of the cross-validated Lasso estimator even when the assumption of Gaussian noise fails: under mild regularity conditions, the prediction norm of the estimation error is, with high probability, bounded from above up to a constant by (s log^2(pn)/n)^{1/4}.
Joint work with Zhipeng Liao and Victor Chernozhukov.
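
For readers who want to see the procedure under study, the following is a minimal sketch of K-fold cross-validated Lasso in Python. The data-generating process (Gaussian design, Gaussian noise, s-sparse coefficients), the candidate grid of penalty levels, and the use of scikit-learn are illustrative assumptions, not details from the talk; note also that scikit-learn's `alpha` plays the role of \lambda only up to the 1/(2n) scaling in its objective.

```python
# Sketch: choose the Lasso penalty by K-fold cross-validation, then refit.
# All concrete choices below (n, p, s, the alpha grid, K = 5) are assumptions
# made for illustration only.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, p, s = 200, 500, 5                      # sample size, covariates, sparsity
X = rng.standard_normal((n, p))            # Gaussian design (assumption)
beta = np.zeros(p)
beta[:s] = 1.0                             # s non-zero coefficients
y = X @ beta + rng.standard_normal(n)      # Gaussian noise

# Candidate set of penalty levels for cross-validation.
alphas = np.geomspace(1e-3, 1.0, 30)

K = 5
kf = KFold(n_splits=K, shuffle=True, random_state=0)
cv_error = np.zeros_like(alphas)
for train, test in kf.split(X):
    for j, a in enumerate(alphas):
        fit = Lasso(alpha=a, fit_intercept=False, max_iter=10_000)
        fit.fit(X[train], y[train])
        resid = y[test] - fit.predict(X[test])
        cv_error[j] += np.mean(resid ** 2) / K   # average out-of-fold MSE

# Refit on the full sample at the penalty minimizing the CV criterion.
alpha_cv = alphas[np.argmin(cv_error)]
lasso_cv = Lasso(alpha=alpha_cv, fit_intercept=False, max_iter=10_000)
lasso_cv.fit(X, y)

print(f"chosen alpha: {alpha_cv:.4f}")
print(f"non-zero coefficients: {np.count_nonzero(lasso_cv.coef_)}")
# Prediction norm of the estimation error, ||X(beta_hat - beta)||_2 / sqrt(n),
# the quantity bounded in the talk's results.
err = np.linalg.norm(X @ (lasso_cv.coef_ - beta)) / np.sqrt(n)
print(f"prediction-norm error: {err:.4f}")
```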