The curse of dimension in nonparametric regression

Speaker(s): 
Samory Kpotufe (MPI Tübingen)
Date: 
Wednesday, November 23, 2011 - 10:00am
Location: 
Weierstrass-Institute, Room 406

Nonparametric regression consists of learning an arbitrary mapping f: X → Y from a data set of (X, Y) pairs in which the Y values are corrupted by zero-mean noise. This statistical task is subject to a so-called "curse of dimension": if X is a subset of R^D, and if the only smoothness assumption on f is a Lipschitz condition, then any estimator based on n data points has a worst-case error (risk) of Ω(n^{-2/(2+D)}). In other words, a sample size exponential in D is required to approximate f well, which is infeasible in high dimension. Fortunately, high-dimensional data often has low intrinsic complexity, due to correlations between features and/or irrelevant features. Common examples in machine learning are data that lie near a low-dimensional manifold and sparse data. Some nonparametric regressors perform better in such situations: their risks depend only on the "intrinsic dimension" of the data, appropriately formalized. These methods, which are "adaptive" to intrinsic dimension, therefore require no dimension reduction! I will start with a short discussion of some adaptive traditional methods such as k-NN regression, then move on to less widely known, computationally efficient adaptive methods. These efficient methods, namely Random Projection tree and tree-kernel hybrid regressors, have strong theoretical guarantees that can be verified on a wide range of real-world data. This talk is based on joint work with Sanjoy Dasgupta and Nakul Verma.
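
To make "exponential in D" concrete, here is the arithmetic implied by the lower bound (a standard back-of-the-envelope calculation, not a result specific to the talk): if the risk after n samples is of order n^{-2/(2+D)}, then reaching a target risk \varepsilon requires

\[
  n^{-\frac{2}{2+D}} \le \varepsilon
  \quad\Longleftrightarrow\quad
  n \ge \left(\frac{1}{\varepsilon}\right)^{\frac{2+D}{2}} = \left(\frac{1}{\varepsilon}\right)^{1 + D/2}.
\]

For \varepsilon = 0.1 this is roughly 10^{1 + D/2} samples: about 10^6 points for D = 10, but about 10^{26} points for D = 50.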
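
For readers unfamiliar with the "traditional" adaptive methods mentioned above, the following is a minimal sketch of k-NN regression in Python (NumPy only; the function name, the toy data, and the choice k = 10 are illustrative and not taken from the talk):

import numpy as np

def knn_regress(X_train, y_train, x_query, k=5):
    """Predict f(x_query) as the average response of the k nearest training points."""
    # Euclidean distances from the query to every training point.
    dists = np.linalg.norm(X_train - x_query, axis=1)
    # Indices of the k closest training points.
    nearest = np.argsort(dists)[:k]
    # k-NN regression estimate: average the (noisy) responses of the neighbors.
    return y_train[nearest].mean()

# Toy usage: noisy samples of a Lipschitz function on [0, 1]^2.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] + rng.normal(scale=0.1, size=500)  # zero-mean noise
print(knn_regress(X, y, np.array([0.5, 0.5]), k=10))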
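
The efficient methods in the talk build on random-projection trees. The sketch below shows only the basic idea — project the data onto a random direction, split at the median, recurse, and predict with the mean response in the leaf. The class name, the min_leaf parameter, and the stopping rule are my own simplifications; the construction analyzed in the talk (and its tree-kernel hybrid) includes refinements omitted here.

import numpy as np

class RPTreeRegressor:
    """Simplified random-projection tree regressor: recursively split at the
    median projection onto a random direction; predict with the leaf mean."""

    def __init__(self, min_leaf=20, rng=None):
        self.min_leaf = min_leaf
        self.rng = rng if rng is not None else np.random.default_rng()

    def fit(self, X, y):
        self.tree_ = self._build(X, y)
        return self

    def _build(self, X, y):
        if len(y) <= self.min_leaf:
            return {"leaf": True, "value": y.mean()}
        # Draw a random unit direction and split at the median projection.
        d = self.rng.normal(size=X.shape[1])
        d /= np.linalg.norm(d)
        proj = X @ d
        thresh = np.median(proj)
        left = proj <= thresh
        if left.all() or (~left).all():  # degenerate split: stop here
            return {"leaf": True, "value": y.mean()}
        return {"leaf": False, "dir": d, "thresh": thresh,
                "left": self._build(X[left], y[left]),
                "right": self._build(X[~left], y[~left])}

    def predict_one(self, x):
        node = self.tree_
        while not node["leaf"]:
            node = node["left"] if x @ node["dir"] <= node["thresh"] else node["right"]
        return node["value"]

# Toy usage: a function of one coordinate embedded in 20 ambient dimensions.
rng = np.random.default_rng(0)
X = rng.uniform(size=(2000, 20))
y = np.sin(3 * X[:, 0]) + rng.normal(scale=0.1, size=2000)
model = RPTreeRegressor(min_leaf=25, rng=rng).fit(X, y)
print(model.predict_one(np.full(20, 0.5)))

The splits depend only on random directions and the data itself, which is the sense in which such regressors can adapt to the data's intrinsic geometry without any explicit dimension-reduction preprocessing.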