Dissertation talk: Efficient learning algorithms with limited information

Departmental Seminar | May 9 | 2-3 p.m. | 380 Soda Hall

Anindya De,
UC Berkeley


Abstract: Since the seminal work of Valiant (1984), the Probably Approximately Correct (PAC) model has been the standard model of learning. However, for specific classes of functions, it is conceivable that one can learn in far weaker models or meet more stringent success criteria.

a) In the first part of this work, we will look at learning algorithms for halfspaces where the algorithm only has access to approximate values of the correlation of the unknown function with each coordinate. Besides its applications in learning theory, this family of problems is of interest in social choice theory. The algorithms are based on exploiting probability-theoretic and Fourier-analytic properties of halfspaces.
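For intuition (this sketch is not part of the abstract and not the talk's algorithm): the correlations in question are the quantities E[f(x)·x_i] under uniform ±1 inputs, and they can be estimated empirically by sampling. A minimal sketch, using a toy 3-bit majority halfspace as the unknown function:

```python
import random

def halfspace(w, theta, x):
    # A halfspace as a +/-1-valued Boolean function: sign(w.x - theta).
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else -1

def estimate_correlations(f, n, samples=20000, seed=0):
    # Estimate E[f(x) * x_i] for each coordinate i under the uniform
    # distribution on {-1, 1}^n, by straightforward Monte Carlo sampling.
    rng = random.Random(seed)
    sums = [0.0] * n
    for _ in range(samples):
        x = [rng.choice((-1, 1)) for _ in range(n)]
        fx = f(x)
        for i in range(n):
            sums[i] += fx * x[i]
    return [s / samples for s in sums]

# Majority on 3 bits: each coordinate's true correlation is exactly 0.5.
est = estimate_correlations(lambda x: halfspace([1, 1, 1], 0, x), 3)
```

The interesting question the talk addresses is the converse direction: recovering (a function close to) the halfspace from such approximate correlations, which this sampling sketch does not attempt.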

b) In the second part, we will look at learning algorithms which, given access to positive examples of an unknown Boolean function (from a specific concept class), seek to learn the uniform distribution over the positive examples. Besides being a natural example of unsupervised learning in the context of Boolean functions, this problem has connections to several aspects of complexity theory. We employ tools from learning theory, sampling algorithms, and convex programming.
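As a point of contrast (not the talk's method): the most naive way to sample the uniform distribution over positive examples, given only a membership oracle, is rejection sampling, which is efficient only when positive examples form a non-negligible fraction of the cube. A sketch with a hypothetical toy concept:

```python
import random

def sample_positive(f, n, rng, max_tries=100000):
    # Naive rejection sampling: draw uniform x in {0,1}^n until f(x) is true.
    # This is efficient only when f's satisfying assignments are dense;
    # for sparse concepts it is hopeless, which is part of what makes the
    # actual distribution-learning problem nontrivial.
    for _ in range(max_tries):
        x = tuple(rng.randrange(2) for _ in range(n))
        if f(x):
            return x
    raise RuntimeError("no positive example found")

# Toy concept: OR of the first two bits (3/4 of all inputs are positive).
rng = random.Random(1)
pts = [sample_positive(lambda x: x[0] or x[1], 4, rng) for _ in range(2000)]
```

Every returned point satisfies the concept by construction, and the draws are uniform over its satisfying assignments; the talk's setting is harder, since the learner sees only positive examples and must infer the distribution without dense membership access.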