[Sds-seminars] Peter Bartlett speaking on Wednesday at noon on Benign Overfitting

Dan Spielman daniel.spielman at yale.edu
Mon Nov 30 14:04:54 EST 2020


YINS Distinguished Lecturer Seminar: Peter Bartlett (UC Berkeley)
Event time:
Wednesday, December 2, 2020 - 12:00pm
Location:
Join from PC, Mac, Linux, iOS or Android: https://yale.zoom.us/j/97135219127
Event description:
*“Benign Overfitting”*

*Speaker: Peter Bartlett*



*Professor of Computer Science and Statistics, University of California at Berkeley*
*Associate Director of the Simons Institute for the Theory of Computing*
*Director of the Foundations of Data Science Institute*
*Director of the Collaboration on the Theoretical Foundations of Deep Learning*

*To participate:*

Join from PC, Mac, Linux, iOS or Android: https://yale.zoom.us/j/97135219127
    Or Telephone: 203-432-9666 (2-ZOOM if on-campus) or 646 568 7788
    Meeting ID: 971 3521 9127
    International numbers available: https://yale.zoom.us/u/abxwXKgpCp

*Talk Summary:* Classical theory that guides the design of nonparametric
prediction methods like deep neural networks involves a tradeoff between
the fit to the training data and the complexity of the prediction rule.
Deep learning seems to operate outside the regime where these results are
informative, since deep networks can perform well even with a perfect fit
to noisy training data. We investigate this phenomenon of ‘benign
overfitting’ in the simplest setting, that of linear prediction. We give a
characterization of linear regression problems for which the minimum norm
interpolating prediction rule has near-optimal prediction accuracy. The
characterization is in terms of two notions of effective rank of the data
covariance. It shows that overparameterization is essential: the number of
directions in parameter space that are unimportant for prediction must
significantly exceed the sample size.  It also shows an important role for
finite-dimensional data: benign overfitting occurs for a much narrower
range of properties of the data distribution when the data lies in an
infinite-dimensional space versus when it lies in a finite-dimensional
space whose dimension grows faster than the sample size. We discuss
implications for deep networks, for robustness to adversarial examples, and
for the rich variety of possible behaviors of excess risk as a function of
dimension, and we describe extensions to ridge regression and barriers to
analyzing benign overfitting based on model-dependent generalization
bounds.  Joint work with Phil Long, Gábor Lugosi, and Alex Tsigler.
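
For a concrete feel for the setting described above, the sketch below (not
part of the talk materials) fits the minimum-norm interpolating linear
predictor on overparameterized noisy data and computes two effective ranks of
the covariance tail, taken here to be r_k = (sum_{i>k} lambda_i) / lambda_{k+1}
and R_k = (sum_{i>k} lambda_i)^2 / (sum_{i>k} lambda_i^2); the spectrum, noise
level, and dimensions are arbitrary illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 100, 2000                                   # p >> n: heavily overparameterized
    lam = 1.0 / np.arange(1, p + 1)                    # decaying covariance spectrum (illustrative)
    X = rng.standard_normal((n, p)) * np.sqrt(lam)     # rows ~ N(0, diag(lam))
    theta_star = np.zeros(p)
    theta_star[:10] = 1.0                              # true signal lives in the top directions
    y = X @ theta_star + 0.5 * rng.standard_normal(n)  # noisy labels

    # Minimum-norm interpolator: of all parameters that fit the training data
    # exactly, take the one with smallest Euclidean norm (pseudoinverse solution).
    theta_hat = np.linalg.pinv(X) @ y
    print("train MSE:", np.mean((X @ theta_hat - y) ** 2))   # essentially zero: perfect fit

    # Excess risk for the diagonal covariance: (theta_hat - theta_star)' Sigma (theta_hat - theta_star)
    print("excess risk:", np.sum(lam * (theta_hat - theta_star) ** 2))

    # Two effective ranks of the covariance tail beyond index k (definitions assumed above)
    def effective_ranks(spectrum, k):
        tail = spectrum[k:]                            # eigenvalues lambda_{k+1}, lambda_{k+2}, ...
        return tail.sum() / tail[0], tail.sum() ** 2 / np.sum(tail ** 2)

    print("r_k, R_k at k=10:", effective_ranks(lam, 10))

Whether the excess risk is actually small here depends on the covariance
satisfying the effective-rank conditions discussed in the talk; the sketch
only makes the quantities concrete.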

*Speaker bio: *Peter Bartlett is professor of Computer Science and
Statistics at the University of California at Berkeley, Associate Director
of the Simons Institute for the Theory of Computing, Director of the
Foundations of Data Science Institute, and Director of the Collaboration on
the Theoretical Foundations of Deep Learning. His research interests
include machine learning and statistical learning theory, and he is the
co-author of the book Neural Network Learning: Theoretical Foundations. He
has been Institute of Mathematical Statistics Medallion Lecturer, winner of
the Malcolm McIntosh Prize for Physical Scientist of the Year, and
Australian Laureate Fellow, and he is a Fellow of the IMS, Fellow of the
ACM, and Fellow of the Australian Academy of Science.

