[Sds-seminars] Fwd: [YINS] 11/11: Boaz Barak, "Understanding generalization requires rethinking deep learning?"
Dan Spielman
daniel.spielman at yale.edu
Mon Nov 9 15:53:04 EST 2020
---------- Forwarded message ---------
From: Hau, Emily <emily.hau at yale.edu>
Date: Mon, Nov 9, 2020 at 3:35 PM
Subject: [YINS] 11/11: Boaz Barak, "Understanding generalization requires
rethinking deep learning?"
To: yins at mailman.yale.edu <yins at mailman.yale.edu>
YINS Distinguished Lecturer Seminar: Wednesday, November 11, 2020,
12:00-1:00pm
<https://yins.yale.edu/event/yins-seminar-boaz-barak-harvard-university>
*"Understanding generalization requires rethinking deep learning?"*
*Speaker: Boaz Barak*
*Gordon McKay Professor of Computer Science*
*Harvard John A. Paulson School of Engineering and Applied Sciences*
*Harvard University*
*To participate:* Join from PC, Mac, Linux, iOS or Android:
https://yale.zoom.us/j/91899440944
Or Telephone: 203-432-9666 (2-ZOOM if on-campus) or 646 568 7788
Meeting ID: 918 9944 0944
International numbers available: https://yale.zoom.us/u/abfSsSysEh
*Abstract:* In classical statistical learning theory, we can place bounds
on the generalization gap - the difference between the empirical
performance of a learned classifier on its training set and its population
performance on unseen test examples. Such bounds have proven hard to
establish for deep learning, and there is empirical evidence that they
simply do not hold: deep-learning algorithms can in fact have
non-vanishing generalization gaps.
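In symbols (a generic textbook definition, not necessarily the speaker's
notation): for a classifier f trained on n samples (x_1, y_1), ...,
(x_n, y_n) drawn from a distribution D,

  gap(f) = E_{(x,y)~D}[ loss(f(x), y) ] - (1/n) sum_{i=1}^{n} loss(f(x_i), y_i),

so a small gap means training performance is a reliable predictor of test
performance.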
In this talk we will see that there is a variant of supervised deep
learning that does have small generalization gaps, both in practice and in
theory. This variant is the class of “Self-Supervised + Simple fit” (SSS)
algorithms, which first use self-supervision to learn a complex
representation of the (label-free) training data and then fit a simple
(e.g., linear) classifier to the labels. Such classifiers have become
increasingly popular in recent years, as they offer several practical
advantages and have been shown to approach state-of-the-art results.
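As a rough illustration, here is a minimal sketch of an SSS-style pipeline
in Python. The encoder below is a mocked stand-in (a real pipeline would
load a pretrained self-supervised model such as SimCLR), and all names and
data are illustrative, not from the talk:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d_in, d_rep, n = 3072, 128, 1000

# Stage 1: frozen "self-supervised" representation (mocked here as a
# fixed random feature map; no labels are used to build it).
proj = rng.normal(size=(d_in, d_rep)) / np.sqrt(d_in)
def encode(x):
    return np.maximum(x @ proj, 0.0)

x_train = rng.normal(size=(n, d_in))
y_train = rng.integers(0, 10, size=n)

# Stage 2: "simple fit" -- a linear classifier on the frozen features.
# The bound discussed in the talk depends on this classifier's
# complexity, not on the size of the encoder.
clf = LogisticRegression(max_iter=1000).fit(encode(x_train), y_train)
print("train accuracy:", clf.score(encode(x_train), y_train))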
We show that (under assumptions described below) the generalization gap of
such classifiers tends to zero as long as the complexity of the simple
classifier is asymptotically smaller than the number of training samples.
Our bound is independent of the complexity of the representation, which
can use an arbitrarily large number of parameters. Our bound holds assuming
that the learning algorithm satisfies certain noise-robustness (adding a
small amount of label noise causes small degradation in performance) and
rationality (getting the wrong label is not better than getting no label at
all) properties. These conditions hold widely across many standard
architectures. We complement this result with an empirical study
demonstrating that the generalization gap is in fact small in practice and
that our bound is non-vacuous for many popular representation-learning-based
classifiers on CIFAR-10 and ImageNet, including SimCLR, AMDIM, and BigBiGAN.
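To make the noise-robustness condition concrete, here is a hedged,
self-contained sketch of the kind of check it suggests; the data and the
frozen feature map are synthetic placeholders, not an implementation from
the paper:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d_rep, n_classes = 1000, 64, 10

feats = rng.normal(size=(n, d_rep))      # frozen SSS features (mocked)
y = rng.integers(0, n_classes, size=n)   # clean labels

# Flip roughly 5% of the labels uniformly at random.
flip = rng.random(n) < 0.05
y_noisy = np.where(flip, rng.integers(0, n_classes, size=n), y)

clean = LogisticRegression(max_iter=1000).fit(feats, y)
noisy = LogisticRegression(max_iter=1000).fit(feats, y_noisy)

# Noise-robustness, informally: these two scores should be close.
print("trained on clean labels:", clean.score(feats, y))
print("trained on noisy labels:", noisy.score(feats, y))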
The talk will not assume any specific background in machine learning, and
should be accessible to a general mathematical audience. Joint work with
Yamini Bansal and Gal Kaplun.
*Upcoming:*
*November 18, 2020, 12:00pm YINS Seminar: Daniel Roy (University of
Toronto)
<https://yins.yale.edu/event/yins-seminar-daniel-roy-university-toronto>*
Emily E. H. Hau | Director, Programs and Partnerships
*Yale Institute for Network Science*
*Yale University*
17 Hillhouse Avenue | Room 341 | New Haven, CT 06511
c: (203) 273-7886
emily.hau at yale.edu
_______________________________________________
YINS mailing list
YINS at mailman.yale.edu
https://mailman.yale.edu/mailman/listinfo/yins