[Sds-seminars] S&DS In-Person Seminar, Matus Jan Telgarsky, 2/13, 4pm-5pm, "Searching for the implicit bias of deep learning"

elizavette.torres at yale.edu
Thu Feb 9 09:11:31 EST 2023


Department of Statistics and Data Science (https://statistics.yale.edu/)

In-Person seminars will be held at Mason Lab 211, 9 Hillhouse Avenue, with
the option of virtual participation:
https://yale.hosted.panopto.com/Panopto/Pages/Sessions/List.aspx?folderID=f8b73c34-a27b-42a7-a073-af2d00f90ffa

3:30pm - Pre-talk meet and greet teatime - Dana
House, 24 Hillhouse Avenue

 


Matus Jan Telgarsky, University of Illinois Urbana-Champaign




Date: Monday, February 13, 2023

Time: 4:00PM to 5:00PM

Location: Mason Lab 211 (see map:
http://maps.google.com/?q=9+Hillhouse+Ave%2C+New+Haven%2C+CT%2C+06511%2C+us)

9 Hillhouse Ave

New Haven, CT 06511

Website: http://mjt.cs.illinois.edu/

Title: Searching for the implicit bias of deep learning

 

Information and Abstract: 

What makes deep learning special - why is it effective in so many settings
where other models fail? This talk will present recent progress from three
perspectives. The first result is approximation-theoretic: deep networks can
easily represent phenomena that require exponentially sized shallow
networks, decision trees, and other classical models. Second, I will show
that their statistical generalization ability - namely, their ability to
perform well on unseen test data - is correlated with their prediction
margins, a classical notion of confidence. Finally, and comprising the
majority of the talk, I will discuss how these two perspectives interact
with optimization: specifically, how standard descent methods are implicitly
biased towards models with good generalization. Here I will present two
approaches: the strong implicit bias, which studies convergence to specific
well-structured objects, and the weak implicit bias, which merely ensures
that certain good properties eventually hold, but admits a more flexible
proof technique.
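
(Editor's note: a toy illustration, not part of the talk. One classical
instance of this implicit bias is that, for linearly separable data, plain
gradient descent on the logistic loss drives the normalized weight vector
toward the L2 max-margin direction, linking the optimization story to the
margin notion above. The minimal NumPy sketch below uses made-up data and
made-up hyperparameters.)

    import numpy as np

    # Toy illustration (not from the talk): gradient descent on the logistic
    # loss over linearly separable data; the normalized minimum margin of the
    # iterates is expected to grow, reflecting a bias toward large-margin
    # solutions. Data, step size, and iteration counts are all made up.
    rng = np.random.default_rng(0)
    n = 100
    X = np.vstack([rng.normal(+2.0, 0.5, size=(n, 2)),
                   rng.normal(-2.0, 0.5, size=(n, 2))])
    y = np.concatenate([np.ones(n), -np.ones(n)])

    w = np.zeros(2)
    lr = 0.5
    for t in range(1, 50001):
        z = y * (X @ w)                                  # per-example margins y_i <w, x_i>
        sig = 1.0 / (1.0 + np.exp(np.clip(z, -30, 30)))  # logistic-loss derivative factor
        grad = -(X * (y * sig)[:, None]).mean(axis=0)    # gradient of mean logistic loss
        w -= lr * grad
        if t in (100, 1000, 10000, 50000):
            w_dir = w / np.linalg.norm(w)
            print(t, "min normalized margin:", np.min(y * (X @ w_dir)))

With these synthetic blobs, the printed minimum normalized margin should
increase as training proceeds, which is the flavor of "strong" implicit-bias
results in the linear case; the deep, nonlinear setting discussed in the
talk is substantially harder.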

 

Bio: Matus Telgarsky is an assistant professor at the University of
Illinois Urbana-Champaign, specializing in deep learning theory. He was
fortunate to receive his PhD from UCSD under Sanjoy Dasgupta. Other
highlights include: co-founding the Midwest ML Symposium (MMLS) with
Po-Ling Loh in 2017; receiving a 2018 NSF CAREER award; and organizing two
Simons Institute programs, one on deep learning theory (summer 2019) and
one on generalization (fall 2024).

 

For more details and upcoming events, visit our website at
http://statistics.yale.edu/

 


Department of Statistics and Data Science


Yale University
24 Hillhouse Avenue
New Haven, CT 06511

t 203.432.0666
f 203.432.0633

 

 


