<div dir="ltr"><div class="gmail-group-header" style="box-sizing:inherit;font-family:Mallory,Verdana,Arial,Helvetica,sans-serif;font-size:17px"><span class="gmail-field gmail-field-name-title gmail-field-type-ds gmail-field-label-hidden" style="box-sizing:inherit"><span style="box-sizing:inherit"><span class="gmail-odd gmail-first gmail-last" style="box-sizing:inherit"><h1 class="gmail-title" style="box-sizing:inherit;font-weight:300;padding:0px;font-feature-settings:"kern","liga","dlig";font-size:1.76471em;line-height:normal;font-stretch:normal;color:rgb(0,60,118);text-transform:uppercase;display:inline-block">S&DS|CS JOINT SEMINAR, SIMON S. DU</h1></span></span></span><span class="gmail-field gmail-field-name-field-university gmail-field-type-text gmail-field-label-hidden" style="box-sizing:inherit;margin-left:5px"><span style="box-sizing:inherit"><span class="gmail-odd gmail-first gmail-last" style="box-sizing:inherit">Institute for Advanced Study of Princeton</span></span></span><div class="gmail-field gmail-field-name-field-abstract-title gmail-field-type-text gmail-field-label-hidden" style="box-sizing:inherit"><div class="gmail-field-items" style="box-sizing:inherit"><div class="gmail-field-item even" style="box-sizing:inherit;font-size:20px;font-weight:600;line-height:1.2;margin-bottom:1em;margin-top:0.5em">Foundations of Learning Systems with (Deep) Function Approximators</div></div></div></div><div class="gmail-group-left" style="box-sizing:inherit;float:left;width:auto;padding-right:14.4844px;max-width:30%;font-family:Mallory,Verdana,Arial,Helvetica,sans-serif;font-size:17px"><div class="gmail-field gmail-field-name-field-image gmail-field-type-image gmail-field-label-hidden" style="box-sizing:inherit"><div class="gmail-field-items" style="box-sizing:inherit"><div class="gmail-field-item even" style="box-sizing:inherit"><img src="https://statistics.yale.edu/sites/default/files/styles/user_picture_node/public/simon_du.jpg?itok=vsJVn98i" width="400" height="480" alt="" style="box-sizing: inherit; border: 0px; max-width: 100%; height: auto; vertical-align: bottom;"></div></div></div></div><div class="gmail-group-right" style="box-sizing:inherit;float:left;width:auto;max-width:65%;padding-left:21.7344px;font-family:Mallory,Verdana,Arial,Helvetica,sans-serif;font-size:17px"><div class="gmail-field gmail-field-name-field-event-time gmail-field-type-datetime gmail-field-label-hidden" style="box-sizing:inherit"><div class="gmail-field-items" style="box-sizing:inherit"><div class="gmail-field-item even" style="box-sizing:inherit;color:rgb(0,60,118);font-size:18px;line-height:1.4"><span class="gmail-date-display-single" style="box-sizing:inherit">Monday, February 03, 2020<span class="gmail-date-display-range" style="box-sizing:inherit;float:left;width:397.297px"><span class="gmail-date-display-start" style="box-sizing:inherit">4:00PM</span> to <span class="gmail-date-display-end" style="box-sizing:inherit">5:00PM</span></span></span></div></div></div><div class="gmail-field gmail-field-name-field-location gmail-field-type-location gmail-field-label-hidden" style="box-sizing:inherit"><div class="gmail-field-items" style="box-sizing:inherit"><div class="gmail-field-item even" style="box-sizing:inherit"><div class="gmail-location gmail-vcard" style="box-sizing:inherit"><div class="gmail-adr" style="box-sizing:inherit"><span class="gmail-fn" style="box-sizing:inherit">YINS</span> <span class="gmail-map-icon" 
style="box-sizing:inherit;margin-left:0.25em;font-size:0.925em;line-height:1.55;letter-spacing:0.05em;word-spacing:0.05em;text-transform:lowercase;font-feature-settings:"smcp""><a href="http://maps.google.com/?q=17+Hillhouse+Avenue%2C+Rm.+328%2C+New+Haven%2C+CT%2C+06511%2C+us" style="box-sizing:inherit;outline:none;line-height:inherit;color:rgb(40,109,192)">see map</a> </span><div class="gmail-street-address" style="box-sizing:inherit">17 Hillhouse Avenue, Rm. 328</div><span class="gmail-locality" style="box-sizing:inherit">New Haven</span>, <span class="gmail-region" style="box-sizing:inherit">CT</span> <span class="gmail-postal-code" style="box-sizing:inherit">06511</span></div></div></div></div></div><div class="gmail-field gmail-field-name-field-website gmail-field-type-link-field gmail-field-label-hidden" style="box-sizing:inherit"><div class="gmail-field-items" style="box-sizing:inherit"><div class="gmail-field-item even" style="box-sizing:inherit"><a href="http://simonshaoleidu.com/" style="box-sizing:inherit;text-decoration-line:none;outline:none;line-height:1.5;color:rgb(0,60,118);font-size:16px">Website</a></div></div></div></div><div class="gmail-group-footer" style="box-sizing:inherit;clear:both;padding-top:15px;font-family:Mallory,Verdana,Arial,Helvetica,sans-serif;font-size:17px"><div class="gmail-field gmail-field-name-field-event-description gmail-field-type-text-with-summary gmail-field-label-hidden" style="box-sizing:inherit"><div class="gmail-field-items" style="box-sizing:inherit"><div class="gmail-field-item even" style="box-sizing:inherit"><p style="box-sizing:inherit;margin:0px 0px 1em;padding:0px"><span style="box-sizing:inherit">Function approximators, such as deep neural networks, play a crucial role in building intelligent systems that make predictions and decisions. In this talk, I will discuss my work on understanding, designing, and applying function approximators.</span></p><p style="box-sizing:inherit;margin:0px 0px 1em;padding:0px"><span style="box-sizing:inherit">First, I will focus on understanding deep neural networks. The main result is that the over-parameterized neural network is equivalent to a new kernel, Neural Tangent Kernel. This equivalence implies two surprising phenomena: 1) the simple algorithm gradient descent provably finds the global optimum of the highly non-convex empirical risk, and 2) the learned neural network generalizes well despite being highly over-parameterized.  Furthermore, this equivalence helps us design a new class of function approximators: we transform (fully-connected, graph, convolutional) neural networks to (fully-connected, graph, convolutional) Neural Tangent Kernels, which achieve superior performance on standard benchmarks. </span></p><p style="box-sizing:inherit;margin:0px 0px 1em;padding:0px"><span style="box-sizing:inherit">In the second part of the talk, I will focus on applying function approximators to decision-making, aka reinforcement learning, problems. In sharp contrast to the (simpler) supervised prediction problems, solving reinforcement learning problems requires an exponential number of samples, even if one applies function approximators.  I will then discuss what additional structures that permit statistically efficient algorithms.</span></p><p style="box-sizing:inherit;margin:0px 0px 1em;padding:0px"><span style="box-sizing:inherit">Bio: Simon S. Du is a postdoc at the Institute for Advanced Study of Princeton, hosted by Sanjeev Arora. He completed his Ph.D. 
Bio: Simon S. Du is a postdoc at the Institute for Advanced Study in Princeton, hosted by Sanjeev Arora. He completed his Ph.D. in Machine Learning at Carnegie Mellon University, where he was co-advised by Aarti Singh and Barnabás Póczos. Previously, he studied EECS and EMS at UC Berkeley. He has also spent time at the Simons Institute and the research labs of Facebook, Google, and Microsoft. His research interests are broadly in machine learning, with a focus on the foundations of deep learning and reinforcement learning.