[Sds-seminars] S&DS Seminar, Qiaomin Xie, 10/24, 4pm-5pm, "Markovian Linear Stochastic Approximation: Bias and Extrapolation"
elizavette.torres at yale.edu
Mon Oct 24 09:37:31 EDT 2022
In-person seminars will be held at Dunham Lab, 10 Hillhouse Ave., Room 220, with an option of remote participation via Zoom.
3:30pm - Pre-talk meet and greet, Suite 222, Room 228
Department of Statistics and Data Science <https://statistics.yale.edu/>
We invite you to attend our in-person seminar.
Qiaomin Xie, University of Wisconsin-Madison <https://statistics.yale.edu/seminars/qiaomin-xie>
In-Person
Monday, October 24, 2022
4:00PM to 5:00PM
Dunham Lab, Room 220 (see map: <http://maps.google.com/?q=10+Hillhouse+Avenue%2C+New+Haven%2C+CT%2C+06511%2C+us>)
10 Hillhouse Avenue
New Haven, CT 06511
Speaker website: <https://qiaominxie.github.io/>
Title: Markovian Linear Stochastic Approximation: Bias and Extrapolation
Information and Abstract:
We consider Linear Stochastic Approximation (LSA) with a constant stepsize and Markovian data. Viewing the LSA iterate as a Markov chain, we prove its convergence to a unique stationary distribution in Wasserstein distance and establish non-asymptotic, geometric convergence rates. Furthermore, we show that the bias vector of this limit admits an infinite series expansion w.r.t. the stepsize, and hence the bias is proportional to the stepsize up to higher order terms. This result stands in contrast with LSA under i.i.d. data, for which the bias vanishes. In fact, we show that the bias scales with the mixing time of the Markovian data. With the above characterization, one can employ Richardson-Romberg extrapolation with m stepsizes to eliminate the m−1 leading terms in the bias expansion, resulting in an exponentially smaller bias.
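For intuition, here is a minimal Python sketch of constant-stepsize LSA driven by Markovian data, with two-stepsize Richardson-Romberg extrapolation; the toy two-state chain and all matrices are hypothetical illustrations, not code from the talk:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state Markov chain driving the data (illustration only)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
# State-dependent data for the recursion x_{k+1} = x_k + alpha*(b(s_k) - A(s_k) x_k)
A = [np.array([[2.0, 0.0], [0.0, 1.0]]),
     np.array([[1.0, 0.5], [0.5, 3.0]])]
b = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

def lsa_tail_average(alpha, n_iter=200_000, burn_in=100_000):
    # Constant-stepsize LSA; return the tail-averaged iterate.
    x, s, avg = np.zeros(2), 0, np.zeros(2)
    for k in range(n_iter):
        s = rng.choice(2, p=P[s])           # Markovian (not i.i.d.) data
        x = x + alpha * (b[s] - A[s] @ x)   # LSA update
        if k >= burn_in:
            avg += x
    return avg / (n_iter - burn_in)

alpha = 0.05
x1 = lsa_tail_average(alpha)        # bias ~ c*alpha + higher-order terms
x2 = lsa_tail_average(2 * alpha)    # bias ~ 2*c*alpha + higher-order terms
x_rr = 2 * x1 - x2                  # Richardson-Romberg: leading O(alpha) bias cancels
print("tail average:", x1)
print("extrapolated:", x_rr)

With m = 2 stepsizes, the combination 2*x1 - x2 removes the first term of the bias expansion, leaving a bias of order alpha^2.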
The above results give a recipe for approaching the best of three worlds: (1) use a constant stepsize to achieve fast, geometric convergence of the optimization error, (2) average the iterates to eliminate the asymptotic variance, and (3) employ extrapolation to order-wise reduce the asymptotic bias. Our results immediately apply to the Temporal Difference learning algorithm with linear function approximation.
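As the abstract notes, TD learning with linear function approximation fits this template: the update is linear in the parameter vector and is driven by the Markov chain of states. A hypothetical sketch of TD(0) on a toy Markov reward process (illustration only, not code from the talk), to which the same averaging-plus-extrapolation recipe applies:

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3-state Markov reward process with 2-dimensional linear features
P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])
r = np.array([1.0, 0.0, -1.0])      # per-state rewards
Phi = rng.standard_normal((3, 2))   # feature matrix; row s is phi(s)
gamma = 0.9                         # discount factor

def td0(alpha, n_iter=100_000):
    # TD(0) with linear function approximation: an LSA recursion in theta.
    theta, s = np.zeros(2), 0
    for _ in range(n_iter):
        s_next = rng.choice(3, p=P[s])
        delta = r[s] + gamma * Phi[s_next] @ theta - Phi[s] @ theta  # TD error
        theta = theta + alpha * delta * Phi[s]  # update is linear in theta
        s = s_next
    return theta

theta_hat = td0(alpha=0.05)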
Zoom Link: Join from PC, Mac, Linux, iOS or Android: https://yale.zoom.us/j/92411077917?pwd=aXhnTnFGRXFoaTVDczNjeFFKeWpTQT09
Password: 24
Or Telephone: 203-432-9666 (2-ZOOM if on-campus) or 646-568-7788
Meeting ID: 924 1107 7917
Department of Statistics and Data Science
Yale University
24 Hillhouse Avenue
New Haven, CT 06511
t 203.432.0666
f 203.432.0633