
Instituto de Ingeniería Matemática y Computacional

Facultad de Matemáticas - Escuela de Ingeniería

News

The Instituto de Ingeniería Matemática y Computacional (IMC) sends its kind regards and invites you to the seminar to be held this week.


The talk will be held online only, this Wednesday, August 10, at 1:00 PM. Those who wish to attend can connect via Zoom; the link can be requested by writing to the contact email address.

Information about the talk can be found in the attached poster and below.


Title: Best-case lower bounds in online learning

Speaker: Nishant Mehta

Affiliation: Department of Computer Science, University of Victoria

Date: August 10, 2022, 1:00-2:00 PM

Location: Via Zoom

Abstract:

I will begin by introducing an online learning problem motivated by group fairness considerations. It is standard in online learning to prove sublinear upper bounds on the regret, a key performance measure in online learning and online convex optimization. An alternative concept is a best-case lower bound: the largest improvement an algorithm can obtain relative to the single best action in hindsight. Best-case lower bounds have connections to fairness: it is known that best-case lower bounds are instrumental in obtaining algorithms for the popular decision-theoretic online learning (DTOL) setting that satisfy a notion of group fairness. A parallel motivation of this work is to better understand the adaptivity of a learning algorithm; while some algorithms provably exhibit certain types of adaptivity, we show that they are provably prohibited from obtaining another desirable form of adaptivity (related to what is known as the shifting regret). Our main contribution is a general method for proving best-case lower bounds for Follow the Regularized Leader (FTRL) algorithms with time-varying regularizers, which we use to show that best-case lower bounds are often of the same order as existing upper bounds on the regret: this includes situations with a fixed learning rate, decreasing learning rates, and adaptive gradient methods. We also show that the popular linearized version of FTRL can attain negative linear regret and hence does not admit non-trivial best-case lower bounds in general.
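For readers less familiar with the terminology, the following is a minimal sketch of the standard setup the abstract refers to; the notation is illustrative and not necessarily the speaker's. In decision-theoretic online learning over K actions, at each round t the learner plays a distribution p_t over the actions, an adversary reveals a loss vector \ell_t \in [0,1]^K, and the learner suffers the expected loss \langle p_t, \ell_t \rangle. The regret after T rounds is

\[ R_T \;=\; \sum_{t=1}^{T} \langle p_t, \ell_t \rangle \;-\; \min_{i \in \{1,\dots,K\}} \sum_{t=1}^{T} \ell_{t,i} . \]

A sublinear upper bound states that R_T grows more slowly than T, while a best-case lower bound is a guarantee of the form R_T \geq -c_T, limiting how far below the single best action in hindsight the algorithm can go. The FTRL algorithms mentioned in the abstract play

\[ p_{t+1} \;=\; \operatorname*{arg\,min}_{p \in \Delta_K} \Big( \sum_{s=1}^{t} \langle \ell_s, p \rangle + \phi_t(p) \Big), \]

where \phi_t is a regularizer, for example the negative entropy scaled by a learning rate, which may vary over time.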

This is joint work with Cristóbal Guzmán and Ali Mortazavi.

Bio:

Nishant Mehta is an Assistant Professor at the University of Victoria in Victoria, Canada. Prior to that, he was a postdoctoral researcher at Centrum Wiskunde & Informatica (CWI), working with Peter Grünwald. Before that, he was a postdoctoral research fellow at the Australian National University and NICTA with Bob Williamson. He received his Ph.D. in Computer Science from Georgia Tech in 2013, advised by Alexander Gray. He regularly serves as an area chair for NeurIPS and COLT and is an action editor for TMLR. His research is in the general area of machine learning theory, previously focused primarily on statistical learning theory and currently oriented towards online learning and sequential prediction. Current areas of focus include incentive-compatible learning, fairness in machine learning, and the long-term impact of statistical decision-making systems.


Poster: Online seminar, Nishant Mehta