Nonasymptotic Bounds for Stochastic Optimization With Biased Noisy Gradient Oracles
Date Issued
01-03-2023
Author(s)
Bhavsar, Nirav
Indian Institute of Technology, Madras
Abstract
We introduce biased gradient oracles to capture a setting where the function measurements have an estimation error that can be controlled through a batch size parameter. Our proposed oracles are appealing in several practical contexts, for instance, risk measure estimation from a batch of independent and identically distributed samples, or simulation optimization, where the function measurements are 'biased' due to computational constraints. In either case, increasing the batch size reduces the estimation error. We highlight the applicability of our biased gradient oracles in a risk-sensitive reinforcement learning setting. In the stochastic nonconvex optimization context, we analyze a variant of the randomized stochastic gradient algorithm with a biased gradient oracle. We quantify the convergence rate of this algorithm by deriving nonasymptotic bounds on its performance. Next, in the stochastic convex optimization setting, we derive nonasymptotic bounds for the last iterate of a stochastic gradient descent algorithm with a biased gradient oracle.
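To make the oracle model concrete, the following is a minimal sketch, not the paper's construction: a hypothetical biased gradient oracle whose estimation error shrinks with a batch size parameter (here modeled as an additive O(1/√m) term), fed to plain SGD on a simple quadratic. All names and the bias model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def biased_gradient_oracle(x, batch_size):
    """Hypothetical oracle for f(x) = 0.5 * ||x||^2.

    Returns the true gradient plus (i) a bias term that decays like
    1/sqrt(batch_size), mimicking estimation error controlled by the
    number of i.i.d. samples, and (ii) zero-mean sampling noise.
    """
    true_grad = x                      # exact gradient of 0.5 * ||x||^2
    bias = 1.0 / np.sqrt(batch_size)   # estimation error, shrinks with m
    noise = rng.normal(0.0, 0.1, size=x.shape)
    return true_grad + bias + noise

def sgd_last_iterate(x0, steps, batch_size, step_size=0.1):
    """Plain SGD driven by the biased oracle; returns the last iterate."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        x -= step_size * biased_gradient_oracle(x, batch_size)
    return x

# Larger batches reduce the oracle's bias, so the last iterate lands
# closer to the minimizer x* = 0 of the quadratic.
x0 = np.full(5, 10.0)
for m in (1, 100, 10_000):
    x_last = sgd_last_iterate(x0, steps=500, batch_size=m)
    print(f"batch_size={m:>6}  ||x_last - x*|| = {np.linalg.norm(x_last):.4f}")
```

Running the loop shows the distance to the minimizer decreasing as the batch size grows, which is the qualitative behavior the nonasymptotic bounds quantify.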
Volume
68