Repository logo

Nonasymptotic Bounds for Stochastic Optimization With Biased Noisy Gradient Oracles

Date Issued
01-03-2023
Author(s)
Bhavsar, Nirav
Prashanth L A 
Indian Institute of Technology, Madras
DOI
10.1109/TAC.2022.3159748
Abstract
We introduce biased gradient oracles to capture a setting where the function measurements have an estimation error that can be controlled through a batch size parameter. Our proposed oracles are appealing in several practical contexts, for instance, risk measure estimation from a batch of independent and identically distributed samples, or simulation optimization, where the function measurements are 'biased' due to computational constraints. In either case, increasing the batch size reduces the estimation error. We highlight the applicability of our biased gradient oracles in a risk-sensitive reinforcement learning setting. In the stochastic nonconvex optimization context, we analyze a variant of the randomized stochastic gradient algorithm with a biased gradient oracle. We quantify the convergence rate of this algorithm by deriving nonasymptotic bounds on its performance. Next, in the stochastic convex optimization setting, we derive nonasymptotic bounds for the last iterate of a stochastic gradient descent algorithm with a biased gradient oracle.
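The oracle model described in the abstract can be illustrated with a small sketch. The following is an assumption-laden toy example, not the paper's algorithm: it pairs a hypothetical batched function measurement (whose noise averages down as the batch size grows) with a one-point Gaussian-smoothing gradient estimate (one of the subjects listed below), and runs plain stochastic gradient descent on a simple quadratic, returning the last iterate. All function names and parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def f_measure(x, batch_size):
    # Hypothetical noisy measurement of f(x) = ||x||^2: the estimation
    # error is the mean of i.i.d. noise samples, so it shrinks as the
    # batch size grows, as in the biased-oracle setting above.
    noise = rng.standard_normal(batch_size).mean()
    return float(x @ x) + noise

def biased_gradient_oracle(x, batch_size, delta=0.1):
    # Gaussian-smoothing gradient estimate from function measurements:
    # biased (bias controlled by the smoothing parameter delta), with
    # measurement noise reduced by the batch size.
    u = rng.standard_normal(x.shape)
    g = (f_measure(x + delta * u, batch_size) - f_measure(x, batch_size)) / delta
    return g * u

def sgd_last_iterate(x0, steps=2000, lr=0.05, batch_size=256):
    # Stochastic gradient descent driven by the biased oracle;
    # returns the last iterate, the quantity bounded in the paper's
    # convex-optimization results.
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x -= lr * biased_gradient_oracle(x, batch_size)
    return x

x_last = sgd_last_iterate([1.0, -1.0])
```

With these (arbitrary) choices, the iterate drifts toward the minimizer of the quadratic but settles in a noise-and-bias-limited neighborhood of it, which is the qualitative behavior the nonasymptotic bounds quantify.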
Volume
68
Subjects
  • Biased gradient oracl...
  • Gaussian smoothing
  • nonasymptotic bounds
  • simultaneous perturba...
  • zeroth-order stochast...

Indian Institute of Technology Madras Knowledge Repository developed and maintained by the Library

Built with DSpace-CRIS software - Extension maintained and optimized by 4Science
