**Keywords:** stochastic variational inequalities, pseudo-monotonicity, extragradient method, stochastic approximation, variance reduction

We propose an extragradient method with stepsizes bounded away from zero for stochastic variational inequalities requiring only pseudo-monotonicity. We provide a convergence and complexity analysis that allows for an unbounded feasible set, an unbounded operator, and non-uniform variance of the oracle, and we do not require any regularization. Alongside the stochastic approximation procedure, we iteratively reduce the variance of the stochastic error. Our method attains the optimal oracle complexity O(1/ε²) (up to a logarithmic term) and a faster rate O(1/K) in terms of the mean (quadratic) natural residual and the D-gap function, where K is the number of iterations required for a given tolerance ε > 0. This convergence rate represents an acceleration with respect to the stochastic error. The generated sequence also enjoys a new feature: it is bounded in L^p if the stochastic error has finite p-moments. Explicit estimates for the convergence rate, the complexity, and the p-moments are given, depending on problem parameters and the distance between the initial iterate and the solution set. Moreover, sharper constants are possible if the variance is uniform over the solution set or the feasible set. Our results provide new classes of stochastic variational inequalities for which a convergence rate of O(1/K) holds in terms of the mean-square distance to the solution set. Our analysis includes the distributed solution of pseudo-monotone Cartesian variational inequalities under partial coordination of the parameters among users of a network.
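The general scheme described above (an extragradient step with a constant stepsize, combined with mini-batch averaging whose batch size grows along the iterations to reduce the oracle variance) can be sketched as follows. This is a minimal illustration under assumed problem data, not the authors' exact method or constants: the operator `F`, the box feasible set, the stepsize rule, and the linear batch schedule are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative monotone (hence pseudo-monotone) operator F(x) = Ax with
# A = S + mu*I, S skew-symmetric. The solution of VI(F, X) for the box
# X = [-1, 1]^n is x* = 0.
n = 5
S = rng.standard_normal((n, n))
S = S - S.T
A = S + 0.1 * np.eye(n)

def F(x):
    return A @ x

def proj(x):
    # Euclidean projection onto the box X = [-1, 1]^n
    return np.clip(x, -1.0, 1.0)

def oracle(x, batch):
    # Noisy oracle: mini-batch average of F(x) plus i.i.d. Gaussian noise;
    # averaging over `batch` samples shrinks the variance by 1/batch.
    noise = rng.standard_normal((batch, n)).mean(axis=0)
    return F(x) + noise

alpha = 0.1 / np.linalg.norm(A, 2)   # constant stepsize, bounded away from zero
x = rng.standard_normal(n)

for k in range(1, 2001):
    batch = k                                  # growing batch: variance ~ 1/k
    y = proj(x - alpha * oracle(x, batch))     # extrapolation step
    x = proj(x - alpha * oracle(y, batch))     # update step

print(np.linalg.norm(x))   # distance to the solution x* = 0
```

Because the oracle variance decays like 1/k while the stepsize stays constant, the iterates approach the solution without the stepsize-driven slowdown of classical stochastic approximation; this is the mechanism behind the accelerated O(1/K) rate claimed in the abstract.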

**MSC 2000:** 65K15

**Download:**

iusphil2nn.pdf