Primal-Dual Subgradient Methods for Convex Problems

CORE Discussion Paper No. 2005/67

37 pages. Posted: 10 Jul 2006

Yurii Nesterov

Catholic University of Louvain (UCL) - Center for Operations Research and Econometrics (CORE)

Date Written: October 2005


In this paper we present a new approach for constructing subgradient schemes for different types of nonsmooth problems with convex structure. Our methods are primal-dual in the sense that they always generate a feasible approximation to the optimum of an appropriately formulated dual problem. Among other advantages, this useful feature provides the methods with a reliable stopping criterion. The proposed schemes differ from the classical approaches (divergent series methods, mirror descent methods) by the presence of two control sequences. The first sequence is responsible for aggregating the support functions in the dual space, and the second one establishes a dynamically updated scale between the primal and dual spaces. This additional flexibility allows us to guarantee boundedness of the sequence of primal test points even when the feasible set is unbounded. We present variants of the subgradient schemes for nonsmooth convex minimization, minimax problems, saddle point problems, variational inequalities, and stochastic optimization. In all these settings our methods are proved to be optimal from the viewpoint of worst-case black-box lower complexity bounds.
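The two control sequences described above can be illustrated by the simplest instance of such a scheme, often called simple dual averaging: a dual vector aggregates all past subgradients, and a growing scale parameter maps it back to a primal test point. The sketch below is a minimal illustration, assuming the Euclidean prox function d(x) = ||x||^2/2 over the whole space and the scale choice beta_k proportional to sqrt(k+1); the objective function, step constant, and iteration count are illustrative choices, not taken from the paper.

```python
import math
import numpy as np

# Illustrative nonsmooth convex objective (an assumption, not from the paper):
# f(x) = |x[0] - 1| + |x[1] + 2|, minimized at x* = (1, -2) with f* = 0.
def f(x):
    return abs(x[0] - 1.0) + abs(x[1] + 2.0)

def subgrad(x):
    # One valid subgradient of f at x.
    return np.array([np.sign(x[0] - 1.0), np.sign(x[1] + 2.0)])

def dual_averaging(n_steps=50000, gamma=1.0):
    """Simple dual averaging on R^2 with prox d(x) = ||x||^2 / 2.

    z_k aggregates the subgradients in the dual space (first control
    sequence); beta_k = gamma * sqrt(k + 1) sets the primal-dual scale
    (second control sequence), so the primal test point is
        x_{k+1} = argmin_x { <z_k, x> + beta_k * d(x) } = -z_k / beta_k.
    """
    z = np.zeros(2)          # aggregated subgradients (dual sequence)
    x_sum = np.zeros(2)      # running sum of primal test points
    for k in range(n_steps):
        beta = gamma * math.sqrt(k + 1)
        x = -z / beta        # primal test point
        z += subgrad(x)      # aggregate the new subgradient
        x_sum += x
    # The averaged primal point carries the O(1/sqrt(k)) accuracy guarantee.
    return x_sum / n_steps

x_bar = dual_averaging()
print(f(x_bar))  # close to the optimal value 0
```

Note how, unlike divergent-series subgradient methods, the stepsize-like quantity beta_k grows rather than shrinks: each test point is recomputed from the full aggregated dual vector, which is what keeps the primal sequence bounded even on an unbounded feasible set.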

Keywords: convex optimization, subgradient methods, non-smooth optimization, minimax problems, saddle points, variational inequalities, stochastic optimization, black-box methods, lower complexity bounds

Suggested Citation

Nesterov, Yurii, Primal-Dual Subgradient Methods for Convex Problems (October 2005). CORE Discussion Paper No. 2005/67. Available at SSRN.

Yurii Nesterov (Contact Author)

Catholic University of Louvain (UCL) - Center for Operations Research and Econometrics (CORE)

34 Voie du Roman Pays
1348 Louvain-la-Neuve
Belgium