Journal: CORE Discussion Papers / Center for Operations Research and Econometrics (UCL), Louvain
Year: 2012
Volume: 2012
Issue: 1
Publisher: Center for Operations Research and Econometrics (UCL), Louvain
Abstract: We consider a new class of huge-scale problems, the problems with sparse subgradients. The most important functions of this type are piecewise linear. For optimization problems with uniform sparsity of the corresponding linear operators, we suggest a very efficient implementation of subgradient iterations, whose total cost depends logarithmically on the dimension. This technique is based on a recursive update of the results of matrix/vector products and of the values of symmetric functions. It works well, for example, for matrices with few nonzero diagonals and for max-type functions.
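To make the updating idea concrete, here is a minimal Python sketch (illustrative only, not the paper's implementation; the names MaxSegmentTree and coordinate_step are hypothetical). Assuming the operator A is stored by columns, a change in a single coordinate of x touches only the nonzeros of the corresponding column of A, and the value of the max-type function max_i (Ax)_i can be maintained in a binary tree at O(log m) cost per touched entry:

    import numpy as np
    from scipy.sparse import csc_matrix

    class MaxSegmentTree:
        """Binary tree over m leaves: point updates and the global max in O(log m)."""
        def __init__(self, values):
            self.m = len(values)
            self.tree = np.empty(2 * self.m)
            self.tree[self.m:] = values
            for i in range(self.m - 1, 0, -1):
                self.tree[i] = max(self.tree[2 * i], self.tree[2 * i + 1])

        def update(self, i, value):
            # Replace leaf i, then refresh its O(log m) ancestors.
            i += self.m
            self.tree[i] = value
            i //= 2
            while i >= 1:
                self.tree[i] = max(self.tree[2 * i], self.tree[2 * i + 1])
                i //= 2

        def global_max(self):
            return self.tree[1]

    def coordinate_step(A, x, y, tree, j, delta):
        """Perform x[j] += delta and refresh y = A @ x incrementally.
        Only the nonzeros of column j are touched, so the cost is
        O(p log m) with p nonzeros in that column, not O(nnz(A))."""
        x[j] += delta
        for idx in range(A.indptr[j], A.indptr[j + 1]):
            i = A.indices[idx]
            y[i] += A.data[idx] * delta
            tree.update(i, y[i])

    # Toy check that the incremental bookkeeping matches a full recomputation.
    A = csc_matrix(np.random.randn(8, 5))
    x = np.zeros(5)
    y = np.asarray(A @ x)
    tree = MaxSegmentTree(y)
    coordinate_step(A, x, y, tree, j=2, delta=0.5)
    assert np.isclose(tree.global_max(), np.max(A @ x))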
We show that the updating technique can be efficiently coupled with the simplest subgradient methods: the unconstrained minimization method by B. Polyak and the constrained minimization scheme by N. Shor. Similar results can be obtained for a new non-smooth random variant of a coordinate descent scheme. We also present promising results of preliminary computational experiments.
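For context, the Polyak method referenced above takes steps along a subgradient g with the classical length (f(x) - f*) / ||g||^2, which presumes the optimal value f* is known. Below is a minimal dense sketch for a piecewise linear objective f(x) = max_i (Ax - b)_i (illustrative only; in the huge-scale setting the residual Ax - b and its maximum would be maintained by the incremental updates sketched above rather than recomputed from scratch):

    import numpy as np

    def polyak_step(A, b, x, f_star):
        """One iteration of Polyak's subgradient method for
        f(x) = max_i (A @ x - b)_i with known optimal value f_star."""
        residual = A @ x - b
        i = int(np.argmax(residual))   # an active linear piece
        g = A[i]                       # row a_i is a subgradient of f at x
        step = (residual[i] - f_star) / (g @ g)
        return x - step * g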