Abstract: While recent advances in online learning arising from the universal prediction perspective have led to algorithms with prediction performance guarantees, these techniques are unable to cope with high-dimensional data. This paper analyses a random projection gradient descent (RP-GD) algorithm that addresses this challenge, yielding both a low theoretical regret bound and a computationally efficient procedure. It is shown that the performance of the algorithm converges to that of the best offline, computationally complex, high-dimensional algorithm in hindsight.