Abstract: The graphical lasso [5] is an algorithm for learning the structure in an undirected Gaussian graphical model, using $\ell_{1}$ regularization to control the number of zeros in the precision matrix $\boldsymbol{\Theta}=\boldsymbol{\Sigma}^{-1}$ [2, 11]. The R package GLASSO [5] is popular, fast, and allows one to efficiently build a path of models for different values of the tuning parameter. Convergence of GLASSO can be tricky; the converged precision matrix might not be the inverse of the estimated covariance, and occasionally it fails to converge with warm starts. In this paper we explain this behavior, and propose new algorithms that appear to outperform GLASSO.
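For context, a sketch of the criterion the graphical lasso is typically taken to optimize, the $\ell_{1}$-penalized Gaussian log-likelihood; the symbols $\mathbf{S}$ (sample covariance matrix) and $\lambda \ge 0$ (tuning parameter) are standard notation not defined in the abstract itself:
\[
\widehat{\boldsymbol{\Theta}} \;=\; \operatorname*{arg\,max}_{\boldsymbol{\Theta} \succ 0}\;\Bigl\{ \log\det\boldsymbol{\Theta} \;-\; \operatorname{tr}(\mathbf{S}\boldsymbol{\Theta}) \;-\; \lambda \lVert \boldsymbol{\Theta} \rVert_{1} \Bigr\},
\]
where $\lVert \boldsymbol{\Theta} \rVert_{1}$ denotes the sum of absolute values of the entries of $\boldsymbol{\Theta}$; larger $\lambda$ yields more zeros in $\widehat{\boldsymbol{\Theta}}$, i.e., a sparser graph.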