David Tyler, Rutgers University, presents this seminar.
Penalized likelihood approaches for estimating covariance matrices are studied. The properties of such penalized approaches depend on the particular choice of the penalty function. In this talk, we introduce a class of non-smooth penalty functions for covariance matrices, and demonstrate how the corresponding penalized likelihood method leads to a grouping of the eigenvalues. We refer to this method as lassoing eigenvalues, or as the elasso. A particularly promising member of this class of non-smooth penalties arises from an application of the Marčenko–Pastur law.
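The role of the Marčenko–Pastur law can be illustrated with a small simulation (a hedged sketch, not the speaker's code): when the true covariance is the identity, the sample eigenvalues do not cluster at 1 but spread out, approximately filling the Marčenko–Pastur support [(1 − √c)², (1 + √c)²] with c = p/n. This spreading is what motivates penalties that pull sample eigenvalues back into groups.

```python
# Sketch: sample eigenvalues under identity covariance spread out in high
# dimension, roughly filling the Marchenko-Pastur support (c = p/n).
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 100                      # aspect ratio c = p/n = 0.5
X = rng.standard_normal((n, p))      # rows are i.i.d. N(0, I_p) observations
S = X.T @ X / n                      # sample covariance matrix
eig = np.sort(np.linalg.eigvalsh(S))

c = p / n
lo, hi = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
print(f"eigenvalue range: [{eig.min():.3f}, {eig.max():.3f}]")
print(f"Marchenko-Pastur support: [{lo:.3f}, {hi:.3f}]")
```

Even though every population eigenvalue equals 1, the sample eigenvalues range over most of [0.086, 2.914]; a penalty built from this law can discount exactly this kind of sampling-induced dispersion.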
The elasso in itself is not robust, since it is based on the sample covariance matrix. Two possible approaches to making the elasso more robust are considered. The first approach is to simply use a robust plug-in method, obtained by replacing the sample covariance matrix with a robust estimate of scatter. The pluses and minuses of such a plug-in method are discussed. The second approach is to use penalized M-estimators of covariance matrices. Both the M-estimators and the elasso penalty function have the property of being geodesically convex, and hence the corresponding penalized M-estimators have unique solutions. Finally, we present a simple re-weighted algorithm for computing the penalized M-estimators which always converges to the correct solution. This work is joint with Mengxi Yi, a graduate student at Rutgers University.
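To give a sense of the re-weighting idea, the following is a minimal sketch of the unpenalized analogue: the classical fixed-point iteration for Tyler's M-estimator of scatter. (This is an illustrative assumption on our part, not the speaker's penalized algorithm; the penalized version modifies the fixed-point map with the penalty term.) Each step re-weights observation x_i by 1 / (x_iᵀ V⁻¹ x_i) and averages the re-weighted outer products.

```python
# Sketch: fixed-point re-weighting iteration for Tyler's M-estimator of
# scatter (the unpenalized analogue of the re-weighted algorithm).
import numpy as np

def tyler_m_estimator(X, n_iter=500, tol=1e-10):
    n, p = X.shape
    V = np.eye(p)
    for _ in range(n_iter):
        Vinv = np.linalg.inv(V)
        # squared Mahalanobis-type distances x_i' V^{-1} x_i
        d = np.einsum("ij,jk,ik->i", X, Vinv, X)
        # re-weighted average of outer products, weights 1/d_i
        V_new = (p / n) * (X.T * (1.0 / d)) @ X
        V_new *= p / np.trace(V_new)       # fix the scale: trace(V) = p
        if np.linalg.norm(V_new - V, "fro") < tol:
            return V_new
        V = V_new
    return V

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 4))          # true scatter is the identity
V = tyler_m_estimator(X)
print(np.round(V, 3))
```

With spherical data the iteration converges to a matrix close to the identity; geodesic convexity of the penalized objective is what guarantees that the analogous penalized iteration cannot be trapped at a spurious solution.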