TITLE: Asynchronous Parallel Computing in Signal Processing and Machine Learning
ABSTRACT:
The performance of a single CPU core stopped improving around 2005. Moore's law, however, continues to apply, not to single-thread performance but to the number of cores in each computer. Today, at affordable prices, we can buy workstations with 64 CPU cores, GPUs with thousands of cores, and even eight-core cellphones. To take advantage of multiple cores, we must parallelize our algorithms; otherwise, they will not run any faster on newer computers. For iterative parallel algorithms to achieve strong performance, asynchrony is critical. Removing synchronization among the cores eliminates core idling and reduces memory-access congestion. However, some cores then compute with out-of-date information. We study fixed-point iterations of a nonexpansive operator and show that randomized async-parallel iterations converge almost surely to a fixed point, provided that the operator has a fixed point and the step size is properly chosen. As special cases, novel algorithms for systems of linear equations, machine learning, and distributed and decentralized optimization are introduced, and numerical results will be presented for sparse logistic regression and other problems. This is joint work with Zhimin Peng (UCLA), Yangyang Xu (IMA), and Ming Yan (Michigan State).
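
As a rough illustration of the kind of update studied in the talk (a toy sketch, not the speaker's implementation), the Python snippet below runs randomized asynchronous coordinate updates of the Krasnosel'skii-Mann iteration x <- x + eta*(T(x) - x) for the illustrative operator T(x) = x - gamma*(A x - b), whose fixed point solves A x = b when A is symmetric positive definite. The test matrix, the step sizes gamma and eta, and the thread count are all assumed values chosen for the example, not parameters from the talk.

import numpy as np
import threading

# Toy illustration only: randomized async-parallel coordinate updates of the
# Krasnosel'skii-Mann iteration x <- x + eta*(T(x) - x), with the illustrative
# operator T(x) = x - gamma*(A x - b). For symmetric positive definite A and a
# small enough gamma, T is a contraction, so its fixed point solves A x = b.
# (Python's GIL limits true parallelism here; the point is the update pattern.)

rng = np.random.default_rng(0)
n = 200
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)            # assumed symmetric positive definite test matrix
b = rng.standard_normal(n)
gamma = 1.0 / np.linalg.norm(A, 2)     # step size inside T (illustrative choice)
eta = 0.5                              # relaxation / async step size (illustrative choice)
x = np.zeros(n)                        # shared iterate, read and written by all threads

def worker(num_updates):
    local_rng = np.random.default_rng()
    for _ in range(num_updates):
        i = local_rng.integers(n)      # pick one coordinate uniformly at random
        x_hat = x.copy()               # snapshot; may be out of date by the time we write
        Ti = x_hat[i] - gamma * (A[i] @ x_hat - b[i])   # i-th entry of T(x_hat)
        x[i] += eta * (Ti - x_hat[i])  # asynchronous coordinate update, no locks

threads = [threading.Thread(target=worker, args=(20000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("residual ||Ax - b|| =", np.linalg.norm(A @ x - b))

Because no thread waits for the others, reads of the shared iterate can be stale; the point of the convergence result stated above is that, for a suitable step size, such updates still converge almost surely to a fixed point.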
Bio: Wotao Yin is a professor in the Department of Mathematics at UCLA. His research interests lie in computational optimization and its applications in image processing, machine learning, and other inverse problems. He received his B.S. in mathematics from Nanjing University in 2001, and his M.S. and Ph.D. in operations research from Columbia University in 2003 and 2006, respectively. Before moving to UCLA, he was with Rice University from 2006 to 2013. He won an NSF CAREER award in 2008 and an Alfred P. Sloan Research Fellowship in 2009. His recent work has been in optimization algorithms for large-scale and distributed signal processing and machine learning problems.