Dissertation Talk: Parallel Machine Learning Using Concurrency Control

Seminar | May 1 | 5-6 p.m. | 465H Soda Hall

 Xinghao Pan, EECS

 Electrical Engineering and Computer Sciences (EECS)

Many machine learning algorithms iteratively process datapoints and transform some global model parameters. It has become increasingly impractical to serially execute such iterative algorithms as processor speeds fail to keep pace with the explosive growth in dataset sizes.

To address these problems, the machine learning community has turned to two parallelization strategies: bulk synchronous parallel (BSP) and coordination-free. BSP algorithms partition computational work among workers, with occasional synchronization at global barriers, but are only applicable to "embarrassingly parallel" problems where work is trivially factorizable. Coordination-free algorithms simply allow concurrent processors to execute in parallel, interleaving transformations and possibly introducing inconsistencies. Theoretical analysis is then required to prove, under assumptions on the problem and system, that the coordination-free algorithm produces a reasonable approximation to the desired outcome.
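As an illustration of the BSP pattern described above (a minimal sketch, not code from the dissertation; the function and variable names are hypothetical), each worker computes a partial gradient on its own data partition, and a global barrier (here, the implicit join on the thread pool) synchronizes all workers before the model update:

```python
# BSP sketch: partition the work, compute in parallel, synchronize, combine.
from concurrent.futures import ThreadPoolExecutor

def partial_gradient(partition, w):
    # Gradient of 0.5 * sum((w - x)^2) over this partition: sum(w - x).
    return sum(w - x for x in partition)

def bsp_step(data, w, n_workers=4, lr=0.1):
    # Split the datapoints among workers (the embarrassingly parallel part).
    parts = [data[i::n_workers] for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        grads = list(pool.map(lambda p: partial_gradient(p, w), parts))
    # Implicit global barrier: all workers have finished before we combine.
    return w - lr * sum(grads)

data = [1.0, 2.0, 3.0, 4.0]
w = 0.0
for _ in range(50):
    w = bsp_step(data, w)
print(round(w, 3))  # converges to the mean of the data, 2.5
```

Because the combine step waits for every worker, the result is identical to a serial gradient computation regardless of thread scheduling.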

In this dissertation, we propose and explore an alternative approach by applying concurrency control to manage parallel transformations in machine learning algorithms. We identify points of possible interference between parallel iterations by examining the semantics of the serial algorithm. Coordination is then introduced to either avoid or resolve such conflicts, while non-conflicting transformations are allowed to execute concurrently. Our parallel algorithms are thus engineered to produce the same output as the serial machine learning algorithm, preserving the serial algorithm's theoretical guarantees of correctness while maximizing concurrency.
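The concurrency-control idea above can be sketched as follows (a hypothetical illustration, not the dissertation's implementation): each parallel transformation locks only the model coordinates it touches, so updates on disjoint coordinates run concurrently while conflicting updates are serialized. Since every update still executes atomically, the final model matches the serial algorithm's output:

```python
# Fine-grained locking sketch: coordinate-level locks serialize only
# the updates that actually conflict.
import threading

model = [0.0] * 8
locks = [threading.Lock() for _ in model]

def apply_update(coords_deltas):
    # Acquire locks in a fixed (sorted) order to avoid deadlock.
    coords = sorted(c for c, _ in coords_deltas)
    for c in coords:
        locks[c].acquire()
    try:
        for c, d in coords_deltas:
            model[c] += d
    finally:
        for c in coords:
            locks[c].release()

# 80 sparse additive updates, 10 per coordinate.
updates = [[(i % 8, 1.0)] for i in range(80)]
threads = [threading.Thread(target=apply_update, args=(u,)) for u in updates]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(model)  # same as serial execution: [10.0, 10.0, ..., 10.0]
```

Acquiring locks in a fixed global order is a standard way to rule out deadlock when a transformation touches several coordinates at once.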

We demonstrate the feasibility of our approach by parallelizing a variety of machine learning algorithms, including non-parametric unsupervised learning, graph clustering, discrete optimization, and sparse convex optimization. We prove and empirically verify that our parallel algorithms produce output equivalent to their serial counterparts. We also theoretically analyze the expected concurrency of our parallel algorithms and empirically demonstrate their scalability.

 xinghao@berkeley.edu, 510-859-5293