Projects
Parallel Computing on CUDA GPU Devices for Linear Optimization Problems
Abstract. We present a parallel implementation of the randomized (1 + ε)-approximation algorithm for packing and covering linear programs by Koufogiannakis and Young (2013). Their approach builds on the sublinear-time algorithm of Grigoriadis and Khachiyan (1995) and on Garg and Könemann's non-uniform-increment amortization scheme (2007), and with high probability it computes feasible primal and dual solutions whose costs are within a factor of 1 + ε of the optimal cost. To make the algorithm more parallelizable we also implemented a deterministic variant: instead of updating a single random entry at each iteration, it deterministically updates many entries at once. This makes each iteration slower but allows larger step sizes, which leads to fewer iterations. We use NVIDIA's parallel computing architecture CUDA for the parallel environment, and we report a speedup of one to two orders of magnitude over the times reported by Koufogiannakis and Young (2013).
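To illustrate the style of algorithm involved, here is a minimal NumPy sketch of a Garg–Könemann-style multiplicative-weights loop for a pure packing LP (maximize Σx subject to Ax ≤ 1, x ≥ 0, with entries of A in [0, 1]). This is not the Koufogiannakis–Young algorithm or the CUDA implementation described above; the function name, the deterministic cheapest-column rule, and the final rescaling are illustrative assumptions.

```python
import numpy as np

def packing_lp_mwu(A, eps=0.1, iters=2000):
    """Hedged sketch of a multiplicative-weights method for the packing LP
    max sum(x) s.t. A @ x <= 1, x >= 0, with A entries in [0, 1].
    Deterministic column choice stands in for the randomized update."""
    m, n = A.shape
    y = np.ones(m)          # dual weights, one per packing constraint
    x = np.zeros(n)         # primal iterate, grown over the iterations
    for _ in range(iters):
        # deterministic rule: pick the cheapest column under current weights
        j = int(np.argmin(A.T @ y))
        a = A[:, j]
        amax = a.max()
        if amax <= 0:
            break           # column never touches a constraint
        delta = 1.0 / amax  # step size: tightest row of column j gains 1 unit
        x[j] += delta
        y *= np.exp(eps * delta * a)  # multiplicative weight update
        y /= y.sum()                  # renormalize to avoid overflow
    # rescale so the packing constraints A @ x <= 1 hold exactly
    load = (A @ x).max()
    return x / load if load > 0 else x
```

In the actual deterministic variant discussed above, many entries would be updated per iteration in parallel on the GPU rather than a single column as here.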
Examples of matrices
- Zero-one matrices
  - elements are in the set {0, 1}
  - number of rows (m) = 5000
  - number of columns (n) = 5000
- Rational matrices
  - elements are uniformly sampled from [0, 100]
  - number of rows (m) = 5100
  - number of columns (n) = 3700
- Terrain Guarding matrices
  - number of rows (m) = 3000
  - number of columns (n) = 3000
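The three matrix families above can be sketched as follows. The first two generators follow the stated descriptions directly; the terrain-guarding generator assumes the matrices are visibility matrices of a 1.5D terrain (an x-monotone polyline, where vertex i sees vertex j iff the segment between them stays on or above the terrain), which is the standard LP formulation of terrain guarding — the document does not spell out how its instances were produced, so treat this as an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility

def zero_one_matrix(m, n, density=0.5):
    # entries drawn i.i.d. from {0, 1}; the density parameter is an assumption
    return (rng.random((m, n)) < density).astype(np.float64)

def rational_matrix(m, n, high=100.0):
    # entries uniformly sampled from [0, high], per the description above
    return rng.uniform(0.0, high, size=(m, n))

def terrain_guarding_matrix(xs, ys):
    """Assumed construction: visibility matrix of a 1.5D terrain with
    vertices (xs[i], ys[i]), xs strictly increasing. Entry (i, j) is 1 iff
    the segment from vertex i to vertex j passes on or above every terrain
    vertex strictly between them."""
    k = len(xs)
    V = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            lo, hi = min(i, j), max(i, j)
            visible = True
            for t in range(lo + 1, hi):
                # height of segment (i, j) at x = xs[t]
                s = (xs[t] - xs[i]) / (xs[j] - xs[i])
                if ys[i] + s * (ys[j] - ys[i]) < ys[t] - 1e-12:
                    visible = False
                    break
            V[i, j] = 1.0 if visible else 0.0
    return V
```

For example, on the three-vertex terrain (0, 0), (1, 10), (2, 0), the two outer vertices cannot see each other because the middle peak blocks the segment between them.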