Civil-Comp Proceedings
ISSN 1759-3433
CCP: 95
Edited by: P. Iványi and B.H.V. Topping
Paper 53

Framework for the Hybrid Parallelisation of Simulation Codes

R.-P. Mundani1, M. Ljucovic2 and E. Rank1

1Technische Universität München, Munich, Germany
2Western Michigan University, Kalamazoo MI, United States of America

Full Bibliographic Reference for this paper
R.-P. Mundani, M. Ljucovic, E. Rank, "Framework for the Hybrid Parallelisation of Simulation Codes", in P. Iványi, B.H.V. Topping, (Editors), "Proceedings of the Second International Conference on Parallel, Distributed, Grid and Cloud Computing for Engineering", Civil-Comp Press, Stirlingshire, UK, Paper 53, 2011. doi:10.4203/ccp.95.53
Keywords: parallelisation, hybrid, MPI, OpenMP, framework, simulation.

We present a framework for hybrid parallelisation based on a job model that allows users to execute sequential code in parallel with manageable effort and few code modifications. The primary application domain of this framework is simulation codes from the engineering disciplines, as these are in many cases still sequential and, owing to their memory and runtime demands, prominent candidates for parallelisation. While a purely multithreaded approach is easy to achieve, it usually does not scale to larger numbers of threads, lacks sufficient support for complex task design [1], and, according to [2], discards properties such as predictability and determinism. In the case of hybrid parallelisation, i.e. the interplay of distributed- and shared-memory programming, the situation becomes even worse, for instance as a result of insufficient thread safety within MPI calls [3]. Hence, multithreaded code within MPI programs needs special treatment in order to run properly in parallel and distributed environments; again, this is something the user does not need to take care of, as these problems are addressed by our framework.

The framework is based on strict job scheduling, where a job can be anything from a complete program down to a single instruction. These jobs, together with their dependencies on the results of other jobs, are defined by the user at any desired level of granularity. The difference from, and main advantage over, classical parallelisation is that the user does not need to care about communication and synchronisation of the individual jobs, nor about data distribution and load balancing; all of this is carried out inherently by the framework. This makes it possible to advance from sequential to parallel codes with less effort, as the complexity of the parallel program is (mostly) hidden from the user. Comparing the framework with an efficient, `pure' MPI implementation of a Jacobi solver for systems of linear equations already shows excellent results, especially considering that the parallel implementation using our framework was derived from a sequential version with almost no code changes.

As the framework offers a wealth of further possibilities, future steps could comprise basic monitoring and fault-tolerance properties as well as its application on different hardware such as GPUs or the Cell Broadband Engine.

[1] E. Ayguadé, N. Copty, A. Duran, J. Hoeflinger, Y. Lin, F. Massaioli, X. Teruel, P. Unnikrishnan, G. Zhang, "The design of OpenMP tasks", IEEE Transactions on Parallel and Distributed Systems, 20(3), 404-418, 2009. doi:10.1109/TPDS.2008.105
[2] E.A. Lee, "The problem with threads", Computer, 39(5), 33-42, 2006. doi:10.1109/MC.2006.180
[3] P. Balaji, D. Buntinas, D. Goodell, W. Gropp, R. Thakur, "Fine-grained multithreading support for hybrid threaded MPI programming", International Journal of High Performance Computing Applications, 24(1), 49-57, 2010. doi:10.1177/1094342009360206
