Civil-Comp Proceedings
ISSN 1759-3433
CCP: 101
PROCEEDINGS OF THE THIRD INTERNATIONAL CONFERENCE ON PARALLEL, DISTRIBUTED, GRID AND CLOUD COMPUTING FOR ENGINEERING
Paper 45

Parallelization of the Direct Simulation Monte Carlo Method using the Partitioned Global Address Space Paradigm

N. Sengil

Department of Astronautical Engineering, University of Turkish Aeronautical Association, Ankara, Turkey

Full Bibliographic Reference for this paper
N. Sengil, "Parallelization of the Direct Simulation Monte Carlo Method using the Partitioned Global Address Space Paradigm", in , (Editors), "Proceedings of the Third International Conference on Parallel, Distributed, Grid and Cloud Computing for Engineering", Civil-Comp Press, Stirlingshire, UK, Paper 45, 2013. doi:10.4203/ccp.101.45
Keywords: PGAS, DSMC, MPI, OpenMP, CUDA, CAF, hypersonic, rarefied.

Summary
In the upper part of the atmosphere the mean free path is on the same scale as the characteristic length of the flow geometry. The ratio of the mean free path to the characteristic length is known as the Knudsen number. In high Knudsen number regimes, molecular-based solvers are used instead of continuum-based equations, and the direct simulation Monte Carlo (DSMC) method is one of these solvers [1]. A disadvantage of the method is its high computational load in low Knudsen number regimes. To reduce the solution time, the DSMC method is generally parallelized by assigning each part of the flow geometry to a different processor, an approach known as domain decomposition. If a particle leaves its assigned sub-domain through one of the boundaries, it is transferred to the processor responsible for the new sub-domain, so that each region is solved by a different processor working in parallel.

Three types of parallel programming paradigms are in general use: directive-based OpenMP, message-passing-based MPI (Message Passing Interface) and global-address-based PGAS (Partitioned Global Address Space) models. In the OpenMP technique, programs are developed sequentially and only the code between special OpenMP directives is executed concurrently [2]. Developing codes with OpenMP is relatively easy, but OpenMP has generally been found insufficient for complicated problems. Currently, MPI is the leading model in parallel programming. MPI uses a two-sided communication model, but programs written with MPI are difficult to develop and debug. More recently, the PGAS model has been introduced; in this model, parallelism is added to the programming language itself. Coarray Fortran (CAF) and Unified Parallel C (UPC) are two important examples.
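The core idea of the PGAS approach can be illustrated in Coarray Fortran: an image writes directly into a coarray owned by a neighbouring image, instead of posting matched send and receive calls as in MPI. The following is a minimal sketch under simplified assumptions (a 1-D decomposition, a fixed-size buffer and hypothetical variable names); it is not the solver described in the paper.

! Minimal Coarray Fortran sketch: one-sided transfer of particles that
! leave a sub-domain through its right boundary. Buffer sizes, names and
! the 1-D decomposition are illustrative assumptions, not the paper's code.
program dsmc_pgas_sketch
   implicit none
   integer, parameter :: max_buf = 1000          ! assumed buffer capacity

   real    :: recv_x(max_buf)[*]                 ! incoming particle positions (coarray)
   integer :: recv_count[*]                      ! number of particles received (coarray)
   real    :: out_x(max_buf)                     ! particles leaving to the right
   integer :: n_out, me, np, right

   me = this_image()
   np = num_images()
   recv_count = 0
   sync all                                      ! every image sees zeroed buffers

   ! Pretend two particles crossed the right boundary of this sub-domain.
   n_out      = 2
   out_x(1:2) = [0.0, 0.1]                       ! positions re-expressed in the neighbour's frame

   if (me < np) then
      right = me + 1
      ! One-sided "put": write straight into the neighbour's coarray;
      ! no matching receive call is needed on image "right".
      recv_x(1:n_out)[right] = out_x(1:n_out)
      recv_count[right]      = n_out
   end if

   sync all                                      ! remote writes are now visible everywhere
   print '(a,i0,a,i0,a)', 'image ', me, ' received ', recv_count, ' particle(s)'
end program dsmc_pgas_sketch

With gfortran this sketch compiles with -fcoarray=single for a one-image test, or with -fcoarray=lib and OpenCoarrays to run on several images; the one-sided exchange shown here is the style of communication that the PGAS paradigm makes available inside the language itself.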

In this paper an MPI-based DSMC solver is modified to incorporate the PGAS paradigm. After the modification, a benchmark problem taken from the literature [3] is analyzed with the new DSMC solver. The solver iterated 90,000 times without interruption to calculate the pressure, temperature, density and flow velocity values in the hypersonic flow region. For validation, the surface pressure coefficient on the cylindrical body is chosen, and the two sets of results are in close agreement. We conclude that although the PGAS paradigm is a new technique for parallelizing computer programs, it is a promising candidate for the fast-growing multi-core and multiprocessor computing environments.
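For reference, the surface pressure coefficient used for such comparisons follows the standard aerodynamic definition (the exact normalization used in [3] is not restated here):

Cp = (pw - p_inf) / (0.5 * rho_inf * U_inf^2)

where pw is the pressure on the cylinder surface and p_inf, rho_inf and U_inf are the freestream pressure, density and velocity.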

References
[1] G.A. Bird, "Molecular Gas Dynamics and the Direct Simulation of Gas Flows", Clarendon Press, Oxford, 1994.
[2] A.J. Wallcraft, "A Comparison of Co-Array Fortran and OpenMP Fortran for SPMD Programming", Journal of Supercomputing, 22, 231-250, 2002.
[3] A.J. Lofthouse, L.C. Scalabrin, I.D. Boyd, "Velocity Slip and Temperature Jump in Hypersonic Aerothermodynamics", Journal of Thermophysics and Heat Transfer, 22(1), 38-49, 2008.
