Civil-Comp Proceedings
ISSN 1759-3433
CCP: 101
PROCEEDINGS OF THE THIRD INTERNATIONAL CONFERENCE ON PARALLEL, DISTRIBUTED, GRID AND CLOUD COMPUTING FOR ENGINEERING
Paper 49

Unified Design for Parallel Execution of Coupled Simulations using the Discrete Particle Method

X. Besseron, F. Hoffmann, M. Michael and B. Peters

Faculty of Science, Technology and Communication, University of Luxembourg, Luxembourg

Full Bibliographic Reference for this paper
X. Besseron, F. Hoffmann, M. Michael, B. Peters, "Unified Design for Parallel Execution of Coupled Simulations using the Discrete Particle Method", in , (Editors), "Proceedings of the Third International Conference on Parallel, Distributed, Grid and Cloud Computing for Engineering", Civil-Comp Press, Stirlingshire, UK, Paper 49, 2013. doi:10.4203/ccp.101.49
Keywords: granular matter, discrete element method, domain decomposition, parallel computing, load-balancing.

Summary
Granular materials, and simulations of them, are widely used in industry. The discrete element method (DEM) is well suited for simulating the motion and chemical conversion of granular materials [1]. Parallel and distributed machines provide the computation power required by such costly simulations.

The Discrete Particle Method (DPM) software [2] is an advanced numerical simulation tool that implements the discrete element method (DEM). It supports multi-physics simulations, in particular particle motion and chemical conversion.

The basic workflow of a DPM simulation is organized as a main iterative time loop. Inside a timestep, two main operations take place. The interaction step considers all particle pairs and computes the interaction resultants (e.g. force, heat flux), which are accumulated in each particle. Thereafter, the integration step updates the state (i.e. position, temperature, etc.) of all particles.
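The two-step timestep above can be sketched as follows. This is a hypothetical, minimal illustration, not DPM's actual API: the class and function names, and the simple linear contact-force and heat-conduction models, are assumptions chosen for brevity.

```python
# Sketch of a DPM-style timestep: an interaction step accumulating pairwise
# resultants (force, heat flux), then an integration step updating each
# particle's state. All names and models here are illustrative assumptions.

from dataclasses import dataclass
from itertools import combinations

@dataclass
class Particle:
    position: float
    velocity: float
    temperature: float
    mass: float = 1.0
    force: float = 0.0       # accumulated interaction resultants
    heat_flux: float = 0.0

def interaction_step(particles, stiffness=100.0, conductance=0.5, radius=1.0):
    """Consider all particle pairs and accumulate force and heat flux."""
    for p, q in combinations(particles, 2):
        gap = abs(p.position - q.position)
        if gap < 2 * radius:                      # particles in contact
            overlap = 2 * radius - gap
            direction = 1.0 if p.position > q.position else -1.0
            f = stiffness * overlap * direction   # linear repulsive contact
            p.force += f
            q.force -= f
            h = conductance * (q.temperature - p.temperature)
            p.heat_flux += h                      # conductive heat exchange
            q.heat_flux -= h

def integration_step(particles, dt):
    """Update the state (position, temperature, ...) of all particles."""
    for p in particles:
        p.velocity += p.force / p.mass * dt       # explicit Euler
        p.position += p.velocity * dt
        p.temperature += p.heat_flux * dt
        p.force = 0.0                             # reset accumulators
        p.heat_flux = 0.0

def run(particles, dt, steps):
    """Main iterative time loop of the simulation."""
    for _ in range(steps):
        interaction_step(particles)
        integration_step(particles, dt)
```

Note that the interaction step only accumulates resultants; no particle state changes until the integration step, which is what makes the two phases cleanly separable.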

DPM offers different simulation modules to treat the different physical and chemical properties. The enhanced design presented in this paper allows simulation modules to be coupled transparently in parallel execution. The simulation module interface is designed to reflect the two major steps of the workflow: interaction and integration. The simulation driver builds a unified timebase to schedule and execute all the modules involved according to their own timesteps.
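A driver of this kind could be sketched as below. This is an assumption-laden illustration, not DPM's real interface: the `Module` methods mirror the two workflow steps, and the driver's "always advance the most-lagging module" policy is one simple way to realize a unified timebase for modules with different timesteps.

```python
# Hypothetical sketch of a simulation driver scheduling coupled modules on a
# unified timebase. Each module exposes interaction() and integration()
# (mirroring the two workflow steps) and advances with its own timestep.

class Module:
    def __init__(self, name, dt):
        self.name = name
        self.dt = dt          # this module's own timestep
        self.time = 0.0       # this module's local clock
        self.calls = []       # execution trace, for illustration

    def interaction(self):
        self.calls.append(("interaction", self.time))

    def integration(self):
        self.calls.append(("integration", self.time))
        self.time += self.dt

class Driver:
    """Executes all modules up to t_end on one unified timebase by always
    advancing the module whose local clock lags the most, so modules with
    different timesteps stay synchronized."""

    def __init__(self, modules):
        self.modules = modules

    def run(self, t_end):
        while True:
            m = min(self.modules, key=lambda m: m.time)
            if m.time >= t_end:
                break
            m.interaction()
            m.integration()
```

For example, a motion module with timestep 0.1 and a chemistry module with timestep 0.25, run to t = 0.5, execute 5 and 2 steps respectively, interleaved in time order.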

To support distributed execution platforms such as high-performance computing (HPC) clusters, we provide a parallel simulation driver. DPM parallelization is based on the classical scheme of domain decomposition. The simulation domain contains all particles and is statically divided into cells, i.e. regular fixed-size subdivisions of the domain. For parallel execution, a partitioning algorithm groups cells into partitions; each partition is then assigned to a processor for execution.
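The decomposition into cells and partitions can be sketched as below. The cell binning follows the fixed-size-subdivision scheme described above; the column-stripe partitioner is a deliberately trivial stand-in (the paper's actual partitioner, ORB, is discussed next), and all function names are assumptions.

```python
# Sketch of domain decomposition: a 2D domain is statically divided into
# fixed-size cells, particles are binned into cells, and a (trivially
# simple, illustrative) partitioner groups cell columns into partitions,
# one partition per processor.

from collections import defaultdict

def cell_of(position, cell_size):
    """Map a 2D position to the index of its fixed-size cell."""
    x, y = position
    return (int(x // cell_size), int(y // cell_size))

def decompose(particles, cell_size):
    """Bin all particles of the simulation domain into cells."""
    cells = defaultdict(list)
    for p in particles:
        cells[cell_of(p, cell_size)].append(p)
    return cells

def partition(cells, n_procs, n_cols):
    """Group cells into partitions by column stripes; each partition is
    assigned to one processor. (A stand-in for a real partitioner.)"""
    parts = [dict() for _ in range(n_procs)]
    for idx, plist in cells.items():
        owner = idx[0] * n_procs // n_cols   # stripe by x cell index
        parts[owner][idx] = plist
    return parts
```

Because cells have fixed size and position, a particle's owning cell (and hence processor) can be computed locally from its coordinates, without global communication.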

The enhanced design of DPM is a work in progress. The current implementation of the parallel simulation driver is based on the message passing interface (MPI) [3] communication library and the orthogonal recursive bisection (ORB) [4] partitioning algorithm. Experimental results examine the behavior of the ORB partitioner. The choice of the cutting plane is a critical parameter for obtaining good performance. The scalability study shows that a parallel execution with 64 processes achieves a speedup of 17.
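The core idea of ORB [4] can be sketched as follows for a set of 2D particle positions: recursively split by a cutting plane orthogonal to a chosen axis, at the median particle, so that both halves carry equal load. This sketch is illustrative only (it assumes a power-of-two process count and uses the widest-axis heuristic for the cutting plane, which, as noted above, is a choice that strongly affects partition quality).

```python
# Hypothetical sketch of orthogonal recursive bisection (ORB): the point
# set is recursively bisected at the median along the widest axis,
# yielding one balanced group per process.

def orb(points, n_parts):
    """Recursively bisect `points` into n_parts balanced groups
    (n_parts is assumed to be a power of two)."""
    if n_parts == 1:
        return [points]
    # Choose the cutting plane orthogonal to the widest axis; the choice
    # of cutting plane is critical for the quality of the partition.
    spans = [max(p[d] for p in points) - min(p[d] for p in points)
             for d in range(2)]
    axis = spans.index(max(spans))
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2                  # median cut: balanced load
    return (orb(pts[:mid], n_parts // 2) +
            orb(pts[mid:], n_parts // 2))
```

The median cut balances particle counts, but communication cost depends on the surface of each cut, which is why the cutting-plane choice matters for performance.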

References
1
P.W. Cleary, "Large scale industrial DEM modelling", Engineering Computations, 169-204, 2004.
2
K. Samiei, B. Peters, "The discrete particle method (DPM): An advanced numerical simulation tool for particulate applications", in Proceedings of the IV European Conference on Computational Mechanics: Solids, Structures and Coupled Problems in Engineering, May 2010.
3
MPI: A Message-Passing Interface Standard, May 1994.
4
M.J. Berger, S.H. Bokhari, "A partitioning strategy for non-uniform problems on multiprocessors". IEEE Trans. Comput., 36(5):570-580, 1987.
