Civil-Comp Proceedings
ISSN 1759-3433
CCP: 84
PROCEEDINGS OF THE FIFTH INTERNATIONAL CONFERENCE ON ENGINEERING COMPUTATIONAL TECHNOLOGY
Edited by: B.H.V. Topping, G. Montero and R. Montenegro
Paper 121

Parallel Discrete Element Simulation of a Heterogeneous Particle System

R. Kacianauskas2, A. Maknickas1, A. Kaceniauskas1, D. Markauskas2 and R. Balevicius2

1Parallel Computing Laboratory,
2Laboratory of Numerical Modelling,
Vilnius Gediminas Technical University, Vilnius, Lithuania

Full Bibliographic Reference for this paper
R. Kacianauskas, A. Maknickas, A. Kaceniauskas, D. Markauskas, R. Balevicius, "Parallel Discrete Element Simulation of a Heterogeneous Particle System", in B.H.V. Topping, G. Montero, R. Montenegro, (Editors), "Proceedings of the Fifth International Conference on Engineering Computational Technology", Civil-Comp Press, Stirlingshire, UK, Paper 121, 2006. doi:10.4203/ccp.84.121
Keywords: particle compacting, discrete element method, heterogeneous poly-dispersed granular material, parallel computing, spatial domain decomposition, distributed memory PC clusters.

Summary
This paper presents parallel DEM software developed for simulating granular material on distributed memory PC clusters. Static domain decomposition and message-passing inter-processor communication are implemented in the DEM code. A novel algorithm for handling particles that migrate between processors is incorporated in the domain decomposition framework. The aim of this paper is two-fold: to investigate the computational performance of the developed software and to contribute to the understanding of algorithmic aspects related to the poly-disperse properties of heterogeneous granular material.

The granular material is regarded as a system of a finite number of spherical particles. The inter-particle contact model considers a combination of elasticity, viscous damping and friction force effects. A detailed description of the DEM technique applied may also be found in [1].
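
The contact law actually used is specified in [1]; purely as a hedged illustration of how the elastic, viscous damping and friction effects listed above combine, a generic visco-elastic contact with Coulomb friction between particles i and j can be written as

\[
\mathbf{F}_{ij} = \mathbf{F}^{n}_{ij} + \mathbf{F}^{t}_{ij}, \qquad
\mathbf{F}^{n}_{ij} = \left( k_n \, \delta_{ij} - \gamma_n \, \dot{\delta}_{ij} \right) \mathbf{n}_{ij}, \qquad
\bigl\| \mathbf{F}^{t}_{ij} \bigr\| \le \mu \, \bigl\| \mathbf{F}^{n}_{ij} \bigr\|,
\]

where \(\delta_{ij}\) is the particle overlap, \(k_n\) the normal stiffness, \(\gamma_n\) the viscous damping coefficient, \(\mathbf{n}_{ij}\) the unit contact normal and \(\mu\) the friction coefficient; these symbols are generic DEM notation, not notation taken from [1].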

The parallel algorithms are implemented in the FORTRAN 90 code DEMMAT_PAR. Inter-processor communication is implemented in the code by subroutines of the message-passing library MPI [2]. The communication is performed by the MPI routines MPI_ISEND, MPI_REQUEST_FREE and MPI_RECV. The non-blocking communication routines significantly improve the parallel efficiency of the code. Computations were performed on the PC cluster VILKAS (NPACI Rocks Cluster, RedHat Linux Enterprise 3.0). The cluster consisted of 20 processors (Intel Pentium 4, 3.2 GHz, 1 GB RAM per processor) connected by a D-Link DGS-1224T Gigabit Smart Switch (24-port 10/100/1000 Mbps Base-T module).
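
As a minimal sketch of this communication pattern (not the DEMMAT_PAR source; the subroutine name, buffer layout, message tag and counts below are assumptions), each process posts a non-blocking send of the particle data leaving its subdomain, releases the request handle, and receives the incoming data with a blocking receive:

! Minimal sketch of the non-blocking exchange pattern described above (not
! the DEMMAT_PAR source; buffer layout, message tag and counts are assumed):
! each process posts MPI_ISEND for particle data leaving its subdomain,
! releases the request handle and receives incoming data with MPI_RECV.
subroutine exchange_boundary(buf_out, n_out, dest, buf_in, n_in, src, comm)
  implicit none
  include 'mpif.h'
  integer, intent(in) :: n_out, n_in, dest, src, comm
  double precision, intent(in)  :: buf_out(n_out)   ! packed data for the neighbour
  double precision, intent(out) :: buf_in(n_in)     ! packed data from the neighbour
  integer :: req, ierr, status(MPI_STATUS_SIZE)

  ! post the send without waiting for completion
  call MPI_ISEND(buf_out, n_out, MPI_DOUBLE_PRECISION, dest, 0, comm, req, ierr)
  ! release the request handle; the send buffer is not reused afterwards
  call MPI_REQUEST_FREE(req, ierr)
  ! blocking receive of the data sent by the neighbouring subdomain
  call MPI_RECV(buf_in, n_in, MPI_DOUBLE_PRECISION, src, 0, comm, status, ierr)
end subroutine exchange_boundary

Because the request is freed immediately, the send buffer must not be overwritten before the matching receive on the neighbouring process has completed.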

A numerical illustration addresses tri-axial compacting of granular material by rigid walls. Two types of benchmark problems, concerning mono-disperse and heterogeneous poly-disperse granular material, were solved. Two examples of the mono-disperse material were considered: 20000 particles with the diameter d=2.2 mm and 100000 particles with the diameter d=1.3 mm. The initial composition of the particles is a regular lattice-type structure, where the particles are embedded in the centers of the cells. The poly-dispersed material is composed of particles with a specified normal particle size distribution. It is represented by two sets containing 19890 and 100037 particles, with dmin=1.031 mm and dmax=4.466 mm for the first set and dmin=0.603 mm and dmax=2.613 mm for the second. The initial irregular particle arrangement is generated by employing the algorithm presented by Jiang et al. [3].
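
The initial irregular packing itself follows Jiang et al. [3]; the fragment below is only a hedged sketch of drawing diameters from a normal distribution truncated to [dmin, dmax], with the mean and standard deviation passed in as assumed parameters:

! Hedged sketch only: the initial packing itself follows Jiang et al. [3];
! this fragment merely shows how diameters obeying a normal distribution
! truncated to [dmin, dmax] could be drawn (dmean and dsigma are assumed
! input parameters, not values taken from the paper).
subroutine generate_diameters(d, n, dmin, dmax, dmean, dsigma)
  implicit none
  integer, intent(in) :: n
  double precision, intent(in)  :: dmin, dmax, dmean, dsigma
  double precision, intent(out) :: d(n)
  double precision, parameter :: pi = 3.141592653589793d0
  double precision :: u1, u2, z
  integer :: i

  i = 1
  do while (i <= n)
    call random_number(u1)
    call random_number(u2)
    if (u1 <= 0.0d0) cycle                          ! avoid log(0)
    z = sqrt(-2.0d0*log(u1)) * cos(2.0d0*pi*u2)     ! Box-Muller transform
    d(i) = dmean + dsigma*z
    if (d(i) >= dmin .and. d(i) <= dmax) i = i + 1  ! reject out-of-range sizes
  end do
end subroutine generate_diameters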

A one-dimensional strip-type spatial decomposition, with each strip containing a roughly equal number of particles to ensure static load balancing on the homogeneous PC cluster, is applied. In the present work, a speed-up of 8.81 has been obtained for the poly-dispersed material and a speed-up of 9.22 for the mono-dispersed material on 10 processors.
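
The decomposition step can be pictured by the following sketch (an assumed helper routine, not part of DEMMAT_PAR), which places the strip boundaries so that each of the nproc strips along the x-direction holds roughly n/nproc particles:

! Sketch of the strip decomposition (an assumed helper, not part of
! DEMMAT_PAR): the x-range is cut into nproc strips whose boundaries are
! chosen from a histogram of particle x-coordinates so that every strip
! holds roughly n/nproc particles, giving static load balance on a
! homogeneous cluster.
subroutine strip_bounds(x, n, nproc, xmin, xmax, bounds)
  implicit none
  integer, intent(in) :: n, nproc
  double precision, intent(in)  :: x(n), xmin, xmax
  double precision, intent(out) :: bounds(0:nproc)  ! strip p owns (bounds(p-1), bounds(p)]
  integer, parameter :: nbin = 1000                 ! resolution of the histogram
  integer :: hist(nbin), i, ib, p, acc, quota
  double precision :: dx

  dx = (xmax - xmin) / nbin
  hist = 0
  do i = 1, n                                       ! histogram of x-coordinates
    ib = max(1, min(nbin, int((x(i) - xmin)/dx) + 1))
    hist(ib) = hist(ib) + 1
  end do

  bounds(0) = xmin
  bounds(nproc) = xmax
  quota = n / nproc                                 ! target particles per strip
  acc = 0
  p = 1
  do ib = 1, nbin
    acc = acc + hist(ib)
    if (p < nproc .and. acc >= p*quota) then        ! close strip p at this bin edge
      bounds(p) = xmin + ib*dx
      p = p + 1
    end if
  end do
end subroutine strip_bounds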

The parallel algorithm and software developed are applied to the simulation of compacting granular material. Simulation results are presented in terms of the wall pressures, the coordination number and the packing density. Based on the current investigation, the following concluding remarks may be drawn:

  • The measured parallel performance of the benchmark problems shows that the developed parallel DEM software is well suited to distributed memory PC clusters. The speed-up of the parallel algorithms, based on the spatial domain decomposition and the implemented inter-processor communication, is competitive with the speed-ups obtained by other researchers and reported in the literature.
  • Increasing heterogeneity restricts the subdivision of the computational domain into cells, because the minimal cell size is limited by the maximal particle diameter (see the sketch after this list). This affects the computational performance of the software, especially for a smaller number of particles: increasing heterogeneity by up to 3.5 times increases the run time. Heterogeneity of the material also has a negative influence on the parallel efficiency of the code because of the increased inter-processor data transfer.
  • The application of the developed parallel DEM software to the simulation of particle compacting allows various scientific and industrial issues, including heterogeneity, to be investigated. However, the numerical efficiency can still be improved by intelligent handling of the particular problem, taking its physical nature into account.
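
The cell-size constraint mentioned in the second remark can be illustrated by a short sketch (an assumed helper, not taken from DEMMAT_PAR):

! Illustration of the cell-size constraint in the second remark (an
! assumed helper, not from DEMMAT_PAR): the number of neighbour-search
! cells per direction is chosen so that each cell edge stays at least as
! large as the maximal particle diameter dmax.
subroutine cell_grid(lx, ly, lz, dmax, nx, ny, nz)
  implicit none
  double precision, intent(in) :: lx, ly, lz        ! domain edge lengths
  double precision, intent(in) :: dmax              ! maximal particle diameter
  integer, intent(out) :: nx, ny, nz                ! cells per direction

  ! int() rounds down, so lx/nx >= dmax, ly/ny >= dmax, lz/nz >= dmax;
  ! a larger dmax therefore forces a coarser grid with more particles per cell
  nx = max(1, int(lx / dmax))
  ny = max(1, int(ly / dmax))
  nz = max(1, int(lz / dmax))
end subroutine cell_grid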

References
[1] R. Balevicius, R. Kacianauskas, A. Dziugys, A. Maknickas, K. Vislavicius, "DEMMAT code for numerical simulation of multi-particle dynamics", Information Technology and Control, 34(1), 71-78, 2005.
[2] P. Pacheco, "Parallel programming with MPI", Morgan Kaufmann Publishers Inc., San Francisco, 1997.
[3] M.J. Jiang, J.M. Konrad, S. Leroueil, "An efficient technique for generating homogeneous specimens for DEM studies", Computers and Geotechnics, 30(7), 579-697, 2003. doi:10.1016/S0266-352X(03)00064-8
