Computational Science, Engineering & Technology Series
ISSN 1759-3158
CSETS: 21
PARALLEL, DISTRIBUTED AND GRID COMPUTING FOR ENGINEERING
Edited by: B.H.V. Topping, P. Iványi
Chapter 15

Dynamic Fluid Structure Interaction in Parallel: A Challenge for Scalability

A.K. Slone, A.J. Williams, T.N. Croft and M. Cross

School of Engineering, Swansea University, United Kingdom

Full Bibliographic Reference for this chapter
A.K. Slone, A.J. Williams, T.N. Croft, M. Cross, "Dynamic Fluid Structure Interaction in Parallel: A Challenge for Scalability", in B.H.V. Topping, P. Iványi, (Editors), "Parallel, Distributed and Grid Computing for Engineering", Saxe-Coburg Publications, Stirlingshire, UK, Chapter 15, pp 329-350, 2009. doi:10.4203/csets.21.15
Keywords: dynamic fluid structure interaction, geometric conservation, parallelisation, scalability, group solvers, cantilever, extrusion.

Summary
Closely coupled dynamic fluid-structure interaction (DFSI) between fluid and structural domains presents a significant computational challenge.

This chapter describes an approach to the parallelisation of DFSI that employs:

a) A conventional three-phase numerical procedure for generalised Navier-Stokes flow and elastic solids;
b) A single software framework [1] embedding modules for flow, dynamic structures and mesh adaptation, all of which work from a single mesh database;
c) A solver strategy which uses a "group" domain decomposition approach (a minimal illustration of this idea follows the list);
d) A multi-phase mesh partitioning strategy.
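The group-solver idea in c) can be illustrated with a minimal Python sketch. It assumes a simple per-element physics tag and placeholder solver routines (solve_flow, solve_structure); these names are illustrative only and are not the framework's actual API.

```python
# Minimal sketch of the "group" solver idea: each sub-domain (processor)
# runs only the solver modules for the physics actually present in its
# part of the mesh. Function and field names are illustrative placeholders.

def solve_flow(elements, dt):
    # Placeholder for the flow solve on the local fluid elements.
    print(f"  flow solve on {len(elements)} fluid elements, dt={dt}")

def solve_structure(elements, dt):
    # Placeholder for the dynamic structural solve on the local solid elements.
    print(f"  structural solve on {len(elements)} solid elements, dt={dt}")

def group_solve(local_elements, dt):
    """Invoke only the solver groups whose physics exist in this sub-domain."""
    fluid = [e for e in local_elements if e["physics"] == "fluid"]
    solid = [e for e in local_elements if e["physics"] == "solid"]
    if fluid:
        solve_flow(fluid, dt)
    if solid:
        solve_structure(solid, dt)

# Example: a sub-domain holding only fluid elements skips the structural solve.
subdomain = [{"id": i, "physics": "fluid"} for i in range(100)]
group_solve(subdomain, dt=1.0e-3)
```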
Parallel scalability for computational mechanics solvers and codes is fairly straightforward for problems where the load per mesh node or element is relatively uniform, but this is not the case for multi-disciplinary (MD) and closely coupled multi-physics (MP) problems, where the compute load is not homogeneous across the whole mesh.

DFSI presents one such class of closely coupled multi-physics problems and is very compute intensive. This contribution describes a parallelisation strategy that capitalises on group solver technology, which enables only the physics active in a specific sub-domain to be solved, combined with a multi-phase partitioning strategy using the mesh partitioner JOSTLE, which enables a very even load balance to be achieved. The investigation reported here has considered two problems:
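The multi-phase partitioning idea can be illustrated independently of JOSTLE itself: rather than balancing the total element count alone, each physics type is balanced separately across the processors, so every processor receives a proportionate share of both the fluid and the structural work. The sketch below uses a naive round-robin assignment as a stand-in for the graph partitioner and ignores mesh connectivity and communication cost, which a real partitioner such as JOSTLE also takes into account.

```python
# Toy illustration of multi-phase partitioning: balance the fluid and the
# structural element sets separately across nprocs processors, so that each
# processor receives a share of *both* physics types. The round-robin
# assignment is a stand-in for a real graph partitioner and ignores
# connectivity and communication cost.

def multiphase_partition(fluid_ids, solid_ids, nprocs):
    parts = {p: {"fluid": [], "solid": []} for p in range(nprocs)}
    for i, e in enumerate(fluid_ids):
        parts[i % nprocs]["fluid"].append(e)
    for i, e in enumerate(solid_ids):
        parts[i % nprocs]["solid"].append(e)
    return parts

# Cantilever-like ratio: roughly 1% of the elements are structural.
parts = multiphase_partition(fluid_ids=range(99_000), solid_ids=range(1_000), nprocs=4)
for p, elems in parts.items():
    print(f"proc {p}: {len(elems['fluid'])} fluid, {len(elems['solid'])} solid")
```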

a) The cantilever case, in which less than 1% of the mesh elements lie in the structural sub-domain and the fluid-structure coupling is strong, i.e. one fluid time step per structural time step, which increases the compute load.
b) The extrusion case, which involves a somewhat looser fluid-structure coupling, as indicated by the 20:1 ratio of fluid to structural time steps, and has roughly equal numbers of mesh elements in the fluid and structural sub-domains. (A schematic coupling loop for both regimes is sketched after this list.)
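The two coupling regimes can be summarised schematically, assuming the straightforward sub-cycling interpretation of the quoted ratios: the structural solve (and the associated mesh movement) is performed once every "ratio" fluid steps, with ratio = 1 for the cantilever and ratio = 20 for the extrusion. The solver calls below are placeholders, not the framework's actual control loop.

```python
# Schematic DFSI coupling loop. ratio = 1 mimics the strongly coupled
# cantilever case (one structural step per fluid step); ratio = 20 mimics
# the looser extrusion coupling. All routines are placeholders.

def fluid_step(n):      print(f"fluid step {n}")
def structure_step(n):  print(f"  structural step at fluid step {n}")
def adapt_mesh(n):      print(f"  mesh adaptation at fluid step {n}")

def run_dfsi(n_fluid_steps, ratio):
    for n in range(1, n_fluid_steps + 1):
        fluid_step(n)
        if n % ratio == 0:      # couple to the structure every 'ratio' fluid steps
            structure_step(n)
            adapt_mesh(n)       # move the fluid mesh to follow the new boundary

run_dfsi(n_fluid_steps=40, ratio=20)   # extrusion-like coupling
# run_dfsi(n_fluid_steps=5, ratio=1)   # cantilever-like coupling
```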
However, for geometries with similar-sized meshes, of the order of 100,000 elements, both problems scale quite similarly and reasonably well up to 10-12 processors, although the results for the cantilever are slightly better. In both cases, beyond this point the scalability tails off fairly quickly, which is consistent with other cluster experiments with parallel multi-physics problems. The real question is to what extent the parallelisation strategy might be effective for much larger problems on considerably larger parallel systems with many more processors, e.g. into the hundreds. Evaluating the potential for larger test cases will form part of our future work.
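The scaling behaviour referred to above is conventionally expressed through parallel speedup and efficiency, S(p) = T(1)/T(p) and E(p) = S(p)/p. The brief calculation below uses invented wall-clock times purely to show the arithmetic; the figures are not taken from the chapter's results.

```python
# Parallel speedup and efficiency: S(p) = T(1)/T(p), E(p) = S(p)/p.
# The timings below are invented for illustration only and are not the
# chapter's measured results.

timings = {1: 1000.0, 4: 270.0, 8: 150.0, 12: 110.0, 16: 100.0}  # wall-clock seconds
t1 = timings[1]
for p, tp in sorted(timings.items()):
    speedup = t1 / tp
    efficiency = speedup / p
    print(f"p={p:2d}  speedup={speedup:5.2f}  efficiency={efficiency:4.2f}")
```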

References