Computational Science, Engineering & Technology Series
ISSN 1759-3158
Edited by: B.H.V. Topping, J.M. Adam, F.J. Pallarés, R. Bru and M.L. Romero
Chapter 12

High Performance Computing on Low Cost Computers: A Review of Parallel and Distributed Computing Methodologies for Finite Element Analysis

R.I. Mackie

Civil Engineering, School of Engineering, Physics and Mathematics, University of Dundee, United Kingdom

Full Bibliographic Reference for this chapter
R.I. Mackie, "High Performance Computing on Low Cost Computers: A Review of Parallel and Distributed Computing Methodologies for Finite Element Analysis", in B.H.V. Topping, J.M. Adam, F.J. Pallarés, R. Bru and M.L. Romero, (Editors), "Developments and Applications in Engineering Computational Technology", Saxe-Coburg Publications, Stirlingshire, UK, Chapter 12, pp 263-283, 2010. doi:10.4203/csets.26.12
Keywords: distributed computing, finite element analysis, object-oriented, parallel processing, component oriented.

Parallel computing has long been an important part of engineering computing, but until recently it was a rather specialised area limited to supercomputers and clusters of high-specification workstations. This branch of computing was, and still is, often referred to as high performance computing (HPC). While there is still a definite place for supercomputing, techniques once the preserve of HPC are increasingly relevant for computing on standard desktop and laptop computers. The power of desktop computers has increased tremendously over the years. Although the speed increase of individual processors has slowed in recent years, virtually all computers are now multi-core machines, typically with two, three or four cores, so the typical computer today is capable of parallel processing. Furthermore, computers are linked together via the internet, and locally via wired or wireless networks. This change in architecture requires a change in programming model if the full potential is to be realised. Distributed and parallel computing is therefore no longer the preserve of specialist machines, but is relevant to all computing.
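The change in programming model the paragraph above refers to can be illustrated with a minimal sketch (not taken from the chapter; the class name and the choice of a summation task are purely illustrative): a sequential loop uses only one core regardless of how many are available, while a parallelised version of the same computation distributes the work across them.

```java
import java.util.stream.LongStream;

public class MultiCoreDemo {
    public static void main(String[] args) {
        // A typical modern desktop or laptop reports more than one core.
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("available cores: " + cores);

        // Same computation, two programming models: the sequential
        // stream runs on one core; the parallel stream splits the
        // range across the available cores and combines the results.
        long n = 10_000_000L;
        long sequential = LongStream.rangeClosed(1, n).sum();
        long parallel   = LongStream.rangeClosed(1, n).parallel().sum();

        // Both models must produce the same answer.
        System.out.println(sequential == parallel);
    }
}
```

The point is that the hardware alone delivers nothing: the parallel version exists only because the code was written against a parallel programming model.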

This paper reviews the software engineering side of realising the potential of today's computers, with particular focus on the application to finite element analysis. The paper considers three main software technologies for implementing parallelism: MPI, .NET and Java. These are examined from several angles, in particular their programming models and the influence of those models on program design. Attention is also given more generally to the role of object- and component-oriented program design methods.
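The difference between the programming models can be sketched briefly. MPI expresses a decomposition through explicit message passing between separate processes, whereas .NET and Java offer shared-memory threading within a single process. The following is a minimal shared-memory sketch in Java (illustrative only; the class and method names are hypothetical and do not come from the chapter): each worker computes a partial dot product over its own slice of two arrays, and the partial results are then combined.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SharedMemoryDot {
    // Parallel dot product: the index range is split into contiguous
    // chunks, one per worker; under MPI the same decomposition would
    // instead be expressed with explicit sends and receives.
    static double parallelDot(double[] a, double[] b, int nWorkers)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(nWorkers);
        int chunk = (a.length + nWorkers - 1) / nWorkers;
        List<Future<Double>> futures = new ArrayList<>();
        for (int w = 0; w < nWorkers; w++) {
            final int lo = w * chunk;
            final int hi = Math.min(a.length, lo + chunk);
            futures.add(pool.submit(() -> {
                double s = 0.0;
                for (int i = lo; i < hi; i++) s += a[i] * b[i];
                return s;
            }));
        }
        // Combine the partial results; in shared memory this is a
        // simple reduction over futures rather than a message exchange.
        double total = 0.0;
        for (Future<Double> f : futures) total += f.get();
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        double[] a = {1, 2, 3, 4}, b = {5, 6, 7, 8};
        System.out.println(parallelDot(a, b, 2)); // prints 70.0
    }
}
```

The contrast matters for program design: in the shared-memory model the data structures stay whole and only the work is divided, whereas a message-passing design must also decide how the data themselves are distributed.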

Consideration is given to the mathematical methods used, with special emphasis on domain decomposition methods. The interplay between software engineering, the mathematical methods used, and user interaction is examined. Particular attention is paid to the role of object- and component-oriented programming methods, and to how they facilitate the use of domain decomposition methods and their integration with machine architectures and with user-interaction features. Some work using the forthcoming Task Parallel Library (TPL) for .NET is also presented.
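The core domain decomposition idea can be shown in a highly simplified sketch (this is not the chapter's code, and the mesh, load distribution and names are invented for illustration): the element list is partitioned into subdomains, each subdomain assembles its local contribution independently and in parallel, and the global vector is the sum of the subdomain vectors, with shared interface nodes receiving contributions from more than one subdomain. The parallel loop over subdomains uses Java's `IntStream.parallel()`, which plays a role analogous to a TPL `Parallel.For` in .NET.

```java
import java.util.Arrays;
import java.util.stream.IntStream;

public class DomainDecompositionSketch {
    // Assemble a global nodal load vector from element loads, with the
    // elements block-partitioned among nSub subdomains.
    public static double[] assemble(int nNodes, int[][] elements,
                                    double[] elementLoads, int nSub) {
        int chunk = (elements.length + nSub - 1) / nSub;
        double[][] local = new double[nSub][nNodes];

        // Each subdomain assembles its own local vector independently,
        // so the loop over subdomains can run in parallel.
        IntStream.range(0, nSub).parallel().forEach(s -> {
            int lo = s * chunk, hi = Math.min(elements.length, lo + chunk);
            for (int e = lo; e < hi; e++) {
                // Distribute each element load equally to its nodes.
                double share = elementLoads[e] / elements[e].length;
                for (int node : elements[e]) local[s][node] += share;
            }
        });

        // Combine: the global vector is the sum of the subdomain
        // vectors; interface nodes pick up several contributions.
        double[] global = new double[nNodes];
        for (double[] l : local)
            for (int i = 0; i < nNodes; i++) global[i] += l[i];
        return global;
    }

    public static void main(String[] args) {
        int[][] elements = {{0, 1}, {1, 2}, {2, 3}}; // 1D mesh, 4 nodes
        double[] loads = {2.0, 2.0, 2.0};
        System.out.println(Arrays.toString(assemble(4, elements, loads, 2)));
        // interior nodes 1 and 2 receive shares from two elements
    }
}
```

Real domain decomposition solvers must additionally handle the coupling of the interface unknowns between subdomains; the sketch shows only the embarrassingly parallel local-assembly step that makes the approach attractive on multi-core machines.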

Speed comparisons are presented for C++ with MPI, .NET and Java. Overall, C++ with MPI is faster than .NET or Java, but the difference is not great enough to rule out the use of .NET or Java.
