Civil-Comp Proceedings
ISSN 1759-3433
CCP: 96
PROCEEDINGS OF THE THIRTEENTH INTERNATIONAL CONFERENCE ON CIVIL, STRUCTURAL AND ENVIRONMENTAL ENGINEERING COMPUTING
Edited by: B.H.V. Topping and Y. Tsompanakis
Paper 120

Dynamic Analysis of Structures on Multicore Computers: Achieving Efficiency through Object Oriented Design

R.I. Mackie

Civil Engineering, School of Engineering, Physics and Mathematics, University of Dundee, United Kingdom

Full Bibliographic Reference for this paper
R.I. Mackie, "Dynamic Analysis of Structures on Multicore Computers: Achieving Efficiency through Object Oriented Design", in B.H.V. Topping, Y. Tsompanakis, (Editors), "Proceedings of the Thirteenth International Conference on Civil, Structural and Environmental Engineering Computing", Civil-Comp Press, Stirlingshire, UK, Paper 120, 2011. doi:10.4203/ccp.96.120
Keywords: component-oriented, object-oriented, eigenproblems, modal analysis, seismic analysis.

Summary
The paper examines the software design aspects of implementing parallel and distributed computing for transient structural problems, in order to take advantage of the capabilities offered by multi-core and distributed computers. This is done within the context of the seismic analysis of space trusses, but the methods are much more widely applicable. The overall design is achieved using object- and component-oriented methods, and the ideas are implemented using .NET and the Task Parallel Library (TPL).

Parallelisation and distribution are applied both to single problems and to solving multiple problems. For single problems the Hilber-Hughes-Taylor algorithm is used. Following a design pattern previously used for iterative equation solvers and eigenproblems, interfaces are used to logically separate the algorithm from the data structure. Both serial and parallel versions were implemented, the parallel version achieving its parallelism through domain decomposition implemented with the TPL, as sketched below.
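To make the separation concrete, the following is a minimal C# sketch of this pattern, not the paper's actual code: the interface name IDomainData and its members are hypothetical, and only the general idea, an interface decoupling the time-stepping algorithm from the data structure with TPL parallelism over subdomains, is taken from the paper.

```csharp
using System.Threading.Tasks;

// The time-stepping algorithm sees only this contract, so serial and
// domain-decomposed data structures are interchangeable.
// (All names here are illustrative assumptions.)
public interface IDomainData
{
    int SubdomainCount { get; }
    void AssembleEffectiveSystem(int subdomain, double dt); // effective matrix and RHS
    void SolveSubdomain(int subdomain);
    void ExchangeInterfaceForces(); // coupling between subdomains
}

public class TransientSolver
{
    private readonly IDomainData data;
    public TransientSolver(IDomainData data) { this.data = data; }

    // One time step of an implicit scheme such as Hilber-Hughes-Taylor.
    public void Step(double dt)
    {
        // The TPL spreads the per-subdomain work across the available cores.
        Parallel.For(0, data.SubdomainCount, i =>
        {
            data.AssembleEffectiveSystem(i, dt);
            data.SolveSubdomain(i);
        });
        data.ExchangeInterfaceForces(); // serial coupling step
    }
}
```

Because TransientSolver depends only on the interface, swapping the serial data structure for the domain-decomposed one requires no change to the algorithm code.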

In many situations, such as stochastic modelling and sensitivity analysis, several problems must be solved. This can be carried out in parallel on a single machine, but is especially well suited to networks of computers. The solvers in the current software were designed as objects, so each data set has its own solver object and data integrity is easily maintained. The factory object design pattern was used: factory objects create the desired solvers and, where distributed solution is used, create them on remote computers and manage those machines, so that the main client is isolated from the details (see the sketch below).
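One plausible shape for this arrangement is sketched below; ISolver, ISolverFactory, RemoteSolverProxy and the round-robin host selection are all assumptions invented for illustration rather than the paper's actual classes.

```csharp
public class ProblemData { /* model, load case, analysis parameters */ }

public interface ISolver { void Run(ProblemData problem); }

public interface ISolverFactory { ISolver CreateSolver(); }

// In-process solver: one object per data set, so each concurrent
// analysis owns its own state and data integrity is preserved.
public class InCoreSolver : ISolver
{
    public void Run(ProblemData problem) { /* time stepping here */ }
}

// Proxy standing in for a solver created on a remote machine.
public class RemoteSolverProxy : ISolver
{
    private readonly string host;
    public RemoteSolverProxy(string host) { this.host = host; }
    public void Run(ProblemData problem)
    {
        // Forward the job to 'host' and collect the results (omitted).
    }
}

public class LocalSolverFactory : ISolverFactory
{
    public ISolver CreateSolver() => new InCoreSolver();
}

// The distributed factory manages the remote machines; the client code
// is identical whichever factory it is given.
public class RemoteSolverFactory : ISolverFactory
{
    private readonly string[] hosts;
    private int next;
    public RemoteSolverFactory(string[] hosts) { this.hosts = hosts; }

    public ISolver CreateSolver() =>
        new RemoteSolverProxy(hosts[next++ % hosts.Length]); // round-robin
}
```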

The computer architectures now available greatly increase the possibilities for interaction. This introduces more complexity, and requires proper co-ordination between tasks and program control. This was implemented using an event-driven approach, something usefully facilitated by the .NET BackgroundWorker class; the TPL also provides facilities for handling the situation where the client needs to cancel ongoing tasks, as illustrated below.
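The sketch below illustrates this coordination using the real BackgroundWorker API (ReportProgress, CancellationPending, CancelAsync); the AnalysisRunner wrapper and the fixed step count are invented for the example.

```csharp
using System;
using System.ComponentModel;

public class AnalysisRunner
{
    private const int Steps = 1000; // illustrative step count
    private readonly BackgroundWorker worker = new BackgroundWorker
    {
        WorkerReportsProgress = true,
        WorkerSupportsCancellation = true
    };

    public AnalysisRunner()
    {
        worker.DoWork += (s, e) =>
        {
            for (int step = 0; step < Steps; step++)
            {
                if (worker.CancellationPending) { e.Cancel = true; return; }
                // ... advance the transient solution by one step ...
                worker.ReportProgress(100 * (step + 1) / Steps);
            }
        };
        // These events are raised via the synchronization context captured
        // at start-up (the UI thread in a GUI application), so program
        // control remains event driven.
        worker.ProgressChanged += (s, e) =>
            Console.WriteLine($"{e.ProgressPercentage}% complete");
        worker.RunWorkerCompleted += (s, e) =>
            Console.WriteLine(e.Cancelled ? "cancelled" : "done");
    }

    public void Start() => worker.RunWorkerAsync(); // runs DoWork on a pool thread
    public void Cancel() => worker.CancelAsync();   // client-requested cancellation
}
```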

The software was run on a variety of computers, including quad-core machines and a cluster of dual-core machines. Reasonable speed-up was achieved, particularly for solving multiple problems on clusters of computers.

Overall, it is concluded that modern software technologies can, and should, be used to design better scientific software, and that they allow full advantage to be taken of the computing capabilities available today.
