Civil-Comp Proceedings
ISSN 1759-3433
CCP: 89
PROCEEDINGS OF THE SIXTH INTERNATIONAL CONFERENCE ON ENGINEERING COMPUTATIONAL TECHNOLOGY
Edited by: M. Papadrakakis and B.H.V. Topping
Paper 88

Evaluation of Different OpenMP-Oriented Implementations for the Wave Model WAM Cycle 4.5

S. Moghimi1, M.F. Doustar2 and A. Behrens3

1Department of Civil Engineering, 2Department of Computer Engineering,
Arak University, Iran
3GKSS Research Center, Institute for Coastal Research, Hamburg, Germany

Full Bibliographic Reference for this paper
S. Moghimi, M.F. Doustar, A. Behrens, "Evaluation of Different OpenMP-Oriented Implementations for the Wave Model WAM Cycle 4.5", in M. Papadrakakis, B.H.V. Topping, (Editors), "Proceedings of the Sixth International Conference on Engineering Computational Technology", Civil-Comp Press, Stirlingshire, UK, Paper 88, 2008. doi:10.4203/ccp.89.88
Keywords: OpenMP, shared memory, parallel computing, compiler directives, speed up, wave spectrum.

Summary
Parallelization was carried out on the latest version of WAM (WAve Model Cycle 4.5), which is developed at the GKSS Research Center, Germany. The performance of different OpenMP-oriented implementations has been measured and compared for an application of the model to the Caspian Sea. WAM is the pioneer of the spectral third-generation wave models, which solve the action density equation in four dimensions (two spatial dimensions, wave direction and wave frequency) [3,4]. In this research, different parallel algorithms using OpenMP have been implemented for the most time-consuming subroutines of the model source code, with the aim of finding, beyond the traditional OpenMP approaches, the best solution for reducing the turn-around time of the Caspian Sea operational wave forecasting system.

The possibility of an incremental approach to parallelization, together with the different parallelization schemes provided by OpenMP, makes it easy to parallelize each part of the code independently and to compare the resulting performance [2]. In contrast to message-passing parallelization, this is a relatively simple way to port a sequential code to a parallel one [1], and it scales with the number of processors and hence with the power of the available machines.

From the results obtained by parallelizing IMPLCH and PROPAGS, it can be concluded that parallelizing subroutines that contribute little to the overall execution time of the model may yield no appreciable speed-up, because of the overhead produced by creating parallel regions. When the amount of parallelized work is much larger than the overhead introduced, however, the benefit of parallelization becomes clear.
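As an illustration of the loop-level, incremental OpenMP approach described above, the sketch below shows a work-sharing loop of the kind that could be applied to a time-consuming spectral subroutine such as PROPAGS. It is not taken from the WAM source (which is written in Fortran); it is a minimal C example, and all names in it (propagate_step, n_points, n_bins, spectrum) are hypothetical placeholders.

#include <omp.h>

/* Hypothetical propagation-like kernel: one update per spectral bin
 * at every grid point. A single parallel region spans the whole
 * point loop, so the cost of creating the region is paid once and
 * stays small relative to the work inside it -- the condition the
 * summary identifies for obtaining a visible speed-up. */
void propagate_step(int n_points, int n_bins, double *spectrum)
{
    #pragma omp parallel for schedule(static)
    for (int ip = 0; ip < n_points; ++ip) {
        for (int k = 0; k < n_bins; ++k) {
            /* placeholder update of spectral bin k at grid point ip */
            spectrum[(long)ip * n_bins + k] *= 0.999;
        }
    }
}

Because each part of the code can be parallelized independently in this way, a subroutine can be parallelized and timed on its own before moving on to the next one.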

References
[1] R. Berrendorf, G. Nieken, "Performance Characteristics for OpenMP Constructs on Different Parallel Computer Architectures", Central Institute for Applied Mathematics, Research Centre Jülich, Germany, 1999.
[2] M. Sato, "OpenMP: Parallel Programming API for Shared Memory Multiprocessors and On-Chip Multiprocessors", Proceedings of the 15th International Symposium on System Synthesis (ISSS '02), pp. 109-111, 2002.
[3] S. Hasselmann, K. Hasselmann, "Computations and parameterizations of the nonlinear energy transfer in a gravity wave spectrum. Part I: A new method for efficient computations of the exact nonlinear transfer integral", J. Phys. Oceanogr., 15, No. 11, 1369-1377, 1985. doi:10.1175/1520-0485(1985)015<1369:CAPOTN>2.0.CO;2
[4] S. Hasselmann, K. Hasselmann, J.H. Allender, T.P. Barnett, "Computations and parameterizations of the nonlinear energy transfer in a gravity wave spectrum. Part II: Parameterizations of the nonlinear energy transfer for application in wave models", J. Phys. Oceanogr., 15, No. 11, 1378-1391, 1985. doi:10.1175/1520-0485(1985)015<1378:CAPOTN>2.0.CO;2
