Computational & Technology Resources
An online resource for computational, engineering & technology publications
Civil-Comp Conferences
ISSN 2753-3239, CCC: 12
PROCEEDINGS OF THE EIGHTH INTERNATIONAL CONFERENCE ON PARALLEL, DISTRIBUTED, GPU AND CLOUD COMPUTING FOR ENGINEERING Edited by: P. Iványi, J. Kruis and B.H.V. Topping
Paper 2.2
Parallel Application of Multi-Freedom Constraints Using Master-Slave Method in Sparse Linear Systems
C. Topal1, N. Muhtaroglu2 and G. Kiziltas1
1Mechatronics Engineering, Sabanci University, Istanbul, Turkey
Full Bibliographic Reference for this paper
C. Topal, N. Muhtaroglu, G. Kiziltas, "Parallel Application of Multi-Freedom Constraints Using Master-Slave Method in Sparse Linear Systems", in P. Iványi, J. Kruis, B.H.V. Topping (Editors), "Proceedings of the Eighth International Conference on Parallel, Distributed, GPU and Cloud Computing for Engineering", Civil-Comp Press, Edinburgh, UK, Online volume: CCC 12, Paper 2.2, 2025.
Keywords: linear equations, sparse matrices, multi-freedom constraints, parallel computation, MPI, PETSc.
Abstract
Multi-freedom constraints (MFCs) are commonly used in matrix formulations to enforce dependencies among multiple components, particularly in structural analysis, where they are defined in terms of the degrees of freedom (DOFs) at nodes or computation points. A widely used approach for implementing MFCs is the master-slave elimination method, favored for its simplicity and its ability to reduce the number of unknowns. While the method is straightforward to implement with full matrix storage, that representation leads to increased memory usage; applying it to sparse matrices, by contrast, introduces added complexity. This paper introduces a scalable and reusable implementation of the master-slave method tailored to large-scale linear systems with multiple non-homogeneous constraints. The approach leverages parallel programming and distributed processing to handle the computational demands efficiently. PETSc 3.23.1 is used as the parallel computing tool because of its higher-level encapsulation of MPI operations and its built-in sparse matrix representations. The algorithm is designed for efficient memory handling, and MPI enables load balancing across processors and threads. Benchmark results show that the proposed algorithm speeds up the solution of linear systems with multi-freedom constraints.
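The master-slave elimination the abstract describes can be illustrated on a small serial example. In this method a slave DOF is expressed as a linear combination of master DOFs plus a non-homogeneous term, u_s = Σ_j c_j u_mj + g, which is encoded in a transformation matrix T so that the reduced system becomes Tᵀ K T û = Tᵀ (f − K g). The sketch below, which uses SciPy sparse matrices rather than the paper's PETSc/MPI implementation, is purely illustrative; the function name, signature, and the 3×3 test system are assumptions, not taken from the paper.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def apply_mfc_master_slave(K, f, slave, masters, coeffs, g):
    """Eliminate one slave DOF via master-slave elimination.

    Enforces u[slave] = sum_j coeffs[j] * u[masters[j]] + g by the
    change of variables u = T @ u_hat + g_vec, giving the reduced
    system (T^T K T) u_hat = T^T (f - K g_vec).
    Illustrative sketch only; names and signature are assumptions.
    """
    n = K.shape[0]
    keep = [i for i in range(n) if i != slave]   # retained (master) DOFs
    # Build the n x (n-1) transformation matrix T.
    T = sp.lil_matrix((n, n - 1))
    for new, old in enumerate(keep):
        T[old, new] = 1.0                        # retained DOFs map to themselves
    for m, c in zip(masters, coeffs):
        T[slave, keep.index(m)] = c              # slave row holds the coefficients
    T = T.tocsr()
    g_vec = np.zeros(n)
    g_vec[slave] = g                             # non-homogeneous term
    K_red = (T.T @ K @ T).tocsr()                # reduced, still symmetric
    f_red = T.T @ (f - K @ g_vec)
    return K_red, f_red, T, g_vec

# Usage: a small symmetric system with the constraint u2 = 0.5*u0 + 0.1.
K = sp.csr_matrix(np.array([[4., -1., 0.],
                            [-1., 4., -1.],
                            [0., -1., 4.]]))
f = np.array([1., 2., 3.])
K_red, f_red, T, g_vec = apply_mfc_master_slave(
    K, f, slave=2, masters=[0], coeffs=[0.5], g=0.1)
u_hat = spsolve(K_red, f_red)   # solve the reduced 2x2 system
u = T @ u_hat + g_vec           # recover the full 3-DOF solution
```

The transformation removes one unknown per constraint and preserves symmetry of the reduced operator, which is the property that makes the method attractive for iterative solvers; the sparse-matrix bookkeeping shown here (building T, forming TᵀKT) is exactly the part that becomes non-trivial when the matrices are distributed across MPI ranks.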
Download the full-text of this paper (PDF, 9 pages, 725 KB)