PROCEEDINGS OF THE SIXTH INTERNATIONAL CONFERENCE ON ENGINEERING COMPUTATIONAL TECHNOLOGY
Edited by: M. Papadrakakis and B.H.V. Topping
An Approach to Distributed Simulations with SystemC
V. Galiano1, H. Migallón1, D. Pérez-Caparrós1, M. Martínez2 and J.A. Palomino3
1Department of Physics and Computer Architectures, University Miguel Hernández, Elche, Alicante, Spain
V. Galiano, H. Migallón, D. Pérez-Caparrós, M. Martínez, J.A. Palomino, "An Approach to Distributed Simulations with SystemC", in M. Papadrakakis, B.H.V. Topping, (Editors), "Proceedings of the Sixth International Conference on Engineering Computational Technology", Civil-Comp Press, Stirlingshire, UK, Paper 95, 2008. doi:10.4203/ccp.89.95
Keywords: SystemC, distributed systems, RTL simulations, MPI, cluster, shared memory.
SystemC has been established as the standard platform for the design of hardware models, especially in high-level modeling. SystemC adds to the C++ language features that are essential for modeling microelectronic systems: concurrency, hierarchy, time handling and event handling. SystemC also offers important improvements in this area with respect to VHDL or Verilog.
An important step in microelectronic system development is the simulation of the developed model. The time cost of fully simulating new designs can be reduced by distributing the simulation over multiple processors. A typical discrete event simulation (DES) is composed of a chronological sequence of events, and a distributed DES is composed of so-called logical processes (LPs), that is, the different parts of the model under test. These LPs are essentially autonomous DESs. There are two basic synchronization approaches to maintain model consistency and guarantee a correct interaction between LPs: conservative methods process only those events which are deemed unable to affect other unprocessed events, while optimistic methods allow speculation and recover from any resulting constraint violations.
There have been several attempts to parallelize SystemC, which can be grouped into two philosophies. The first parallelizes/distributes SystemC by modifying its kernel, which has some advantages but also serious disadvantages. The second wraps the communications between LPs using either a parallelization library such as MPI or plain TCP/IP sockets.
As a test model we have used a dual-processor system with a shared data RAM, which has been distributed into two simulation kernels. The two kernels have been spread across different computing nodes of a cluster. For the communication and synchronization between the two kernels, two approaches have been used: the Digital Force Synchronization Library, and MPI. The MPI approach has been developed following Hamabe's solution.
The distributed implementations achieve a considerable performance gain when the computation load is greater than the overhead of the synchronization libraries. The best results are obtained in the shared-memory case, although they depend strongly on processor performance. In the distributed-memory approach the communication overhead is more significant, and similar results are achieved on all three platforms.
The simulation results show that a distributed SystemC model can achieve a considerable performance gain once the model reaches a certain level of computation load per signal synchronization cycle. In the simulated model, the shared-memory approach obtains better results than the distributed-memory one. The results obtained encourage us to continue working on the MPI implementation, and our future work will focus on implementing a new communication library that can be used with a wider range of SystemC models.