PROCEEDINGS OF THE EIGHTH INTERNATIONAL CONFERENCE ON COMPUTATIONAL STRUCTURES TECHNOLOGY
Edited by: B.H.V. Topping, G. Montero and R. Montenegro
Plastic Design of Frames Using Heuristic Algorithms
A. Kaveh and M. Jahanshahi
Department of Civil Engineering, Iran University of Science and Technology, Narmak, Tehran, Iran
A. Kaveh, M. Jahanshahi, "Plastic Design of Frames Using Heuristic Algorithms", in B.H.V. Topping, G. Montero, R. Montenegro, (Editors), "Proceedings of the Eighth International Conference on Computational Structures Technology", Civil-Comp Press, Stirlingshire, UK, Paper 108, 2006. doi:10.4203/ccp.83.108
Keywords: plastic analysis and design, collapse load factor, genetic algorithm, ant colony, frame, work.
The minimum and maximum principles are the basis of nearly all the analytical methods used for the plastic analysis and design of frames. The most frequently used method based on the minimum principle is the combination of elementary mechanisms, first developed by Neal and Symonds [2,3].
The problem of plastic analysis and design of frames with rigid joints was formulated as a linear programming problem by Charnes and Greenberg as early as 1951. Further progress in this field is attributed to Heyman, Horne, Baker and Heyman, Jennings, Watwood, Gorman, Thierauf, Kaveh, and Kaveh and Khanlari, among others. Considerable progress has been made in the past decade; comprehensive references can be found in Munro and Livesley. Plastic analysis and design of frames using a combination of elementary mechanisms has limitations which prevent it from being used as a common analysis tool. Notable among these limitations are the extensive number of mechanisms which have to be considered and the tedious work of combining them to find the true mechanism. There is also the possibility that the assumed collapse mechanism for a given frame and the corresponding loading is not the correct one, in which case the computed collapse load factor will only be an upper bound on the actual collapse load factor. Considering these problems, it is important to develop an algorithm capable of finding the collapse load factor and the corresponding mechanism as quickly and accurately as possible.
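The combination of elementary mechanisms and the upper-bound character of the method can be illustrated with a textbook fixed-base portal frame. The numerical values below are illustrative assumptions, not taken from the paper:

```python
# Illustrative sketch: collapse load factor of a fixed-base portal frame
# by the combination of elementary mechanisms (all values assumed).
Mp = 100.0          # uniform plastic moment capacity, kN*m
V, H = 50.0, 40.0   # vertical load at beam midspan, horizontal load at beam level, kN
L, h = 4.0, 5.0     # half-span of the beam and column height, m

# Work equations: lambda * (external work) = (internal work at plastic hinges).
mechanisms = {
    "beam":     4 * Mp / (V * L),          # hinges at beam ends and midspan
    "sway":     4 * Mp / (H * h),          # hinges at column tops and bases
    "combined": 6 * Mp / (V * L + H * h),  # beam + sway; the hinge at the
                                           # windward knee cancels on combination
}

# Upper-bound theorem: every assumed mechanism gives an upper bound, so the
# collapse load factor is the minimum over all mechanisms considered.
lam_c = min(mechanisms.values())
print(mechanisms)         # beam: 2.0, sway: 2.0, combined: 1.5
print("lambda_c =", lam_c)  # the combined mechanism governs: 1.5
```

Missing the governing combined mechanism here would leave the unsafe estimate 2.0, which is exactly the risk the text describes.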
In recent years, heuristic algorithms such as the genetic algorithm, ant colony optimization and simulated annealing have found many applications in optimization problems. The essence of these algorithms lies in the fact that they do not depend on the specific search space to which they are applied, which extends their generality. In this work, genetic and ant colony algorithms are used to find the collapse load factor of two-dimensional frames, and their accuracies are compared. It is observed that if these algorithms are finely tuned and their parameters adjusted carefully, good results are obtained.
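As a rough illustration of the idea (not the authors' implementation), a minimal genetic algorithm can search over binary combinations of elementary mechanisms, using the work balance of the combined mechanism as the fitness to be minimized. The portal-frame hinge-rotation data below are assumed for illustration:

```python
import random

# Hedged GA sketch: minimize the collapse load factor over combinations of
# elementary mechanisms. Each elementary mechanism is given by its hinge
# rotations at sections A..E of an assumed portal frame and its external
# work per unit load factor (illustrative data; real problems have many more).
Mp = 100.0
ELEMENTARY = [
    # (rotations at [A, B, C, D, E], external work coefficient)
    ([0, -1, 2, -1, 0], 200.0),   # beam mechanism: V * L = 50 * 4
    ([-1, 1, 0, -1, 1], 200.0),   # sway mechanism: H * h = 40 * 5
]

def load_factor(genes):
    """Combine the selected mechanisms and return the work-balance load factor."""
    if not any(genes):
        return float("inf")
    rot = [sum(g * m[i] for g, (m, _) in zip(genes, ELEMENTARY)) for i in range(5)]
    internal = Mp * sum(abs(r) for r in rot)   # opposite hinge rotations cancel
    external = sum(g * w for g, (_, w) in zip(genes, ELEMENTARY))
    return internal / external

def ga(pop_size=20, generations=30, pmut=0.2, seed=1):
    rng = random.Random(seed)
    n = len(ELEMENTARY)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=load_factor)              # lower load factor = fitter
        survivors = pop[: pop_size // 2]       # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n) if n > 1 else 0
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < pmut:            # bit-flip mutation
                j = rng.randrange(n)
                child[j] ^= 1
            children.append(child)
        pop = survivors + children
    best = min(pop, key=load_factor)
    return best, load_factor(best)

best, lam = ga()
print("best combination:", best, "collapse load factor:", lam)
```

Because every candidate combination is a valid mechanism, the GA always returns an upper bound; tuning the population size, mutation rate and number of generations governs how close it gets to the true collapse load factor, which mirrors the parameter sensitivity reported in the paper.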
It is observed that both the genetic algorithm and the ant colony algorithm compute reasonable load factors for the frame examples presented. However, the computed collapse mechanism may deviate considerably from the actual one. This is more pronounced for the genetic algorithm; the mechanisms found by the ant colony algorithm conform more closely to the actual mechanisms. These observations apply to the specific parameters used in this work, and different parameter settings may yield different results.
Another point which should be considered in comparing the genetic algorithm with the ant colony algorithm is time. The genetic algorithm usually accomplishes the task of finding collapse mechanisms much faster than the ant colony algorithm, although the parameters used can considerably influence this speed. Therefore, when the collapse load factor is more important than the collapse mechanism, the genetic algorithm is recommended. However, if time is not a constraint and a better approximation to the actual collapse mechanism is desired, it is best to use the ant colony algorithm.
A further point about the performance of ant colony algorithms is worth mentioning. The new sub-optimal mechanisms obtained in each iteration expand the graph up to twice its original size. This is not obligatory, and one can place constraints on the size of the graph. Clearly, if the graph is allowed to grow to three or four times its original size, the space in which the actual mechanism is sought becomes larger, and there is a greater probability that the actual mechanism resides in this space. However, the time consumed in searching such a large space must be taken into account. The trade-off between search time and solution quality is again manifest.