
Civil-Comp Proceedings
ISSN 1759-3433, CCP: 86
PROCEEDINGS OF THE ELEVENTH INTERNATIONAL CONFERENCE ON CIVIL, STRUCTURAL AND ENVIRONMENTAL ENGINEERING COMPUTING
Edited by: B.H.V. Topping
Paper 94
An Adaptive Response Surface Approach for Structural Reliability Analyses based on Support Vector Machines
T. Most
Institute of Structural Mechanics, Bauhaus-University Weimar, Germany

T. Most, "An Adaptive Response Surface Approach for Structural Reliability Analyses based on Support Vector Machines", in B.H.V. Topping, (Editor), "Proceedings of the Eleventh International Conference on Civil, Structural and Environmental Engineering Computing", Civil-Comp Press, Stirlingshire, UK, Paper 94, 2007. doi:10.4203/ccp.86.94
Keywords: reliability, response surface, support vector machines, adaptivity.
Summary
In structural design, the consideration of uncertainties is becoming increasingly important.
Generally, a reliability analysis is computationally very demanding due to the large number of simulations required, where each simulation corresponds to a realization of the random material, geometry or loading properties.
For this reason, many approximation methods have been developed which allow a reliability analysis with a smaller number of samples. The first- and second-order reliability methods (FORM and SORM) are two of these methods; they assume the existence of a single design point and apply a linear or higher-order approximation around it.
Another well-known method is the response surface method, where the true limit state function is replaced by an approximation function.
Early methods used a global polynomial approximation.
Later, local approximation schemes such as Moving Least Squares, Kriging and radial basis functions, as well as sophisticated global methods such as artificial neural networks, were applied.
However, the generalization of these methods to higher numbers of random variables remains very difficult, mainly because the number of support points required for the approximation increases dramatically with increasing dimension.
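The growth in support points can be made concrete for the polynomial case mentioned above: a full quadratic response surface in n variables has (n+1)(n+2)/2 coefficients, so at least that many support points are needed. This small counting sketch (not taken from the paper) illustrates the growth with dimension:

```python
def quadratic_terms(n):
    # Number of coefficients of a full quadratic polynomial in n variables:
    # 1 constant + n linear + n(n+1)/2 quadratic (including mixed) terms.
    return (n + 1) * (n + 2) // 2

for n in (2, 5, 10, 50, 100):
    print(n, quadratic_terms(n))
```

For n = 100 this already requires 5151 support points, i.e. 5151 full simulations just to fit one quadratic surface.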
The basic idea of the response surface method is to replace the true limit state function by an approximation, the so-called response surface, whose function values can be computed more easily. This generally requires a smooth limit state function. In this work, we approximate the indicator function instead, which has the advantage that its function values can be determined in every case. Since the indicator function takes only the values one and zero (failure and safe domain), the samples merely have to be classified into two classes. A very efficient tool for such classification purposes are Support Vector Machines (SVM), a method from statistical learning theory. Its algorithmic principle is to create a hyperplane which separates the data into two classes using the maximum margin principle.

The SVM method is well suited to a combination with Monte Carlo simulation (MCS). The training data are generated by stretched Latin Hypercube Sampling, which leads to an almost regular distribution for a given number of training points. Starting from this initial set of training points, an adaptive scheme is very promising: new training points are introduced stepwise at those MCS samples which lie in the margin of the support vector machine and have the shortest distance to the trace of the SVM function. After each adaptation step, the SVM is trained again including the new point. In order to obtain a regular distribution of training points along the real classification boundary, this approach is combined with the potential energy corresponding to the existing training points; regions with small potential energy are far away from existing training points. This adaptive method can be applied to smooth and discontinuous limit state functions with single and multiple design points.
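The adaptive classification loop described above can be sketched as follows. This is a simplified illustration, not the paper's implementation: scikit-learn's `SVC` stands in for the SVM, the limit state function `g` and all parameter values are assumptions, the stretched Latin Hypercube start is replaced by a regular grid, and the potential-energy criterion is omitted in favour of a pure distance-to-boundary rule.

```python
import numpy as np
from sklearn.svm import SVC

def g(x):
    # Hypothetical limit state function: failure where g(x) < 0.
    return 3.0 - x[:, 0] - x[:, 1]

def indicator(x):
    # Indicator function: 1 in the failure domain, 0 in the safe domain.
    return (g(x) < 0).astype(int)

# Initial, almost regularly distributed training points (regular grid here
# instead of stretched Latin Hypercube Sampling, for brevity).
grid = np.linspace(-4.0, 4.0, 5)
X_train = np.array([[a, b] for a in grid for b in grid])
y_train = indicator(X_train)

# Monte Carlo pool on which the failure probability is finally estimated.
rng = np.random.default_rng(0)
X_mcs = rng.normal(size=(10_000, 2))

for step in range(10):
    svm = SVC(kernel="rbf", C=100.0).fit(X_train, y_train)
    # MCS sample closest to the current SVM boundary (smallest |f|) is the
    # most informative candidate; evaluate the indicator only there.
    f = np.abs(svm.decision_function(X_mcs))
    candidate = X_mcs[np.argmin(f)][None, :]
    X_train = np.vstack([X_train, candidate])
    y_train = np.append(y_train, indicator(candidate))

# Retrain with all points and classify the whole MCS pool.
svm = SVC(kernel="rbf", C=100.0).fit(X_train, y_train)
pf = svm.predict(X_mcs).mean()
```

Only the 25 grid points plus 10 adaptively chosen samples require an evaluation of the limit state, while the failure probability `pf` is estimated by classifying all 10,000 MCS samples with the trained SVM.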
For sets of correlated random variables with non-Gaussian distribution types, the approximation scheme is applied in the uncorrelated standard Gaussian space, and the set of random variables is transformed using the Nataf model. In the paper it is shown that the algorithm converges very quickly to an accurate estimate of the failure probability, whereby only a small number of samples actually has to be calculated. The new samples obtained from the algorithm are found mainly along the boundary between the failure and safe domains. With an increasing number of random variables, the number of required training data grows only linearly, which enables the investigation of problems with 50, 100 or more random variables with only a moderate number of samples.
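The mapping between the correlated non-Gaussian variables and the uncorrelated standard Gaussian space can be sketched along the following lines. This is a simplified illustration under stated assumptions: the lognormal marginals, the correlation value, and the use of the unadjusted Gaussian correlation coefficient are chosen for brevity (the full Nataf model additionally corrects the correlation coefficient in Gaussian space to match the target correlation of the physical variables).

```python
import numpy as np
from scipy import stats

rho = 0.6  # assumed correlation coefficient in Gaussian space
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))

rng = np.random.default_rng(0)
u = rng.normal(size=(1000, 2))   # uncorrelated standard Gaussian samples
z = u @ L.T                      # correlated Gaussian samples

# Map each marginal to an (assumed) lognormal distribution via the CDFs.
x = stats.lognorm.ppf(stats.norm.cdf(z), s=0.25)

# Inverse mapping back to the uncorrelated standard Gaussian space, where
# the SVM response surface is built and evaluated.
z_back = stats.norm.ppf(stats.lognorm.cdf(x, s=0.25))
u_back = z_back @ np.linalg.inv(L).T
```

The SVM classification then operates entirely on the `u` coordinates, while the limit state is evaluated on the physical samples `x`.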
