
Civil-Comp Proceedings
ISSN 1759-3433, CCP: 83
PROCEEDINGS OF THE EIGHTH INTERNATIONAL CONFERENCE ON COMPUTATIONAL STRUCTURES TECHNOLOGY
Edited by: B.H.V. Topping, G. Montero and R. Montenegro
Paper 176
A Conjugate Gradient Quasi-Newton Method for Structural Optimisation
K. Davey
School of Mechanical, Aerospace and Civil Engineering, The University of Manchester, United Kingdom

K. Davey, "A Conjugate Gradient Quasi-Newton Method for Structural Optimisation", in B.H.V. Topping, G. Montero, R. Montenegro, (Editors), "Proceedings of the Eighth International Conference on Computational Structures Technology", Civil-Comp Press, Stirlingshire, UK, Paper 176, 2006. doi:10.4203/ccp.83.176
Keywords: conjugate gradients, optimisation, quasi-Newton, structural.
Summary
This paper is concerned with assessing the performance of a combined conjugate gradient and quasi-Newton method on a number of relatively simple nonlinear structural optimisation problems. The test problems are deliberately poorly conditioned, since the new method proves particularly adept at solving problems of this type. The new method has two principal features: quadratic termination, and matrix updating of the kind found in the quasi-Newton method. The updating is optional, however: if no updating takes place the method reduces to the standard conjugate gradient method, while if full updating is performed the quasi-Newton method is recovered. The paper shows that the best performance is obtained with partial updating, the appropriate extent of which depends on the ill-conditioning of the problem.
All of the modern iterative methods display poor performance when applied to very poorly conditioned systems. This has led to the investigation of self-preconditioning and hybrid solvers (see Saad, for example). Most successful modern methods are based on some form of successive orthogonalisation of Krylov-type spaces combined with a minimisation step. Although these methods are excellent linear equation solvers, they suffer from some disadvantages when applied to the nonlinear systems arising in design optimisation. A typical approach is to combine them with a Newton method to give inexact Newton methods. However, the link between the linear solver and the Newton method is then tenuous, with little useful information being passed between the solvers.

In this paper a new approach is proposed which essentially combines the preconditioned conjugate gradient method (PCGM) with the quasi-Newton method. The method can be classified as an inexact Newton method, but has the added feature that the preconditioner develops over both linear and nonlinear iterations. The method can also be applied directly to function minimisation. It is founded on the ideas of Davey and Ward, and has similarities with the method of van der Vorst and Vuik, which is founded on rank-one matrix updates, and with that of Axelsson and Vassilevski. The new features of the proposed method are the linkage with quasi-Newton methods, the use of rank-two updates, and preconditioners that develop over linear and nonlinear iterations. Particular focus in this paper is on the use of direct approximations to the inverse Hessian matrices that are generated by the method.

A relatively simple nonlinear test problem is considered here, consisting of the minimisation of a function whose coefficients are sequences of pseudo-random numbers: two sequences take values between 0 and 1 over part of the index range and between 0 and 10 over the remainder, and a further sequence takes values between 0 and 1.
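The combination of a conjugate gradient iteration with rank-two quasi-Newton updates of the preconditioner can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact scheme: here a limited-memory (L-BFGS-style) inverse-Hessian approximation, built from step and gradient-change pairs harvested from the CG iterates themselves, serves as a variable preconditioner in a flexible CG iteration. The parameter `m` (an assumption of this sketch) controls the extent of updating, with `m = 0` recovering the plain conjugate gradient method.

```python
import numpy as np

def lbfgs_apply(pairs, r):
    """Apply an inverse-Hessian approximation built from rank-two
    (BFGS-style) updates to the vector r via the two-loop recursion."""
    q = r.copy()
    alphas = []
    for s, y, rho in reversed(pairs):          # newest pair first
        a = rho * s.dot(q)
        alphas.append(a)
        q -= a * y
    if pairs:                                   # initial diagonal scaling
        s, y, _ = pairs[-1]
        q *= s.dot(y) / y.dot(y)
    for (s, y, rho), a in zip(pairs, reversed(alphas)):  # oldest first
        b = rho * y.dot(q)
        q += (a - b) * s
    return q

def updating_cg(A, b, m=5, tol=1e-8, maxit=5000):
    """Flexible preconditioned CG for SPD A, with the preconditioner
    refined by rank-two updates gathered during the iteration.
    m = number of retained updates; m = 0 gives plain CG."""
    x = np.zeros_like(b)
    pairs = []                                  # stored (s, y, 1/y.s)
    r = b - A @ x
    z = lbfgs_apply(pairs, r)
    p = z.copy()
    its = 0
    while np.linalg.norm(r) > tol and its < maxit:
        Ap = A @ p
        alpha = r.dot(z) / p.dot(Ap)
        s = alpha * p                           # step  s = x_new - x
        y = alpha * Ap                          # yield y = A s (gradient change)
        if m > 0:
            pairs.append((s, y, 1.0 / y.dot(s)))
            pairs = pairs[-m:]                  # keep only the last m updates
        x += s
        r_new = r - y
        z_new = lbfgs_apply(pairs, r_new)
        # Polak-Ribiere-type beta, as the preconditioner varies per iteration
        beta = max(z_new.dot(r_new - r) / z.dot(r), 0.0)
        p = z_new + beta * p
        r, z = r_new, z_new
        its += 1
    return x, its
```

The key design point mirrored from the paper is that the same (s, y) information a quasi-Newton method would use is generated for free by the CG iteration, so partial updating costs little beyond storage.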
In order to produce fully populated systems, minimisation is performed with respect to transformed variables, the transformation matrix being an orthogonal matrix generated as a product of individual orthogonal matrices. The results for problem (32) are presented in Figure 1. As the nonlinear scheme progresses, the potential of the updating conjugate gradient method (UCGM) becomes clear in terms of reducing the number of linear iterations required. It is interesting to observe the ability of the UCGM to keep the number of linear iterations to convergence at a low level.

This paper is concerned with the development of the UCGM for the minimisation of nonlinear functions that are sufficiently smooth to possess a continuous Hessian. The following conclusions can be made:
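A fully populated test system of the kind described above can be generated along the following lines. This is an illustrative assumption rather than the paper's exact construction: the orthogonal transformation is built as a product of random Givens rotations, and a dense symmetric positive-definite matrix with a prescribed condition number is obtained by rotating a diagonal spectrum.

```python
import numpy as np

def random_orthogonal(n, rng, sweeps=3):
    """Orthogonal matrix built as a product of individual orthogonal
    matrices (random Givens rotations); the rotation pattern used here
    is an assumption for illustration."""
    Q = np.eye(n)
    for _ in range(sweeps):
        for i in range(n - 1):
            theta = rng.uniform(0.0, 2.0 * np.pi)
            c, s = np.cos(theta), np.sin(theta)
            G = np.eye(n)
            G[i, i] = c
            G[i, i + 1] = -s
            G[i + 1, i] = s
            G[i + 1, i + 1] = c
            Q = G @ Q                 # accumulate the product
    return Q

def dense_spd_test_matrix(n, cond, rng):
    """Fully populated SPD matrix with prescribed condition number:
    A = Q diag(d) Q^T, eigenvalues log-spaced from 1 to cond."""
    d = np.logspace(0.0, np.log10(cond), n)
    Q = random_orthogonal(n, rng)
    return Q @ np.diag(d) @ Q.T
```

Because the transformation is orthogonal, the spectrum (and hence the ill-conditioning) of the diagonal matrix is preserved exactly while the zero structure is destroyed, which is what makes such systems useful for stressing iterative solvers.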
