Computational Technology Reviews
ISSN 2044-8430
Volume 4, 2011
Current and Future Trends in the Use of Artificial Neural Networks in Engineering
I. Flood

Rinker School, College of Design, Construction, and Planning, University of Florida, Gainesville, United States of America

Full Bibliographic Reference for this paper
I. Flood, "Current and Future Trends in the Use of Artificial Neural Networks in Engineering", Computational Technology Reviews, vol. 4, pp. 93-114, 2011. doi:10.4203/ctr.4.4
Keywords: artificial embryogenesis, artificial intelligence, artificial neural networks, evolutionary computation, growth algorithms, massive neural networks, multi-stage objective functions, practical engineering problems, richly structured networks.

Summary
Artificial neural networks have gone through several cycles of development and popularity since their inception in the 1950s. As far as solving practical problems is concerned, however, they appear to have stalled at a level of complexity only slightly beyond that which can be handled by non-linear regression.

One of the main stumbling blocks has been the fact that the number of examples required to train a network tends to increase geometrically with the number of independent variables used to describe the problem. Experience indicates that five or six independent variables are about the limit of this approach, unless the variables are strongly correlated. A second main limitation has been the rigid format required for the inputs to (and outputs from) a network. There is little room for variation between examples of a problem in the number, position, timing or amplitude scales of the inputs (or outputs). Yet, there are many practical problems in engineering that cannot be described adequately using such a fixed format.
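To make the scaling concrete (a back-of-the-envelope illustration, not a result from the paper), suppose each independent variable must be sampled at, say, five representative levels to cover its range; the number of training examples needed to cover the input space then grows as 5^d with the number of variables d, as in the following Python sketch:

# Rough illustration (not from the paper): if each of d independent
# variables must be sampled at k representative levels, covering the
# input space requires on the order of k**d training examples.
def required_examples(num_variables: int, levels_per_variable: int = 5) -> int:
    """Geometric growth of training-set size with input dimensionality."""
    return levels_per_variable ** num_variables

for d in range(1, 9):
    print(f"{d} variables -> ~{required_examples(d):,} examples")

# With five levels per variable, six variables already imply ~15,625
# examples, consistent with the practical limit of five or six
# independent variables noted above.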

These limitations are not shared by biological neural networks. The question becomes, therefore, how does the brain handle these sorts of problems? While there are no definitive answers to this question, it is evident that greater size and internal structuring are key features of any neural network employed to solve anything other than relatively trivial problems. Given the lack of theory, it is suggested that an evolutionary computation approach be adopted. The challenge then becomes to find a coding system that enables the evolution of appropriate neural structures, and that it does so efficiently for networks comprising thousands or even millions of neurons. Possible ways of facilitating the computational evolution of massive richly structured neural networks are: growth algorithms; self-organizing structures; and multi-stage objective functions.
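One way to picture how these ingredients might fit together is a simple evolutionary loop in which a compact genome is first "grown" into a network structure and then scored against an objective that changes as evolution progresses. The following Python sketch is purely illustrative and is not taken from the paper; the population parameters, the growth rule, and the staging of the objective are all assumptions made for brevity.

# Illustrative sketch only: a compact genome parameterises a growth
# procedure (here, trivially, each gene sets one layer's width as a
# stand-in for a real growth rule), and fitness is evaluated against a
# multi-stage objective that changes partway through the run.
import random

random.seed(0)

GENOME_LENGTH = 8        # assumed number of growth-rule parameters
POPULATION_SIZE = 20
GENERATIONS = 30

def grow_network(genome):
    """Expand the genome into a network structure (list of layer widths)."""
    return [max(1, int(abs(g) * 100)) for g in genome]

def fitness(structure, stage):
    """Multi-stage objective: early generations reward modest structures,
    later generations reward larger overall capacity (targets assumed)."""
    size = sum(structure)
    return -abs(size - 50) if stage == 0 else -abs(size - 500)

def evolve():
    population = [[random.uniform(-1, 1) for _ in range(GENOME_LENGTH)]
                  for _ in range(POPULATION_SIZE)]
    for gen in range(GENERATIONS):
        stage = 0 if gen < GENERATIONS // 2 else 1
        scored = sorted(population,
                        key=lambda g: fitness(grow_network(g), stage),
                        reverse=True)
        parents = scored[:POPULATION_SIZE // 2]
        # Children are mutated copies of the better half of the population.
        children = [[g + random.gauss(0, 0.1) for g in p] for p in parents]
        population = parents + children
    best = max(population, key=lambda g: fitness(grow_network(g), 1))
    return grow_network(best)

print(evolve())

The point of the staging is that structures useful for a simple early task can act as scaffolding for the harder later task, so evolution never has to discover a massive, richly structured network in one step.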

Research is, of course, a long way from being able to replicate human cognitive skills using artificial neural networks, but significant advances could be made in the shorter term in solving a range of nontrivial engineering problems. This paper explores these issues with reference to the problem of determining a truck's loading attributes from the dynamic strain response it induces in a bridge.
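For context, the kind of fixed-format network that serves as the point of departure for this problem might look like the sketch below, which maps a fixed-length sample of a bridge's strain response to a pair of truck loading attributes. The data are synthetic placeholders, and the modelling choices (scikit-learn's MLPRegressor, 100 strain samples per crossing, two output attributes) are assumptions for illustration only, not the paper's model.

# Illustrative sketch only: a conventional fixed-format network mapping a
# sampled strain record to truck loading attributes, using synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

N_SAMPLES = 100          # assumed fixed-length strain record per crossing
N_EXAMPLES = 500

# Synthetic stand-in data: strain records and (gross weight, axle spacing).
strain_records = rng.normal(size=(N_EXAMPLES, N_SAMPLES))
load_attributes = rng.normal(size=(N_EXAMPLES, 2))

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500)
model.fit(strain_records, load_attributes)

# Predicting for a new crossing requires exactly the same input format.
new_record = rng.normal(size=(1, N_SAMPLES))
print(model.predict(new_record))

The rigidity described above shows up directly in this setup: every training example and every prediction must supply exactly the same 100-sample record, regardless of how many axles the truck has or how long the crossing takes.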
