Evolving Artificial Neural Networks for Nonlinear Feature Construction
We use neuroevolution to construct nonlinear transformation functions that map points in the original feature space to augmented pattern vectors and improve the performance of generic classifiers. Our research demonstrates that evolutionary algorithms can be applied both to adapt the weights of a fully connected standard multi-layer perceptron (MLP) and to optimize the topology of a generalized multi-layer perceptron (GMLP). The evaluation of the MLPs on four commonly used data sets shows an improvement in classification accuracy ranging from 4 to 13 percentage points over the performance on the original pattern set. The GMLPs obtain slightly better accuracy while conserving 14% to 54% of all neurons and between 40% and 89% of all connections compared to the standard MLP.
Helmut A. Mayer