A Generic Approach for Learning Performance Assessment
Functions
This paper presents a generic machine learning-based
approach to devising performance assessment functions for
any kind of optimization problem. The need for a performance
assessment process that takes the robustness of solutions
into account is stressed, and a general methodology for devising
a function that estimates such performance on any given
engineering problem is formalized. This methodology serves as
the basis for training machine learning models capable of
assessing the performance of real-world time series classification
algorithms, using ratings from expert engineers as training
data. Although the methodology is demonstrated on a time series
classification problem, it has generic validity and can easily
be applied to devise arbitrary scalar performance functions for
complex multi-objective problems as well. The trained machine
learning models can be understood as performance assessment
functions that, having learned the engineer's "gut instinct", are
able to assess robustness performance in a much more objective
way than a human expert could. They represent key components
for enabling automatic, computationally intensive processes
such as multi-objective optimization or feature selection.
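The core idea of learning a scalar assessment function from expert ratings can be sketched in a few lines. The following is a minimal illustration only, not the paper's actual method: the robustness feature, the rating scale, and all data values are hypothetical assumptions, and a simple least-squares fit stands in for whatever machine learning model the paper trains.

```python
# Illustrative sketch (not the paper's method): learn a scalar
# performance assessment function from expert ratings.
# All feature names and data values below are hypothetical.

def fit_linear(xs, ys):
    """Least-squares fit of rating = a * feature + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

# Hypothetical training data: a robustness feature of each candidate
# solution (e.g. accuracy retained under input noise) paired with an
# expert engineer's rating on a 0-10 scale.
robustness = [0.55, 0.70, 0.80, 0.95]
expert_rating = [3.0, 5.5, 7.0, 9.5]

a, b = fit_linear(robustness, expert_rating)

def assess(feature):
    """Learned assessment function: robustness feature -> scalar rating."""
    return a * feature + b

print(round(assess(0.85), 2))  # → 7.87
```

Once trained, such a function can be called millions of times inside an optimization loop, which is exactly what makes it a key component for automatic processes like multi-objective optimization or feature selection.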
Helmut A. Mayer
Last modified: Jul 23 2010