
Evaluating methods for constant optimization of symbolic regression benchmark problems

Constant optimization in symbolic regression is an important task
that has been addressed by several researchers. It has been demonstrated
that continuous optimization techniques are adequate for finding good
values for the constants by minimizing the prediction error. In this
paper, we evaluate several continuous optimization methods that can
be used to perform constant optimization in symbolic regression. We
selected 14 well-known benchmark problems and tested the performance
of diverse optimization methods in finding the expected constant values,
assuming that the correct formula has already been found. The results
show that Levenberg-Marquardt (LM) presented the highest success rate
among the evaluated methods, followed by Powell's method and the
Nelder-Mead simplex. However, two benchmark problems were not solved,
and on two other problems LM was largely outperformed by the Nelder-Mead
simplex in terms of success rate. We conclude that even though a symbolic
regression technique may find the correct formula, constant optimization
may still fail; such failures may also occur during the search for a
formula and can guide the method toward the wrong solution. Moreover,
the efficiency of LM in finding high-quality solutions with only a few
function evaluations could serve as inspiration for the development of
better symbolic regression methods.
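
To make the experimental setup concrete, the sketch below illustrates the kind of constant-optimization task evaluated here: the formula's structure is fixed and only its constants are fitted by minimizing the squared prediction error, comparing SciPy's Levenberg-Marquardt, Powell, and Nelder-Mead implementations. This is a minimal illustration, not the authors' code; the benchmark function f(x) = c0*sin(c1*x) + c2, the true constants, and the data range are assumptions chosen for the example and are not one of the paper's 14 problems.

```python
# Sketch: constant optimization for a fixed symbolic-regression formula.
# Assumed illustrative benchmark: f(x) = c0 * sin(c1 * x) + c2 with
# true constants (1.5, 0.5, 2.0); not one of the paper's 14 problems.
import numpy as np
from scipy.optimize import least_squares, minimize

rng = np.random.default_rng(0)
x = rng.uniform(-5.0, 5.0, size=100)
true_c = np.array([1.5, 0.5, 2.0])

def model(c, x):
    return c[0] * np.sin(c[1] * x) + c[2]

# Noiseless targets: the "correct formula is known" setting from the paper.
y = model(true_c, x)

def residuals(c):
    return model(c, x) - y

def sse(c):
    r = residuals(c)
    return float(r @ r)

# One random starting point; a success-rate study would repeat this
# from many starts and count runs that reach near-zero error.
c0 = rng.normal(size=3)

# Levenberg-Marquardt operates on the residual vector directly.
lm = least_squares(residuals, c0, method="lm")

# Powell and Nelder-Mead are derivative-free and minimize the scalar SSE.
powell = minimize(sse, c0, method="Powell")
nm = minimize(sse, c0, method="Nelder-Mead")

for name, c, nfev in [
    ("LM", lm.x, lm.nfev),
    ("Powell", powell.x, powell.nfev),
    ("Nelder-Mead", nm.x, nm.nfev),
]:
    print(f"{name:12s} constants={np.round(c, 4)} SSE={sse(c):.3e} nfev={nfev}")
```

Comparing the reported number of function evaluations (nfev) across the three methods mirrors the abstract's observation that LM typically reaches high-quality solutions with far fewer evaluations, while restarts from different initial guesses expose the local-minimum failures that drive the differences in success rate.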

Keywords: Symbolic regression, Genetic programming, Curve fitting,
Least squares, Nonlinear regression