Neural Net Input Optimization by Kyle M. Druey
This article is based on the author's own research into neural networks and personal experience developing neural net trading models. The findings are presented from a practical, user-oriented perspective rather than a scientific research standpoint.
The popularity of neural networks for financial market forecasting has increased significantly in recent years. The apparent ease in developing neural networks and their potential for accurate forecasts has served as an incentive for many to analyze this technology. But there's usually more than meets the eye: Most novice neural net developers are probably unaware of the time and energy required to produce effective neural net models. Neural network development requires extensive experimentation, but the results can be well worth the effort.
Many factors should be considered when a neural network is being developed. These include, but are not limited to, the learning rate, the types of transfer functions and the number of hidden nodes. However, many neural net developers have discovered that the most important elements of a successful neural network are data preprocessing and input optimization.
This analysis examines different methods to optimize the number of inputs for inclusion in a neural network forecasting model. It also presents a logical alternative to indiscriminately throwing numerous inputs into the neural net and awaiting a random result. Optimization focuses on identifying the minimum number of inputs needed to produce the lowest test set error.
Identifying appropriate inputs, though equally important, is not addressed in this analysis. Four neural network input optimization methods are analyzed: correlation coefficient analysis, input connection weight analysis, sensitivity analysis and systematic testing. The effects of multicollinearity on neural networks, and methods to eliminate it, are also addressed. Finally, a new optimization method based on the T-statistic is presented.
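To make the first of these methods concrete, a minimal sketch of correlation coefficient analysis follows, together with a simple multicollinearity screen. This is an assumption-laden illustration, not the article's procedure: the function names and the 0.9 redundancy threshold are hypothetical choices.

```python
import numpy as np

def rank_inputs_by_correlation(X, y):
    """Rank candidate input columns by the absolute value of their
    correlation with the target series."""
    corrs = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    order = np.argsort(-np.abs(corrs))   # strongest candidates first
    return order, corrs

def flag_collinear_pairs(X, threshold=0.9):
    """Flag pairs of inputs whose mutual correlation exceeds the threshold;
    such pairs carry largely redundant information, so one of each pair
    is a candidate for removal."""
    C = np.corrcoef(X, rowvar=False)
    n = C.shape[0]
    return [(i, j, float(C[i, j]))
            for i in range(n) for j in range(i + 1, n)
            if abs(C[i, j]) > threshold]
```

Inputs that rank high against the target but are flagged as collinear with each other are the typical targets of the pruning methods discussed later.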