Squashing Functions by Clifford J. Sherry, Ph.D.
Here's one way to mathematically transform data from one scale to another while maintaining the information content.
Those who use neural nets to model equity markets and develop trading strategies probably apply a squashing function to the individual inputs to their nets. A squashing function preprocesses the data. Typical data, such as stock prices, begin at a positive number and have no upside limit, but neural networks work best with numbers that fall within a fixed range. Often, one of two sigmoidal functions is used to change data from having an unlimited range to having a limited range. They have the following form:
f(x) = (e^x - e^-x)/(e^x + e^-x); and f(x) = 1/(1 + e^-x)
where e is the base of the natural logarithms (approximately 2.71828). These functions vary from -1 to +1 and from zero to +1, respectively. The behavior of these two functions applied to arbitrary numbers can be seen in Figures 1 and 2. On examination, note that the first function effectively reaches its minimum output at an input near -6.4 and its maximum at an input near +8, while the second function reaches its minimum at an input near -16 and its maximum near +16.9. This means that, in the case of the first sigmoid function, any variability in your data outside the range of -6.4 to +8 will be lost, because the function will simply report -1 or +1, respectively. For the second sigmoid function, you will lose all the variability of data lying outside -16 to +16.9.
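The saturation effect described above is easy to verify directly. Here is a minimal sketch in Python (not from the article; the function names are my own) that evaluates both sigmoids at a few sample inputs, showing how the outputs flatten toward their limits once the input moves far enough from zero:

```python
import math

def sigmoid1(x):
    # First sigmoid: (e^x - e^-x)/(e^x + e^-x), output range (-1, +1)
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

def sigmoid2(x):
    # Second sigmoid: 1/(1 + e^-x), output range (0, +1)
    return 1.0 / (1.0 + math.exp(-x))

# Sample inputs from deep in the saturated regions through zero and back
for x in (-20, -8, -2, 0, 2, 8, 20):
    print(f"x = {x:+4d}   sigmoid1 = {sigmoid1(x):+.6f}   sigmoid2 = {sigmoid2(x):.6f}")
```

Running this, sigmoid1 is already indistinguishable from -1 or +1 at x = -8 or +8, and sigmoid2 is indistinguishable from 0 or +1 by x = -20 or +20: inputs that differ out in those tails squash to the same output, which is exactly the loss of variability the figures illustrate.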