Prediction with an artificial neural network

Problem description:

I have been looking at research papers that attempt to predict stock prices. I have noticed in these papers that an activation function is applied to the output, using one of the following types: unipolar sigmoid, bipolar sigmoid, hyperbolic tangent, or radial basis function.

My question: if one of the above activation functions is applied to the output, how can it be used to predict a stock price, i.e. a value like $103.56? Most of these functions have minimum and maximum values bounded to (0, 1) or (-1, 1).

Reply to bakkal: Before being fed into the ANN, the inputs were normalized with the 'zscore' function defined in MATLAB, which subtracts the mean and divides by the standard deviation of the data. The target outputs were also normalized, by dividing by their maximum values, keeping in mind the upper and lower limits of the respective activation functions: (0, 1) for the unipolar sigmoid, (-1, 1) for the bipolar sigmoid and the hyperbolic tangent.
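As an illustration of that preprocessing, here is a minimal numpy sketch under stated assumptions: the example data, variable names, and the choice of scaling the targets by their maximum are made up for illustration and are not the original MATLAB code.

import numpy as np

X = np.array([[101.2, 99.8], [103.5, 100.1], [98.7, 97.4]])   # hypothetical input features
y = np.array([103.56, 104.10, 99.25])                         # hypothetical target prices

X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)   # z-score each input column, as MATLAB's zscore does
y_max = y.max()
y_scaled = y / y_max                              # targets now lie in (0, 1], usable with a unipolar sigmoid output

p = 0.9948              # a network output in (0, 1), e.g. for the first sample
price = p * y_max       # about 103.56, the de-normalized price prediction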

Hi, as mentioned below, if the activation function is not applied to the output, could someone explain the paragraph in bold? Thanks.

We use normalization to map the target values to the range (0, 1) or (-1, 1), or whatever range your activation function requires. Generally, we also map the input values to a range near (-1, 1). The most frequently used normalization for scaling the input values is Gaussian normalization. If the input vector is x and you are working with numpy arrays, then the following is the Gaussian normalization of x:

xScaled = (x-x.mean())/(x.std())

where mean() gives the average and std() gives the standard deviation.
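A self-contained version (assuming numpy and a made-up input vector) confirms that the scaled values end up with zero mean and unit standard deviation:

import numpy as np

x = np.array([100.2, 101.5, 99.8, 103.56, 102.1])   # made-up input values
xScaled = (x - x.mean()) / x.std()                   # Gaussian (z-score) normalization
print(xScaled.mean(), xScaled.std())                 # approximately 0.0 and 1.0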

Another normalization is:

xScaled = (x-x.min())/(x.max()-x.min())

which scales the input vector values to the range (0,1).
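As another small sketch with the same made-up vector, the min-max scaling maps the smallest value to 0 and the largest to 1:

import numpy as np

x = np.array([100.2, 101.5, 99.8, 103.56, 102.1])    # made-up input values
xScaled = (x - x.min()) / (x.max() - x.min())        # min-max scaling to [0, 1]
print(xScaled.min(), xScaled.max())                  # 0.0 and 1.0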

So you work with normalized input and output values in order to speed up the learning process. You can also refer to Andrew Ng's course to see why this helps. When you want to scale the normalized values back to their actual values, you can use the reverse normalization. For example, for the (0, 1) normalization above, the reverse normalization would be:

x = x.min() + (x.max()-x.min())*xScaled   # x.min() and x.max() are the min and max of the original, unscaled vector

You can similarly obtain the reverse normalization for the Gaussian case.
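A minimal sketch of that, assuming the original mean and standard deviation were saved before scaling (they cannot be recovered from xScaled alone):

import numpy as np

x = np.array([100.2, 101.5, 99.8, 103.56, 102.1])   # made-up original values
xMean, xStd = x.mean(), x.std()                      # keep these; they are needed to undo the scaling
xScaled = (x - xMean) / xStd                         # Gaussian normalization
xRecovered = xMean + xStd * xScaled                  # reverse Gaussian normalization, equal to x again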