ValueError: Error when checking input: expected lstm_1_input to have 3 dimensions, but got array with shape (393613, 50)
I have an error in Keras and I can't find the solution. I have searched the whole internet and still have no answer ^^ Here is my code:
model = Sequential()
model.add(LSTM(32, return_sequences=True, input_shape=X.values.shape))
model.add(LSTM(32, return_sequences=True))
model.add(LSTM(32))
model.add(Dense(10, activation="softmax"))
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=['accuracy'])
The error is in the second line. It says: "ValueError: Error when checking input: expected lstm_1_input to have 3 dimensions, but got array with shape (393613, 50)". The shape of my dataframe X is correct, and the error pops up when I try to train the model:
model.fit(X.values, Y.values, batch_size=200, epochs=10, validation_split=0.05)
I hope someone can help me :-)
Btw., here is the model.summary():
Layer (type)                 Output Shape              Param #
lstm_1 (LSTM)                (None, 393613, 32)        10624
lstm_2 (LSTM)                (None, 393613, 32)        8320
lstm_3 (LSTM)                (None, 32)                8320
Total params: 27,594
Trainable params: 27,594
Non-trainable params: 0
Kind regards, Nicolas
While initialising the first layer, you are passing two values as input_shape=X.values.shape.
Keras already expects the number of rows per batch to be None; at runtime that value is determined by batch_size= (200 in your case).
So internally it changes the expected input shape for layer 1 to (NO_OF_ROWS_PER_BATCH, NO_OF_ROWS_IN_DATA_SET, NO_OF_FEATURES), i.e. (None, 393613, 50), which has 3 dimensions, while the array you pass to fit() has only 2.
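To make the mismatch concrete, here is a minimal sketch (X_vals is a zero-filled stand-in for X.values, not the asker's real data) that prints the 2-D shape handed to fit() next to the 3-D shape the first layer now expects:
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM
# Stand-in for X.values: 393613 rows, 50 columns (2-D).
X_vals = np.zeros((393613, 50))
model = Sequential()
model.add(LSTM(32, return_sequences=True, input_shape=X_vals.shape))
print(X_vals.ndim)        # 2  -> what model.fit() receives
print(model.input_shape)  # (None, 393613, 50) -> the 3 dimensions the layer expects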
To fix this, all you have to do is pass one value in input_shape, which is the number of features. Keras already accepts None as a placeholder for the number of rows per batch.
So input_shape=(X.values.shape[1],) should do the trick.
model.add(LSTM(32, return_sequences=True, input_shape=(X.values.shape[1],)))
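Note that an LSTM layer consumes sequences, so its input must ultimately be 3-D as (samples, timesteps, features). If the dimension error persists with the one-value input_shape, a closely related variant reshapes the data so each row becomes a sequence of a single timestep. A minimal sketch with hypothetical placeholder arrays standing in for X.values and Y.values:
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense
# Hypothetical stand-ins: 1000 rows, 50 features, 10 one-hot classes.
X_vals = np.random.rand(1000, 50)
Y_onehot = np.eye(10)[np.random.randint(0, 10, size=1000)]
# Treat every row as a sequence of one timestep: (samples, 1, features).
X_seq = X_vals.reshape(-1, 1, X_vals.shape[1])
model = Sequential()
model.add(LSTM(32, return_sequences=True, input_shape=(1, X_vals.shape[1])))
model.add(LSTM(32, return_sequences=True))
model.add(LSTM(32))
model.add(Dense(10, activation="softmax"))
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(X_seq, Y_onehot, batch_size=200, epochs=1, validation_split=0.05)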