How can I batch inputs together for TensorFlow?
I'm trying to batch together the inputs for a neural network I'm working on so I can feed them into TensorFlow, as in the TensorFlow MNIST tutorial. However, I can't find any way of doing this, and it isn't covered in the tutorial.
input = tf.placeholder(tf.float32, [10, 10])
...
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
inputs = ...  # a list containing 50 of the inputs
sess.run(accuracy, feed_dict={input: inputs})
This raises the following error:
ValueError: Cannot feed value of shape (50, 10, 10) for Tensor 'Placeholder:0', which has shape '(10, 10)'
I understand why I'm getting the above error; I just don't know how to get TensorFlow to treat my inputs as a batch of inputs rather than thinking I'm trying to feed them all in as one shape.
Thanks very much for your help!
You need to modify the signature of your placeholder. Let's break down the error message:
ValueError: Cannot feed value of shape (50, 10, 10) for
Tensor 'Placeholder:0', which has shape '(10, 10)'
Your inputs variable is the one that has shape (50, 10, 10), which means 50 elements of shape (10, 10), and the Tensor Placeholder:0 is your input variable. If you print input.name you will get the value Placeholder:0.

"Cannot feed value" means that TensorFlow cannot assign inputs to input.
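To see the mismatch directly, you can compare the two shapes. This is just a quick check, assuming input and inputs are defined as in the question:

import numpy as np

print(input.name)                # Placeholder:0
print(input.shape)               # (10, 10)
print(np.asarray(inputs).shape)  # (50, 10, 10) -- note the extra leading batch dimension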
A first quick solution is to fix the shape of the placeholder input to
input = tf.placeholder(tf.float32, [50, 10, 10])
but each time you want to modify the size of the batch you will need to update the batch size in your input.

A better way to specify the batch size is to put an undefined shape dimension for the batch size using None:
input = tf.placeholder(tf.float32, [None, 10, 10])
This will now work with any batch size, from 1 up to the hardware limits of your architecture.
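For completeness, here is a minimal end-to-end sketch showing that the same graph now accepts batches of different sizes. It assumes TensorFlow 1.x, where tf.placeholder and tf.Session are available (in TensorFlow 2.x you would go through tf.compat.v1), and uses tf.reduce_mean as a stand-in for the real network:

import numpy as np
import tensorflow as tf

# Batch dimension left undefined with None.
input = tf.placeholder(tf.float32, [None, 10, 10])

# Stand-in for the real network: just average everything.
mean = tf.reduce_mean(input)

with tf.Session() as sess:
    # The same placeholder accepts a batch of 50 and a batch of 1.
    print(sess.run(mean, feed_dict={input: np.ones((50, 10, 10))}))
    print(sess.run(mean, feed_dict={input: np.ones((1, 10, 10))}))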