What is the difference between TensorFlow's batch normalization implementations?
Question:
TensorFlow seems to implement at least 3 versions of batch normalization:
tf.nn.batch_normalization
tf.layers.batch_normalization
tf.contrib.layers.batch_norm
These all have different arguments and documentation.
What is the difference between these, and which one should I use?
Answer:
They are actually very different.
- nn.batch_normalization performs the basic operation (i.e. a simple normalization).
- layers.batch_normalization is a batchnorm "layer", i.e. it takes care of setting up the trainable parameters etc. At the end of the day, it is a wrapper around nn.batch_normalization. Chances are this is the one you want to use, unless you want to take care of setting up variables etc. yourself.
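To make the "basic operation" concrete: nn.batch_normalization applies the formula y = scale * (x - mean) / sqrt(variance + eps) + offset elementwise. Here is a minimal pure-Python sketch of that computation; the function and argument names are illustrative, not TensorFlow's API.

```python
import math

def batch_normalize(xs, mean, variance, offset, scale, eps=1e-3):
    # The core batchnorm formula applied to each element:
    #   y = scale * (x - mean) / sqrt(variance + eps) + offset
    inv_std = 1.0 / math.sqrt(variance + eps)
    return [scale * (x - mean) * inv_std + offset for x in xs]

# Normalize a small batch using its own statistics (what a batchnorm
# layer does at training time before applying scale/offset).
xs = [1.0, 2.0, 3.0, 4.0]
mean = sum(xs) / len(xs)
variance = sum((x - mean) ** 2 for x in xs) / len(xs)
ys = batch_normalize(xs, mean, variance, offset=0.0, scale=1.0)
print(ys)
```

The layer versions additionally create and learn the scale (gamma) and offset (beta) variables, and track running averages of mean and variance for use at inference time; the formula itself stays the same.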
It's like the difference between, e.g., nn.conv2d and layers.conv2d.
As for the contrib version, I can't say for sure, but it seems to me like an experimental version with some extra parameters not available in the "regular" layers one.