
python – Training in batches but testing on single data items in TensorFlow?


I have trained a convolutional neural network with a batch size of 10.
At test time, however, I want to predict the classification of each data item individually rather than in batches, which gives the error:

Assign requires shapes of both tensors to match. lhs shape= [1,3] rhs shape= [10,3]

I understand that the 10 refers to batch_size and the 3 to the number of classes I am classifying into.

Is it not possible to train with batches and test on single items?

Update:

Training phase:

batch_size = 10
classes = 3
# vlimit is some constant: the same for the training and testing phases
X = tf.placeholder(tf.float32, [batch_size, vlimit], name='X_placeholder')
Y = tf.placeholder(tf.int32, [batch_size, classes], name='Y_placeholder')
w = tf.Variable(tf.random_normal(shape=[vlimit, classes], stddev=0.01), name='weights')
b = tf.Variable(tf.ones([batch_size, classes]), name="bias")
logits = tf.matmul(X, w) + b
entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y, name='loss')
loss = tf.reduce_mean(entropy)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss)

Testing phase:

batch_size = 1
classes = 3
X = tf.placeholder(tf.float32, [batch_size, vlimit], name='X_placeholder')
Y = tf.placeholder(tf.int32, [batch_size, classes], name='Y_placeholder')
w = tf.Variable(tf.random_normal(shape=[vlimit, classes], stddev=0.01), name='weights')
b = tf.Variable(tf.ones([batch_size, classes]), name="bias")
logits = tf.matmul(X, w) + b
entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y, name='loss')
loss = tf.reduce_mean(entropy)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss)

Solution:

When defining your placeholders, use:

X = tf.placeholder(tf.float32, [None, vlimit], name='X_placeholder')
Y = tf.placeholder(tf.int32, [None, classes], name='Y_placeholder')
...

instead, for both your training and testing phases (in fact, you should not need to redefine these for the testing phase at all). Likewise, define your bias as:

b = tf.Variable(tf.ones([classes]), name="bias")

Otherwise you are training a separate bias for each sample in your batch, which is not what you want.

TensorFlow should then automatically unroll along the first dimension of your input and recognize it as the batch size, so for training you can feed it batches of 10, and for testing you can feed it single samples (or batches of 100, or whatever).
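
Putting the two fixes together, here is a minimal sketch of the corrected setup, assuming TensorFlow 1.x; vlimit, learning_rate, and the random stand-in data are illustrative placeholders rather than values from the original post. The labels placeholder is float here because softmax_cross_entropy_with_logits expects one-hot labels of the same dtype as the logits. The same graph and the same variables serve both a batch of 10 at training time and a single sample at test time:

import numpy as np
import tensorflow as tf

vlimit = 100          # feature size; placeholder value for illustration
classes = 3
learning_rate = 0.001

# First dimension is None, so any batch size can be fed at run time.
X = tf.placeholder(tf.float32, [None, vlimit], name='X_placeholder')
Y = tf.placeholder(tf.float32, [None, classes], name='Y_placeholder')

w = tf.Variable(tf.random_normal(shape=[vlimit, classes], stddev=0.01), name='weights')
b = tf.Variable(tf.ones([classes]), name='bias')  # one bias per class, broadcast over the batch

logits = tf.matmul(X, w) + b
entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y, name='loss')
loss = tf.reduce_mean(entropy)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss)
prediction = tf.argmax(logits, axis=1)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # Training: feed batches of 10 (random data stands in for the real dataset).
    for _ in range(100):
        x_batch = np.random.rand(10, vlimit).astype(np.float32)
        y_batch = np.eye(classes)[np.random.randint(classes, size=10)].astype(np.float32)
        sess.run(optimizer, feed_dict={X: x_batch, Y: y_batch})

    # Testing: feed a single sample through the very same graph.
    x_single = np.random.rand(1, vlimit).astype(np.float32)
    print(sess.run(prediction, feed_dict={X: x_single}))

Because the variable shapes ([vlimit, classes] and [classes]) no longer depend on the batch size, a checkpoint saved with tf.train.Saver during training can also be restored unchanged for single-sample inference, which is exactly the Assign shape mismatch the question runs into.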

Tags: python, machine-learning, neural-network, tensorflow, conv-neural-network
Source: https://codeday.me/bug/20190611/1216856.html