Hands-on Tutorial: A TensorFlow MNIST Case Study (Source Code and Dataset Included)


A simple TensorFlow example

A simple TensorFlow example: define variables w and x and compute w·x. Defining variables looks much like it does in numpy; the differences are that tf requires a global variable initialization step, and that every tf operation runs inside a session. Think of tf as first setting up a canvas: all subsequent operations are like drawing on that canvas.

import tensorflow as tf
w = tf.Variable([[0.5,1.0]])
x = tf.Variable([[2.0],
                 [1.0]])
y = tf.matmul(w,x)
init_op = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init_op)
    print(sess.run(y))
[[2.]]
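For contrast, the same computation in plain numpy evaluates eagerly, with no session or initialization step:

import numpy as np
w = np.array([[0.5, 1.0]])
x = np.array([[2.0],
              [1.0]])
print(np.dot(w, x))  # [[2.]] -- same result, computed immediately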

Converting numpy arrays to tensors

Since tensors are defined so much like numpy arrays, can a numpy array be converted into a tensor directly?
Yes: tf provides convert_to_tensor for exactly this. For example:

import numpy as np
s = np.zeros((9,9))
s_tensor = tf.convert_to_tensor(s)
with tf.Session() as sess:
    print(sess.run(s_tensor))
[[0. 0. 0. 0. 0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0. 0.]]

placeholder

Earlier we set up a session (the "canvas") and created w and x on it. A placeholder instead reserves a spot on the canvas that is assigned a value later: at run time, feed_dict supplies values for the placeholders as a dictionary.

input1 = tf.placeholder(tf.float32)
input2 = tf.placeholder(tf.float32)
output = tf.multiply(input1,input2)
with tf.Session() as sess:
    print(sess.run(output,feed_dict={input1:[7.99],input2:[3.2]})) 
[25.567999]
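A placeholder can also be given an explicit shape, in which case TensorFlow validates the fed values at run time. A small sketch (the names here are illustrative):

input3 = tf.placeholder(tf.float32, shape=[None, 2])  # any number of rows, exactly 2 columns
doubled = input3 * 2
with tf.Session() as sess:
    print(sess.run(doubled, feed_dict={input3: [[1.0, 2.0]]}))  # [[2. 4.]]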

Implementing a linear function with TensorFlow

1. Building the dataset

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
# Randomly generate data points
num_points = 500
vectors_set = []
for i in range(num_points):
    x1 = np.random.normal(0,0.5)  # sample x from a Gaussian
    y1 = x1 * 0.2 + 0.5 + np.random.normal(0,0.05) # linear function plus Gaussian noise
    vectors_set.append([x1,y1])
# Split into x and y coordinates
x_data = [x[0] for x in vectors_set]
y_data = [x[1] for x in vectors_set]
# Scatter plot to inspect the data distribution
plt.scatter(x_data,y_data,c='r')
plt.show()

[Figure: scatter plot of the 500 generated points]

Training the model

Compute the error and train the parameters with gradient descent.

# Initialize W and b
# W is a 1-D vector with uniform random values in [-1, 1)
W = tf.Variable(tf.random_uniform([1],-1,1),name='W')
# b starts at 0.01; keeping the offset inside the Variable makes b itself the trained parameter
b = tf.Variable(tf.zeros([1]) + 0.01, name='b')

# Compute the predicted y
y = W*x_data + b

# Mean squared error between predictions and the true values
loss = tf.reduce_mean(tf.square(y-y_data),name='loss')
# Optimize the parameters with gradient descent (learning rate 0.1)
optimizer = tf.train.GradientDescentOptimizer(0.1)
# The training step minimizes the loss
train = optimizer.minimize(loss,name="train")

# Initialize the variables
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    # Run 200 training steps, printing progress every 50 steps
    for step in range(200):
        sess.run(train)
        if step%50 == 0:
            print(step, "W=",sess.run(W),"b=",sess.run(b),"loss=",sess.run(loss))

0 W= [0.15018867] b= [0.10818303] loss= 0.15713301
50 W= [0.1915173] b= [0.5012479] loss= 0.0024183749
100 W= [0.19657716] b= [0.5013066] loss= 0.0024114968
150 W= [0.19714049] b= [0.5013125] loss= 0.002411412
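To see the fitted line against the data, the plotting below can be added inside the with tf.Session() block above, after the training loop finishes (a sketch reusing x_data and y_data):

    # still inside the session, after training:
    plt.scatter(x_data, y_data, c='r')
    plt.plot(x_data, sess.run(W) * np.array(x_data) + sess.run(b), c='b')  # learned line
    plt.show()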

MNIST

1. Downloading the dataset

# Load the dataset via the local input_data helper (contact the author for the file if needed);
# TF 1.x ships an equivalent module: from tensorflow.examples.tutorials.mnist import input_data
import input_data
print("packs loading")
print("Download and Extract MNIST dataset")
mnist = input_data.read_data_sets('data/',one_hot=True)
print("data sets:")
print("type of mnist is %s"%(type(mnist)))
print("number of train data is %d"%(mnist.train.num_examples))
print("number of test data is %d"%(mnist.test.num_examples))
packs loading
Download and Extract MNIST dataset
WARNING:tensorflow:From C:\Users\flinjin\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:260: maybe_download (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Please write your own downloading logic.
WARNING:tensorflow:From C:\Users\flinjin\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\base.py:252: _internal_retry.<locals>.wrap.<locals>.wrapped_fn (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Please use urllib or similar directly.
Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
WARNING:tensorflow:From C:\Users\flinjin\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:262: extract_images (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
Extracting data/train-images-idx3-ubyte.gz
Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
WARNING:tensorflow:From C:\Users\flinjin\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:267: extract_labels (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
Extracting data/train-labels-idx1-ubyte.gz
WARNING:tensorflow:From C:\Users\flinjin\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:110: dense_to_one_hot (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.one_hot on tensors.
Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Extracting data/t10k-images-idx3-ubyte.gz
Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting data/t10k-labels-idx1-ubyte.gz
WARNING:tensorflow:From C:\Users\flinjin\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:290: DataSet.__init__ (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.
data sets:
type of mnist is <class 'tensorflow.contrib.learn.python.learn.datasets.base.Datasets'>
number of train data is 55000
number of test data is 10000

Inspecting the MNIST dataset

Dataset shapes
train_img = mnist.train.images
train_label = mnist.train.labels
test_img = mnist.test.images
test_label = mnist.test.labels
print("shape of train_img is %s"%(train_img.shape,))
print("shape of train_img is %s"%(train_label.shape,))
shape of train_img is (55000, 784)
shape of train_img is (55000, 10)
What the training samples look like
sample_n = 5
rand_index = np.random.randint(train_img.shape[0],size=sample_n)

for i in rand_index:
    cur_img = np.reshape(train_img[i,:],(28,28))
    cur_label = np.argmax(train_label[i,:])  # one-hot label back to a digit
    plt.matshow(cur_img,cmap=plt.get_cmap('gray'))
    plt.title("label: %d" % cur_label)
    plt.show()

[Figure: five randomly chosen training digits displayed with matshow]

# Fetch data in mini-batches
batch_size = 100
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
print("type of batch_xs is %s"%(type(batch_xs)))
print("shape of batch_xs is %s"%(batch_xs.shape,))
type of batch_xs is <class 'numpy.ndarray'>
shape of batch_xs is (100, 784)

Prediction with logistic regression

x = tf.placeholder("float",[None,784])
y = tf.placeholder("float",[None,10])
W = tf.Variable(tf.zeros([784,10]) + 0.1) # 784 input pixels, 10 output classes; the 0.1 offset sits inside the Variable so it stays trainable (Gaussian init would be slightly better)
b = tf.Variable(tf.zeros([10]) + 0.1)
# logistic regression: softmax over the 10 class scores
actv = tf.nn.softmax(tf.matmul(x,W)+b)
# cross-entropy loss
cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(actv),reduction_indices=1))
# optimizer
learning_rate = 0.01
optm = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# prediction
pred = tf.equal(tf.argmax(actv,1),tf.argmax(y,1))
# accuracy
acc = tf.reduce_mean(tf.cast(pred,"float"))


# initializer
init = tf.global_variables_initializer()

training_epochs = 50 # passes over the training set
batch_size = 100 # samples per mini-batch
display_step = 5 # print progress every display_step epochs
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(training_epochs): # loop over epochs
        avg_cost = 0  # running average of the loss over this epoch
        num_batch = int(mnist.train.num_examples/batch_size) # mini-batches per epoch
        for i in range(num_batch): # loop over the mini-batches, accumulating the loss
            batch_xs, batch_ys = mnist.train.next_batch(batch_size) # fetch one mini-batch
            feeds = {x:batch_xs,y:batch_ys}
            sess.run(optm,feed_dict=feeds)
            avg_cost += sess.run(cost,feed_dict=feeds)/num_batch
        if epoch%display_step == 0:
            feeds_train = {x:batch_xs,y:batch_ys}
            feeds_test = {x:mnist.test.images,y:mnist.test.labels}
            train_acc = sess.run(acc,feed_dict=feeds_train)
            test_acc = sess.run(acc,feed_dict=feeds_test)
            print("Epoch: %03d/%03d cost:%.9f train_acc:%.3f test_acc:%.3f"%
                 (epoch,training_epochs,avg_cost,train_acc,test_acc))
# end
Epoch: 000/050 cost:1.176545924 train_acc:0.860 test_acc:0.852
Epoch: 005/050 cost:0.440682076 train_acc:0.960 test_acc:0.895
Epoch: 010/050 cost:0.383195362 train_acc:0.890 test_acc:0.904
Epoch: 015/050 cost:0.357568130 train_acc:0.900 test_acc:0.909
Epoch: 020/050 cost:0.341803956 train_acc:0.900 test_acc:0.912
Epoch: 025/050 cost:0.330745812 train_acc:0.930 test_acc:0.914
Epoch: 030/050 cost:0.322441776 train_acc:0.960 test_acc:0.915
Epoch: 035/050 cost:0.315869373 train_acc:0.930 test_acc:0.917
Epoch: 040/050 cost:0.310501200 train_acc:0.900 test_acc:0.918
Epoch: 045/050 cost:0.306271865 train_acc:0.890 test_acc:0.918
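To inspect individual predictions, the following can be run inside the session above before it closes; it compares the model's argmax against the true labels for a few test images (a sketch using the names already defined):

    # still inside the with tf.Session() block, after training:
    sample_imgs = mnist.test.images[:5]
    predicted = sess.run(tf.argmax(actv, 1), feed_dict={x: sample_imgs})
    actual = np.argmax(mnist.test.labels[:5], axis=1)
    print("predicted:", predicted, "actual:", actual)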

Trying a two-layer neural network for digit recognition

mnist = input_data.read_data_sets("data/",one_hot=True)  # load the dataset

# Network dimensions
n_hidden_1 = 256
n_hidden_2 = 128
n_input = 784
n_classes = 10
# inputs and outputs
x = tf.placeholder('float32',[None,n_input])
y = tf.placeholder('float32',[None,n_classes])
# Layer parameters
stddev = 0.1
# Use Gaussian initialization; all-zero initialization is generally avoided

weights = {
    'w1':tf.Variable(tf.random_normal([n_input,n_hidden_1],stddev=stddev)),
    'w2':tf.Variable(tf.random_normal([n_hidden_1,n_hidden_2],stddev=stddev)),
    'out':tf.Variable(tf.random_normal([n_hidden_2,n_classes],stddev=stddev))
}
biases = {      # biases also get Gaussian initialization
    'b1':tf.Variable(tf.random_normal([n_hidden_1])),
    'b2':tf.Variable(tf.random_normal([n_hidden_2])),
    'out':tf.Variable(tf.random_normal([n_classes]))
}
print("NETWORK READY OK!")



Extracting data/train-images-idx3-ubyte.gz
Extracting data/train-labels-idx1-ubyte.gz
Extracting data/t10k-images-idx3-ubyte.gz
Extracting data/t10k-labels-idx1-ubyte.gz
NETWORK READY OK!
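The multilayer_perceptron function used below is never shown in the original post; here is a minimal definition consistent with the weight and bias dictionaries above (the sigmoid activations are an assumption — any nonlinearity would do):

def multilayer_perceptron(_X, _weights, _biases):
    # two hidden layers with sigmoid activations, linear output layer
    layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(_X, _weights['w1']), _biases['b1']))
    layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, _weights['w2']), _biases['b2']))
    return tf.add(tf.matmul(layer_2, _weights['out']), _biases['out'])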
# Define the forward pass, loss, and training (backpropagation) step

# prediction
pred = multilayer_perceptron(x,weights,biases)

# loss and optimizer
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = pred,labels = y))  # softmax cross-entropy, averaged over the batch
optm = tf.train.GradientDescentOptimizer(learning_rate = 0.001).minimize(loss)
corr = tf.equal(tf.argmax(pred,1),tf.argmax(y,1))
acc = tf.reduce_mean(tf.cast(corr,'float')) # cast correctness to float and average to get accuracy

# initialization
init = tf.global_variables_initializer()
print("FUNCTIONS READY OK!")

# Start training
training_epochs = 50
batch_size = 200
display_step = 5
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples/batch_size)
        # start iteration
        for i in range(total_batch):
            batch_xs,batch_ys = mnist.train.next_batch(batch_size)
            feeds = {x:batch_xs,y:batch_ys}
            sess.run(optm,feed_dict=feeds)
            avg_cost += sess.run(loss,feed_dict=feeds)/total_batch
        # progress report every display_step epochs
        if epoch % display_step == 0:
            print("Epoch: %03d/%03d cost: %.9f "%(epoch,training_epochs,avg_cost))
            feeds = {x:batch_xs,y:batch_ys}
            train_acc = sess.run(acc,feed_dict = feeds)
            print("train accuracy:%.3f"%(train_acc))
            feeds = {x:mnist.test.images,y:mnist.test.labels}
            test_acc = sess.run(acc,feed_dict = feeds)
            print("test accuracy:%.3f"%(test_acc))
print("Optimization finished!!!")
FUNCTIONS READY OK!
Epoch: 000/050 cost: 663.242111444 
train accuracy:0.105
test accuracy:0.085
Epoch: 005/050 cost: 628.588456154 
train accuracy:0.150
test accuracy:0.136
Epoch: 010/050 cost: 622.894744396 
train accuracy:0.195
test accuracy:0.179
Epoch: 015/050 cost: 617.043195724 
train accuracy:0.210
test accuracy:0.278
Epoch: 020/050 cost: 610.888449907 
train accuracy:0.415
test accuracy:0.366
Epoch: 025/050 cost: 604.296731710 
train accuracy:0.460
test accuracy:0.420
Epoch: 030/050 cost: 597.135390282 
train accuracy:0.440
test accuracy:0.472
Epoch: 035/050 cost: 589.257553577 
train accuracy:0.560
test accuracy:0.514
Epoch: 040/050 cost: 580.517964363 
train accuracy:0.520
test accuracy:0.545
Epoch: 045/050 cost: 570.789351702 
train accuracy:0.575
test accuracy:0.571
Optimization finished!!!

Convolutional neural network (CNN)

Network layout: input n×784 (each image reshaped to 28×28×1)
=> conv1: 3×3×1 filters (64 filters) | pool1: 2×2 max
=> conv2: 3×3×64 filters (128 filters) | pool2: 2×2 max
=> fully connected:
fc1: 1024
fc2: 10

# Parameter initialization
n_input = 784
n_output = 10
weights = {
    'wc1':tf.Variable(tf.random_normal([3,3,1,64],stddev = 0.1)), # 64 filters of size 3*3*1: width * height * input depth * number of output feature maps
    'wc2':tf.Variable(tf.random_normal([3,3,64,128],stddev = 0.1)),
    'wd1':tf.Variable(tf.random_normal([7*7*128,1024],stddev = 0.1)), # first fully connected layer: two 2x2 poolings shrink 28 -> 14 -> 7, hence 7*7*128 inputs; the 1024 output size is a free choice
    'wd2':tf.Variable(tf.random_normal([1024,n_output],stddev = 0.1))
}
biases = {
    'bc1':tf.Variable(tf.random_normal([64],stddev = 0.1)), # one bias per filter
    'bc2':tf.Variable(tf.random_normal([128],stddev = 0.1)),
    'bd1':tf.Variable(tf.random_normal([1024],stddev = 0.1)),
    'bd2':tf.Variable(tf.random_normal([n_output],stddev = 0.1))
}
# tf data layout [n, h, w, d]: n samples, image height h, width w, depth d
def conv_basic(_input,_w,_b,_keepratio):
    # reshape the flat input back to image layout
    _input_r = tf.reshape(_input,shape=[-1,28,28,1])
    # first convolutional layer
    _conv1 = tf.nn.conv2d(_input_r,_w['wc1'],strides=[1,1,1,1],padding='SAME')  # convolution
    _conv1 = tf.nn.relu(tf.nn.bias_add(_conv1,_b['bc1'])) # bias + ReLU activation
    _pool1 = tf.nn.max_pool(_conv1,ksize=[1,2,2,1],strides=[1,2,2,1],padding='SAME')
    _pool1_drop1 = tf.nn.dropout(_pool1,_keepratio)  # dropout

    # second convolutional layer
    _conv2 = tf.nn.conv2d(_pool1_drop1,_w['wc2'],strides=[1,1,1,1],padding='SAME')  # convolution
    _conv2 = tf.nn.relu(tf.nn.bias_add(_conv2,_b['bc2'])) # bias + ReLU activation
    _pool2 = tf.nn.max_pool(_conv2,ksize=[1,2,2,1],strides=[1,2,2,1],padding='SAME')
    _pool2_drop2 = tf.nn.dropout(_pool2,_keepratio)  # dropout

    # flatten before the fully connected layers
    _dense1 = tf.reshape(_pool2_drop2,[-1,_w['wd1'].get_shape().as_list()[0]]) # get_shape turns the 7*7*128 feature-map size into a Python list
    
    # first fully connected layer
    _fc1 = tf.nn.relu(tf.add(tf.matmul(_dense1,_w['wd1']),_b['bd1']))
    _fc_drop1 = tf.nn.dropout(_fc1,_keepratio)
    
    # second fully connected layer (the class scores)
    _out = tf.add(tf.matmul(_fc_drop1,_w['wd2']),_b['bd2'])
    
    out = {
        'input_r':_input_r,'conv1':_conv1,'pool1':_pool1,'pool1_dr1':_pool1_drop1,
        'conv2':_conv2,'pool2':_pool2,'pool_dr2':_pool2_drop2,'dense1':_dense1,
        'fc1':_fc1,'fc_dr1':_fc_drop1,'out':_out
    }
    return out
print("CNN is OK!!!")
    
CNN is OK!!!
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_output])
keepratio = tf.placeholder(tf.float32)
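Because conv_basic returns every intermediate tensor, the layer shapes can be sanity-checked before wiring up the loss. A quick sketch (the ? is the unknown batch dimension):

layers = conv_basic(x, weights, biases, keepratio)
print(layers['conv1'].get_shape())   # (?, 28, 28, 64) -- SAME padding, stride 1
print(layers['pool1'].get_shape())   # (?, 14, 14, 64) -- 2x2 max pooling halves h and w
print(layers['pool2'].get_shape())   # (?, 7, 7, 128)
print(layers['dense1'].get_shape())  # (?, 6272) = 7*7*128 flattened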

# FUNCTIONS

_pred = conv_basic(x, weights, biases, keepratio)['out']
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = _pred,labels = y))
optm = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)
_corr = tf.equal(tf.argmax(_pred,1), tf.argmax(y,1)) 
accr = tf.reduce_mean(tf.cast(_corr, tf.float32)) 
init = tf.global_variables_initializer()
    
# SAVER
save_step = 1 # save a checkpoint every save_step epochs
saver = tf.train.Saver(max_to_keep = 3) # keep only the 3 most recent checkpoints
print ("GRAPH READY")

do_train = 1 # 1 = train; 0 = skip training and load a saved model instead
sess = tf.Session()
sess.run(init)
if do_train == 1:
    training_epochs = 50
    batch_size      = 32
    display_step    = 1
    for epoch in range(training_epochs):
        avg_cost = 0.
        #total_batch = int(mnist.train.num_examples/batch_size)
        total_batch = 10 # only 10 mini-batches per epoch, to keep the demo fast
        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(optm, feed_dict={x: batch_xs, y: batch_ys, keepratio:0.7})
            avg_cost += sess.run(cost, feed_dict={x: batch_xs, y: batch_ys, keepratio:1.})/total_batch
        if epoch % display_step == 0: 
            print ("Epoch: %03d/%03d cost: %.9f" % (epoch, training_epochs, avg_cost))
            train_acc = sess.run(accr, feed_dict={x: batch_xs, y: batch_ys, keepratio:1.})
            print (" Training accuracy: %.3f" % (train_acc))
        # save cnn 
        if epoch % save_step == 0:
            saver.save(sess,"save/nets/cnn_mnist_basic.ckpt-"+str(epoch))
print ("OPTIMIZATION FINISHED")
GRAPH READY
Epoch: 000/050 cost: 6.110828209
 Training accuracy: 0.438
Epoch: 001/050 cost: 2.037242019
 Training accuracy: 0.500
Epoch: 002/050 cost: 1.517006278
 Training accuracy: 0.594
Epoch: 003/050 cost: 1.262498641
 Training accuracy: 0.625
Epoch: 004/050 cost: 1.255524397
 Training accuracy: 0.688
Epoch: 005/050 cost: 0.985038066
 Training accuracy: 0.750
Epoch: 006/050 cost: 0.915997547
 Training accuracy: 0.812
Epoch: 007/050 cost: 0.737407154
 Training accuracy: 0.969
Epoch: 008/050 cost: 0.545771962
 Training accuracy: 0.906
Epoch: 009/050 cost: 0.554589051
 Training accuracy: 0.969
Epoch: 010/050 cost: 0.533016756
 Training accuracy: 0.938
Epoch: 011/050 cost: 0.419089320
 Training accuracy: 0.969
Epoch: 012/050 cost: 0.399274749
 Training accuracy: 0.844
Epoch: 013/050 cost: 0.459690556
 Training accuracy: 0.875
Epoch: 014/050 cost: 0.378500494
 Training accuracy: 0.906
Epoch: 015/050 cost: 0.394560142
 Training accuracy: 0.969
Epoch: 016/050 cost: 0.355333161
 Training accuracy: 1.000
Epoch: 017/050 cost: 0.315040293
 Training accuracy: 0.938
Epoch: 018/050 cost: 0.356254619
 Training accuracy: 0.969
Epoch: 019/050 cost: 0.256400244
 Training accuracy: 0.938
Epoch: 020/050 cost: 0.271380702
 Training accuracy: 0.938
Epoch: 021/050 cost: 0.260816705
 Training accuracy: 0.938
Epoch: 022/050 cost: 0.208365175
 Training accuracy: 0.969
Epoch: 023/050 cost: 0.260547988
 Training accuracy: 0.906
Epoch: 024/050 cost: 0.326942275
 Training accuracy: 0.906
Epoch: 025/050 cost: 0.249336516
 Training accuracy: 0.906
Epoch: 026/050 cost: 0.209552631
 Training accuracy: 0.906
Epoch: 027/050 cost: 0.195930877
 Training accuracy: 1.000
Epoch: 028/050 cost: 0.258798248
 Training accuracy: 0.875
Epoch: 029/050 cost: 0.199039068
 Training accuracy: 0.875
Epoch: 030/050 cost: 0.177726378
 Training accuracy: 1.000
Epoch: 031/050 cost: 0.202811215
 Training accuracy: 0.875
Epoch: 032/050 cost: 0.217131871
 Training accuracy: 0.969
Epoch: 033/050 cost: 0.159496994
 Training accuracy: 0.969
Epoch: 034/050 cost: 0.137549455
 Training accuracy: 0.906
Epoch: 035/050 cost: 0.143832839
 Training accuracy: 1.000
Epoch: 036/050 cost: 0.157423608
 Training accuracy: 1.000
Epoch: 037/050 cost: 0.158288965
 Training accuracy: 0.938
Epoch: 038/050 cost: 0.153203769
 Training accuracy: 1.000
Epoch: 039/050 cost: 0.178845123
 Training accuracy: 0.906
Epoch: 040/050 cost: 0.194754796
 Training accuracy: 0.969
Epoch: 041/050 cost: 0.176875217
 Training accuracy: 0.938
Epoch: 042/050 cost: 0.183280732
 Training accuracy: 0.969
Epoch: 043/050 cost: 0.149991347
 Training accuracy: 0.938
Epoch: 044/050 cost: 0.126169144
 Training accuracy: 0.938
Epoch: 045/050 cost: 0.105850566
 Training accuracy: 1.000
Epoch: 046/050 cost: 0.137898786
 Training accuracy: 1.000
Epoch: 047/050 cost: 0.137429339
 Training accuracy: 1.000
Epoch: 048/050 cost: 0.186689476
 Training accuracy: 0.969
Epoch: 049/050 cost: 0.159028540
 Training accuracy: 1.000
OPTIMIZATION FINISHED

Saving and restoring models

# Save
import tensorflow as tf
v1 = tf.Variable(tf.random_normal([5,6]),name = 'v1')
v2 = tf.Variable(tf.random_normal([6,5]),name = 'v2')
init = tf.global_variables_initializer()
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(init)
    print("v1:",sess.run(v1))
    print("v2",sess.run(v2))
    saver_path = saver.save(sess,"save/model.ckpt")
print("save ok!!!")
v1: [[-2.0177336  -1.1136355   1.024454   -0.2815749   0.01154548 -1.1226406 ]
 [ 1.006698    0.76033294 -0.33947533  1.551445   -1.0889654  -0.73033535]
 [-0.32807937 -0.24910568 -0.6470669  -0.81011033 -0.07812756  1.4114991 ]
 [ 0.9636228  -0.13516371 -1.3669498   0.50787073  0.12294756 -0.36570865]
 [-0.5258639  -1.0277528  -0.8557054   1.7967008   1.2368261  -0.30665618]]
v2 [[ 0.7124849   0.7714897  -1.071532    1.4341043   0.29311004]
 [ 1.432193   -1.1275465   1.2273195   0.01674473 -0.7920494 ]
 [-1.3038324  -0.7959307   0.25624877  1.97907     1.2644556 ]
 [-1.2425808  -0.4206337   0.32848006 -0.41688338  1.5360992 ]
 [ 0.29132897 -0.3600611  -0.04206673  0.01056795  1.6042805 ]
 [ 1.7399644  -0.30788726  1.9578744   0.68094605 -0.24684483]]
save ok!!!
# Restore
import tensorflow as tf
# v1 = tf.Variable(tf.random_normal([5,6]),name = 'v1')
# v2 = tf.Variable(tf.random_normal([6,5]),name = 'v2')
saver = tf.train.Saver()
# tf.reset_default_graph()
with tf.Session() as sess:
    saver.restore(sess,"save/model.ckpt")
    print("v1:",sess.run(v1))
INFO:tensorflow:Restoring parameters from save/model.ckpt
v1: [[-2.0177336  -1.1136355   1.024454   -0.2815749   0.01154548 -1.1226406 ]
 [ 1.006698    0.76033294 -0.33947533  1.551445   -1.0889654  -0.73033535]
 [-0.32807937 -0.24910568 -0.6470669  -0.81011033 -0.07812756  1.4114991 ]
 [ 0.9636228  -0.13516371 -1.3669498   0.50787073  0.12294756 -0.36570865]
 [-0.5258639  -1.0277528  -0.8557054   1.7967008   1.2368261  -0.30665618]]
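Note that this restore works only because v1 and v2 still exist in the default graph from the save step. In a fresh program, the variables must be re-declared with the same names and shapes (or imported via tf.train.import_meta_graph) before the Saver is built — a sketch:

tf.reset_default_graph()
v1 = tf.Variable(tf.zeros([5,6]), name='v1')  # same name and shape as at save time
v2 = tf.Variable(tf.zeros([6,5]), name='v2')
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, "save/model.ckpt")  # restore() also initializes the variables
    print("v1:", sess.run(v1))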
do_train = 0
# tf.reset_default_graph()
if do_train == 0:
    epoch = training_epochs - 1
    saver.restore(sess,"save/nets/cnn_mnist_basic.ckpt-"+str(epoch))
    test_acc = sess.run(accr, feed_dict={x: test_img, y: test_label, keepratio:1.})
    print("test acc: %s"%(test_acc))
INFO:tensorflow:Restoring parameters from save/nets/cnn_mnist_basic.ckpt-49
test acc: 0.9627

Source: https://blog.csdn.net/flinjin/article/details/104904594