[Neural Network Study Notes] Convolutional Neural Networks: Implementing the Week 1 Assignment of Course 4 of Andrew Ng's Deep Learning Course with TensorFlow 2.x Keras


Implementing the Course 4, Week 1 assignment of Andrew Ng's deep learning course with TensorFlow 2.x Keras


This post is based on 和宽's article (linked at the end), but his version uses TensorFlow 1, which feels somewhat dated today, and I could not find a TF2 implementation anywhere online, so I worked one out myself. I started writing it in pure TF2, got stuck halfway with no way forward, and only after a lot of searching did I find a solution with Keras, so I went ahead and wrote the whole thing in Keras.

1. Preparation

1.1 Importing packages

import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import tensorflow as tf            # TF2; the compat.v1 / ops imports needed by the TF1 version are no longer required
from tensorflow import keras
import cnn_utils                   # helper functions (load_dataset, convert_to_one_hot) from the course assignment
import time

1.2 Loading and preprocessing the data

X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = cnn_utils.load_dataset()
X_train = X_train_orig/255.                                   # scale pixel values to [0, 1]
X_test = X_test_orig/255.
Y_train = cnn_utils.convert_to_one_hot(Y_train_orig, 6).T     # one-hot encode the 6 classes, shape (m, 6)
Y_test = cnn_utils.convert_to_one_hot(Y_test_orig, 6).T
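
For reference, cnn_utils is the small helper module that ships with the original assignment. If you do not have it, the two functions used above look roughly like this (a minimal sketch assuming the standard datasets/train_signs.h5 and datasets/test_signs.h5 files from the course; the HDF5 key names are those of the usual assignment files and may differ in your copy):

import h5py
import numpy as np

def load_dataset():
    # Read the SIGNS dataset shipped with the course assignment.
    train = h5py.File('datasets/train_signs.h5', 'r')
    test = h5py.File('datasets/test_signs.h5', 'r')
    X_train = np.array(train['train_set_x'][:])                 # (1080, 64, 64, 3)
    Y_train = np.array(train['train_set_y'][:]).reshape(1, -1)  # (1, 1080)
    X_test = np.array(test['test_set_x'][:])                    # (120, 64, 64, 3)
    Y_test = np.array(test['test_set_y'][:]).reshape(1, -1)     # (1, 120)
    classes = np.array(test['list_classes'][:])                 # the 6 class labels
    return X_train, Y_train, X_test, Y_test, classes

def convert_to_one_hot(Y, C):
    # Turn a (1, m) label vector into a (C, m) one-hot matrix.
    return np.eye(C)[Y.reshape(-1)].T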

Let's take one of the images and have a look:

index = 6
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
plt.show()
y = 2

[Figure: training image at index 6, a hand sign with label y = 2]

Then check the shapes of the preprocessed data:

print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
number of training examples = 1080
number of test examples = 120
X_train shape: (1080, 64, 64, 3)
Y_train shape: (1080, 6)
X_test shape: (120, 64, 64, 3)
Y_test shape: (120, 6)

OK, everything looks fine.

2. Building the model

Building a model with Keras is very straightforward.

def model2(X_train, Y_train, X_test, Y_test, learning_rate=0.009,
           num_epochs=100, minibatch_size=64, print_cost=True, isPlot=True):
    # learning_rate, print_cost and isPlot only mirror the signature of the original TF1
    # assignment; they are not used here (the string 'Adam' below means the Keras
    # default learning rate of 0.001 is applied).
    model = keras.models.Sequential()
    model.add(keras.layers.Conv2D(filters=32, kernel_size=3, padding='same',
                                  activation='relu', input_shape=(64, 64, 3)))
    model.add(keras.layers.MaxPool2D(pool_size=2, padding='same'))
    model.add(keras.layers.Conv2D(filters=64, kernel_size=3, padding='same', activation='relu'))
    model.add(keras.layers.MaxPool2D(pool_size=2, padding='same'))

    model.add(keras.layers.Flatten())

    model.add(keras.layers.Dense(6, activation='softmax'))

    # Note: MSE on the softmax output works for this small task, but categorical
    # cross-entropy is the usual choice for a softmax classifier.
    model.compile(optimizer='Adam',
                  loss='mse',
                  metrics=['categorical_accuracy'])
    model.summary()
    history = model.fit(x=X_train, y=Y_train, batch_size=minibatch_size, epochs=num_epochs)
    score = model.evaluate(x=X_test, y=Y_test)
    model.save('model.h5')
    return score, history

This may look unfamiliar, so here is a quick walkthrough:

  1. First, create a model with keras.models.Sequential().
  2. model.add() appends whatever layers and operations you need to the model, for example:
    • keras.layers.Conv2D() adds a convolutional layer; its parameters should be familiar if you have studied CNNs.
    • keras.layers.MaxPool2D() adds a max-pooling layer.
    • keras.layers.Flatten() flattens the multi-channel feature maps into a single vector.
    • keras.layers.Dense() adds a fully connected layer after the convolutions; from here on it is an ordinary neural network.
  3. model.compile() compiles the model; here you pick the optimizer, the loss function and so on. Because the framework builds the computation graph and handles backpropagation for us, choosing an optimizer is essentially all that is required (an alternative compile configuration is sketched right after this list).
  4. model.summary() prints the model architecture.
  5. model.fit() trains the model on the training set.
  6. model.evaluate() evaluates on the test set and returns the loss and accuracy.
  7. model.save() saves the trained model for later prediction.
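
As noted in step 3, the compile call in model2 uses MSE on the softmax output simply because that is what I got working first; the cost in the original assignment is softmax cross-entropy. If you want to match that, the compile step could be replaced by something like the following sketch (this is not what produced the training log below, and the 0.009 learning rate is just the value from the function signature; `model` is the Sequential model built inside model2):

from tensorflow import keras

# Alternative compile configuration, closer to the original assignment's cost function.
# Assumes `model` is the Sequential model defined in model2 above.
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.009),
              loss='categorical_crossentropy',
              metrics=['categorical_accuracy'])

With cross-entropy the printed loss values are on a different scale from the MSE numbers in the log below. Now train the model and plot the loss curve: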
start_time = time.perf_counter()
score, history = model2(X_train, Y_train, X_test, Y_test, num_epochs=150)
end_time = time.perf_counter()
print("CPU execution time = " + str(end_time - start_time) + " seconds")
print('Test loss:', str(score[0]*100)[:4]+'%')
print('Test accuracy:', str(score[1]*100)[:4]+'%')
plt.plot(np.squeeze(history.history['loss']))   # one loss value per epoch
plt.ylabel('loss')
plt.xlabel('epoch')
plt.title("Learning rate =" + str(0.009))       # nominal; the string 'Adam' actually uses the default 0.001
plt.show()
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_2 (Conv2D)            (None, 64, 64, 32)        896       
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 32, 32, 32)        0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 32, 32, 64)        18496     
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 16, 16, 64)        0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 16384)             0         
_________________________________________________________________
dense_1 (Dense)              (None, 6)                 98310     
=================================================================
Total params: 117,702
Trainable params: 117,702
Non-trainable params: 0
_________________________________________________________________
Epoch 1/150
17/17 [==============================] - 2s 125ms/step - loss: 0.1434 - categorical_accuracy: 0.1870
Epoch 2/150
17/17 [==============================] - 2s 121ms/step - loss: 0.1372 - categorical_accuracy: 0.2398
Epoch 3/150
17/17 [==============================] - 2s 120ms/step - loss: 0.1284 - categorical_accuracy: 0.4278
Epoch 4/150
17/17 [==============================] - 2s 120ms/step - loss: 0.1068 - categorical_accuracy: 0.5463
Epoch 5/150
17/17 [==============================] - 2s 127ms/step - loss: 0.0819 - categorical_accuracy: 0.6796
Epoch 6/150
17/17 [==============================] - 2s 124ms/step - loss: 0.0656 - categorical_accuracy: 0.7491
Epoch 7/150
17/17 [==============================] - 2s 125ms/step - loss: 0.0566 - categorical_accuracy: 0.7824
Epoch 8/150
17/17 [==============================] - 2s 129ms/step - loss: 0.0451 - categorical_accuracy: 0.8426
Epoch 9/150
17/17 [==============================] - 2s 122ms/step - loss: 0.0391 - categorical_accuracy: 0.8620
Epoch 10/150
17/17 [==============================] - 2s 121ms/step - loss: 0.0307 - categorical_accuracy: 0.8991
Epoch 11/150
17/17 [==============================] - 2s 121ms/step - loss: 0.0248 - categorical_accuracy: 0.9269
Epoch 12/150
17/17 [==============================] - 2s 123ms/step - loss: 0.0201 - categorical_accuracy: 0.9361
Epoch 13/150
17/17 [==============================] - 2s 125ms/step - loss: 0.0188 - categorical_accuracy: 0.9361
Epoch 14/150
17/17 [==============================] - 2s 121ms/step - loss: 0.0166 - categorical_accuracy: 0.9463
Epoch 15/150
17/17 [==============================] - 2s 123ms/step - loss: 0.0172 - categorical_accuracy: 0.9426
Epoch 16/150
17/17 [==============================] - 2s 120ms/step - loss: 0.0119 - categorical_accuracy: 0.9685
Epoch 17/150
17/17 [==============================] - 2s 121ms/step - loss: 0.0105 - categorical_accuracy: 0.9676
Epoch 18/150
17/17 [==============================] - 2s 121ms/step - loss: 0.0089 - categorical_accuracy: 0.9722
Epoch 19/150
17/17 [==============================] - 2s 127ms/step - loss: 0.0070 - categorical_accuracy: 0.9787
Epoch 20/150
17/17 [==============================] - 2s 124ms/step - loss: 0.0067 - categorical_accuracy: 0.9833
Epoch 21/150
17/17 [==============================] - 2s 119ms/step - loss: 0.0057 - categorical_accuracy: 0.9843
Epoch 22/150
17/17 [==============================] - 2s 123ms/step - loss: 0.0048 - categorical_accuracy: 0.9889
Epoch 23/150
17/17 [==============================] - 2s 121ms/step - loss: 0.0041 - categorical_accuracy: 0.9889
Epoch 24/150
17/17 [==============================] - 2s 121ms/step - loss: 0.0035 - categorical_accuracy: 0.9907
Epoch 25/150
17/17 [==============================] - 2s 125ms/step - loss: 0.0030 - categorical_accuracy: 0.9926
Epoch 26/150
17/17 [==============================] - 2s 120ms/step - loss: 0.0029 - categorical_accuracy: 0.9926
Epoch 27/150
17/17 [==============================] - 2s 126ms/step - loss: 0.0028 - categorical_accuracy: 0.9926
Epoch 28/150
17/17 [==============================] - 2s 129ms/step - loss: 0.0028 - categorical_accuracy: 0.9926
Epoch 29/150
17/17 [==============================] - 2s 125ms/step - loss: 0.0021 - categorical_accuracy: 0.9944
Epoch 30/150
17/17 [==============================] - 2s 128ms/step - loss: 0.0020 - categorical_accuracy: 0.9944
Epoch 31/150
17/17 [==============================] - 2s 126ms/step - loss: 0.0019 - categorical_accuracy: 0.9944
Epoch 32/150
17/17 [==============================] - 2s 123ms/step - loss: 0.0017 - categorical_accuracy: 0.9944
Epoch 33/150
17/17 [==============================] - 2s 122ms/step - loss: 0.0017 - categorical_accuracy: 0.9944
Epoch 34/150
17/17 [==============================] - 2s 120ms/step - loss: 0.0017 - categorical_accuracy: 0.9944
Epoch 35/150
17/17 [==============================] - 2s 124ms/step - loss: 0.0017 - categorical_accuracy: 0.9944
Epoch 36/150
17/17 [==============================] - 2s 121ms/step - loss: 0.0017 - categorical_accuracy: 0.9944
Epoch 37/150
17/17 [==============================] - 2s 128ms/step - loss: 0.0016 - categorical_accuracy: 0.9954
Epoch 38/150
17/17 [==============================] - 2s 122ms/step - loss: 0.0014 - categorical_accuracy: 0.9954
Epoch 39/150
17/17 [==============================] - 2s 121ms/step - loss: 0.0014 - categorical_accuracy: 0.9954
Epoch 40/150
17/17 [==============================] - 2s 127ms/step - loss: 0.0013 - categorical_accuracy: 0.9954
Epoch 41/150
17/17 [==============================] - 2s 123ms/step - loss: 0.0014 - categorical_accuracy: 0.9954
Epoch 42/150
17/17 [==============================] - 2s 125ms/step - loss: 0.0013 - categorical_accuracy: 0.9954
Epoch 43/150
17/17 [==============================] - 2s 119ms/step - loss: 0.0013 - categorical_accuracy: 0.9954
Epoch 44/150
17/17 [==============================] - 2s 119ms/step - loss: 0.0013 - categorical_accuracy: 0.9954
Epoch 45/150
17/17 [==============================] - 2s 122ms/step - loss: 0.0013 - categorical_accuracy: 0.9954
Epoch 46/150
17/17 [==============================] - 2s 120ms/step - loss: 0.0013 - categorical_accuracy: 0.9954
Epoch 47/150
17/17 [==============================] - 2s 122ms/step - loss: 0.0013 - categorical_accuracy: 0.9954
Epoch 48/150
17/17 [==============================] - 2s 120ms/step - loss: 0.0012 - categorical_accuracy: 0.9954
Epoch 49/150
17/17 [==============================] - 2s 120ms/step - loss: 0.0012 - categorical_accuracy: 0.9954
Epoch 50/150
17/17 [==============================] - 2s 125ms/step - loss: 0.0012 - categorical_accuracy: 0.9954
Epoch 51/150
17/17 [==============================] - 2s 120ms/step - loss: 0.0012 - categorical_accuracy: 0.9954
Epoch 52/150
17/17 [==============================] - 2s 122ms/step - loss: 0.0012 - categorical_accuracy: 0.9954
Epoch 53/150
17/17 [==============================] - 2s 121ms/step - loss: 0.0012 - categorical_accuracy: 0.9954
Epoch 54/150
17/17 [==============================] - 2s 120ms/step - loss: 0.0012 - categorical_accuracy: 0.9954
Epoch 55/150
17/17 [==============================] - 2s 122ms/step - loss: 0.0013 - categorical_accuracy: 0.9954
Epoch 56/150
17/17 [==============================] - 2s 122ms/step - loss: 0.0013 - categorical_accuracy: 0.9954
Epoch 57/150
17/17 [==============================] - 2s 129ms/step - loss: 0.0012 - categorical_accuracy: 0.9954
Epoch 58/150
17/17 [==============================] - 2s 123ms/step - loss: 0.0012 - categorical_accuracy: 0.9954
Epoch 59/150
17/17 [==============================] - 2s 121ms/step - loss: 0.0013 - categorical_accuracy: 0.9954
Epoch 60/150
17/17 [==============================] - 2s 127ms/step - loss: 0.0012 - categorical_accuracy: 0.9954
Epoch 61/150
17/17 [==============================] - 2s 122ms/step - loss: 0.0013 - categorical_accuracy: 0.9954
Epoch 62/150
17/17 [==============================] - 2s 124ms/step - loss: 0.0014 - categorical_accuracy: 0.9954
Epoch 63/150
17/17 [==============================] - 2s 126ms/step - loss: 0.0015 - categorical_accuracy: 0.9954
Epoch 64/150
17/17 [==============================] - 2s 122ms/step - loss: 0.0013 - categorical_accuracy: 0.9954
Epoch 65/150
17/17 [==============================] - 2s 121ms/step - loss: 0.0012 - categorical_accuracy: 0.9954
Epoch 66/150
17/17 [==============================] - 2s 120ms/step - loss: 0.0013 - categorical_accuracy: 0.9954
Epoch 67/150
17/17 [==============================] - 2s 121ms/step - loss: 0.0012 - categorical_accuracy: 0.9954
Epoch 68/150
17/17 [==============================] - 2s 120ms/step - loss: 0.0012 - categorical_accuracy: 0.9954
Epoch 69/150
17/17 [==============================] - 2s 120ms/step - loss: 0.0012 - categorical_accuracy: 0.9954
Epoch 70/150
17/17 [==============================] - 2s 121ms/step - loss: 0.0012 - categorical_accuracy: 0.9954
Epoch 71/150
17/17 [==============================] - 2s 121ms/step - loss: 0.0012 - categorical_accuracy: 0.9954
Epoch 72/150
17/17 [==============================] - 2s 123ms/step - loss: 0.0012 - categorical_accuracy: 0.9954
Epoch 73/150
17/17 [==============================] - 2s 124ms/step - loss: 0.0013 - categorical_accuracy: 0.9954
Epoch 74/150
17/17 [==============================] - 2s 125ms/step - loss: 0.0012 - categorical_accuracy: 0.9954
Epoch 75/150
17/17 [==============================] - 2s 121ms/step - loss: 0.0012 - categorical_accuracy: 0.9954
Epoch 76/150
17/17 [==============================] - 2s 117ms/step - loss: 0.0012 - categorical_accuracy: 0.9954
Epoch 77/150
17/17 [==============================] - 2s 120ms/step - loss: 0.0011 - categorical_accuracy: 0.9963
Epoch 78/150
17/17 [==============================] - 2s 119ms/step - loss: 0.0012 - categorical_accuracy: 0.9963
Epoch 79/150
17/17 [==============================] - 2s 120ms/step - loss: 0.0011 - categorical_accuracy: 0.9963
Epoch 80/150
17/17 [==============================] - 2s 123ms/step - loss: 0.0015 - categorical_accuracy: 0.9963
Epoch 81/150
17/17 [==============================] - 2s 120ms/step - loss: 0.0011 - categorical_accuracy: 0.9963
Epoch 82/150
17/17 [==============================] - 2s 122ms/step - loss: 0.0011 - categorical_accuracy: 0.9963
Epoch 83/150
17/17 [==============================] - 2s 123ms/step - loss: 0.0015 - categorical_accuracy: 0.9954
Epoch 84/150
17/17 [==============================] - 2s 124ms/step - loss: 0.0013 - categorical_accuracy: 0.9963
Epoch 85/150
17/17 [==============================] - 2s 120ms/step - loss: 0.0012 - categorical_accuracy: 0.9963
Epoch 86/150
17/17 [==============================] - 2s 123ms/step - loss: 9.4489e-04 - categorical_accuracy: 0.9963
Epoch 87/150
17/17 [==============================] - 2s 125ms/step - loss: 9.6031e-04 - categorical_accuracy: 0.9963
Epoch 88/150
17/17 [==============================] - 2s 122ms/step - loss: 0.0010 - categorical_accuracy: 0.9963
Epoch 89/150
17/17 [==============================] - 2s 125ms/step - loss: 9.8840e-04 - categorical_accuracy: 0.9963
Epoch 90/150
17/17 [==============================] - 2s 126ms/step - loss: 0.0010 - categorical_accuracy: 0.9963
Epoch 91/150
17/17 [==============================] - 2s 119ms/step - loss: 0.0032 - categorical_accuracy: 0.9880
Epoch 92/150
17/17 [==============================] - 2s 122ms/step - loss: 0.0043 - categorical_accuracy: 0.9843
Epoch 93/150
17/17 [==============================] - 2s 119ms/step - loss: 0.0120 - categorical_accuracy: 0.9574
Epoch 94/150
17/17 [==============================] - 2s 119ms/step - loss: 0.0060 - categorical_accuracy: 0.9806
Epoch 95/150
17/17 [==============================] - 2s 123ms/step - loss: 0.0023 - categorical_accuracy: 0.9944
Epoch 96/150
17/17 [==============================] - 2s 119ms/step - loss: 0.0016 - categorical_accuracy: 0.9954
Epoch 97/150
17/17 [==============================] - 2s 120ms/step - loss: 0.0011 - categorical_accuracy: 0.9963
Epoch 98/150
17/17 [==============================] - 2s 120ms/step - loss: 9.4232e-04 - categorical_accuracy: 0.9972
Epoch 99/150
17/17 [==============================] - 2s 118ms/step - loss: 9.9511e-04 - categorical_accuracy: 0.9963
Epoch 100/150
17/17 [==============================] - 2s 125ms/step - loss: 7.8027e-04 - categorical_accuracy: 0.9972
Epoch 101/150
17/17 [==============================] - 2s 120ms/step - loss: 7.4085e-04 - categorical_accuracy: 0.9972
Epoch 102/150
17/17 [==============================] - 2s 122ms/step - loss: 8.0569e-04 - categorical_accuracy: 0.9972
Epoch 103/150
17/17 [==============================] - 2s 127ms/step - loss: 7.9701e-04 - categorical_accuracy: 0.9972
Epoch 104/150
17/17 [==============================] - 2s 119ms/step - loss: 7.1822e-04 - categorical_accuracy: 0.9972
Epoch 105/150
17/17 [==============================] - 2s 126ms/step - loss: 7.4790e-04 - categorical_accuracy: 0.9972
Epoch 106/150
17/17 [==============================] - 2s 123ms/step - loss: 7.7353e-04 - categorical_accuracy: 0.9972
Epoch 107/150
17/17 [==============================] - 2s 126ms/step - loss: 8.1939e-04 - categorical_accuracy: 0.9972
Epoch 108/150
17/17 [==============================] - 2s 123ms/step - loss: 8.1882e-04 - categorical_accuracy: 0.9972
Epoch 109/150
17/17 [==============================] - 2s 120ms/step - loss: 8.3484e-04 - categorical_accuracy: 0.9972
Epoch 110/150
17/17 [==============================] - 2s 127ms/step - loss: 7.1755e-04 - categorical_accuracy: 0.9972
Epoch 111/150
17/17 [==============================] - 2s 118ms/step - loss: 6.5136e-04 - categorical_accuracy: 0.9972
Epoch 112/150
17/17 [==============================] - 2s 128ms/step - loss: 6.5049e-04 - categorical_accuracy: 0.9972
Epoch 113/150
17/17 [==============================] - 2s 126ms/step - loss: 6.5342e-04 - categorical_accuracy: 0.9972
Epoch 114/150
17/17 [==============================] - 2s 118ms/step - loss: 7.0961e-04 - categorical_accuracy: 0.9972
Epoch 115/150
17/17 [==============================] - 2s 122ms/step - loss: 6.4897e-04 - categorical_accuracy: 0.9972
Epoch 116/150
17/17 [==============================] - 2s 119ms/step - loss: 7.8690e-04 - categorical_accuracy: 0.9972
Epoch 117/150
17/17 [==============================] - 2s 120ms/step - loss: 7.1175e-04 - categorical_accuracy: 0.9972
Epoch 118/150
17/17 [==============================] - 2s 121ms/step - loss: 6.4902e-04 - categorical_accuracy: 0.9972
Epoch 119/150
17/17 [==============================] - 2s 117ms/step - loss: 7.1842e-04 - categorical_accuracy: 0.9972
Epoch 120/150
17/17 [==============================] - 2s 122ms/step - loss: 7.9373e-04 - categorical_accuracy: 0.9972
Epoch 121/150
17/17 [==============================] - 2s 118ms/step - loss: 7.6994e-04 - categorical_accuracy: 0.9972
Epoch 122/150
17/17 [==============================] - 2s 120ms/step - loss: 7.1956e-04 - categorical_accuracy: 0.9972
Epoch 123/150
17/17 [==============================] - 2s 121ms/step - loss: 7.7523e-04 - categorical_accuracy: 0.9972
Epoch 124/150
17/17 [==============================] - 2s 119ms/step - loss: 8.4602e-04 - categorical_accuracy: 0.9972
Epoch 125/150
17/17 [==============================] - 2s 122ms/step - loss: 7.1313e-04 - categorical_accuracy: 0.9972
Epoch 126/150
17/17 [==============================] - 2s 121ms/step - loss: 7.1287e-04 - categorical_accuracy: 0.9972
Epoch 127/150
17/17 [==============================] - 2s 120ms/step - loss: 7.7493e-04 - categorical_accuracy: 0.9972
Epoch 128/150
17/17 [==============================] - 2s 120ms/step - loss: 7.5045e-04 - categorical_accuracy: 0.9972
Epoch 129/150
17/17 [==============================] - 2s 118ms/step - loss: 7.3766e-04 - categorical_accuracy: 0.9972
Epoch 130/150
17/17 [==============================] - 2s 124ms/step - loss: 7.8818e-04 - categorical_accuracy: 0.9972
Epoch 131/150
17/17 [==============================] - 2s 120ms/step - loss: 7.8422e-04 - categorical_accuracy: 0.9972
Epoch 132/150
17/17 [==============================] - 2s 121ms/step - loss: 7.2649e-04 - categorical_accuracy: 0.9972
Epoch 133/150
17/17 [==============================] - 2s 124ms/step - loss: 7.1504e-04 - categorical_accuracy: 0.9972
Epoch 134/150
17/17 [==============================] - 2s 124ms/step - loss: 7.6554e-04 - categorical_accuracy: 0.9972
Epoch 135/150
17/17 [==============================] - 2s 120ms/step - loss: 7.5087e-04 - categorical_accuracy: 0.9972
Epoch 136/150
17/17 [==============================] - 2s 119ms/step - loss: 7.4297e-04 - categorical_accuracy: 0.9972
Epoch 137/150
17/17 [==============================] - 2s 118ms/step - loss: 7.9407e-04 - categorical_accuracy: 0.9972
Epoch 138/150
17/17 [==============================] - 2s 120ms/step - loss: 7.5712e-04 - categorical_accuracy: 0.9972
Epoch 139/150
17/17 [==============================] - 2s 119ms/step - loss: 7.7948e-04 - categorical_accuracy: 0.9972
Epoch 140/150
17/17 [==============================] - 2s 124ms/step - loss: 7.4604e-04 - categorical_accuracy: 0.9972
Epoch 141/150
17/17 [==============================] - 2s 119ms/step - loss: 7.6377e-04 - categorical_accuracy: 0.9972
Epoch 142/150
17/17 [==============================] - 2s 124ms/step - loss: 8.5071e-04 - categorical_accuracy: 0.9972
Epoch 143/150
17/17 [==============================] - 2s 124ms/step - loss: 7.4467e-04 - categorical_accuracy: 0.9972
Epoch 144/150
17/17 [==============================] - 2s 119ms/step - loss: 7.4509e-04 - categorical_accuracy: 0.9972
Epoch 145/150
17/17 [==============================] - 2s 123ms/step - loss: 6.7510e-04 - categorical_accuracy: 0.9972
Epoch 146/150
17/17 [==============================] - 2s 125ms/step - loss: 8.3715e-04 - categorical_accuracy: 0.9972
Epoch 147/150
17/17 [==============================] - 2s 121ms/step - loss: 8.8432e-04 - categorical_accuracy: 0.9972
Epoch 148/150
17/17 [==============================] - 2s 129ms/step - loss: 7.9914e-04 - categorical_accuracy: 0.9972
Epoch 149/150
17/17 [==============================] - 2s 123ms/step - loss: 7.2990e-04 - categorical_accuracy: 0.9972
Epoch 150/150
17/17 [==============================] - 2s 125ms/step - loss: 7.6336e-04 - categorical_accuracy: 0.9972
4/4 [==============================] - 0s 8ms/step - loss: 0.0221 - categorical_accuracy: 0.9083
CPU execution time = 332.1746858 seconds
Test loss: 2.21%
Test accuracy: 90.8%

[Figure: training loss vs. epoch]
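
The history object also records the categorical_accuracy metric per epoch, so an accuracy curve can be plotted the same way if you want one (a small optional addition, not part of the run above):

plt.plot(np.squeeze(history.history['categorical_accuracy']))
plt.ylabel('categorical_accuracy')
plt.xlabel('epoch')
plt.title('Training accuracy')
plt.show()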

After training finishes, a newly created file, model.h5, appears in the working directory. Now let's use this model to predict on a picture of our own.

3. Prediction

First, take a photo of your own hand making a digit sign and crop it to a 64*64 image. You can do this with a paint tool or automatically with code (a resize sketch is shown after the image below); I just did it roughly by hand. Then import a couple of packages.

from PIL import Image
from tensorflow.keras.models import load_model

Load the image, then use numpy to add an extra (batch) dimension:

img = Image.open('datasets/3.jpg')
my_finger = np.array(img)                 # (64, 64, 3) array
data = my_finger.reshape(1, 64, 64, 3)    # add a batch dimension for model.predict

plt.imshow(my_finger)
plt.show()

[Figure: my own 64*64 photo of a hand sign]
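
If your photo is not already 64*64, PIL can do the resizing instead of a paint tool. A minimal sketch (the path 'datasets/3.jpg' stands in for whatever source image you use):

from PIL import Image
import numpy as np

img = Image.open('datasets/3.jpg').convert('RGB').resize((64, 64))  # force 3 channels and 64x64 pixels
my_finger = np.array(img)                                            # (64, 64, 3)
data = my_finger.reshape(1, 64, 64, 3)                               # add the batch dimension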

The image is ready; let's predict with the trained model.

model = load_model('model.h5')
predict = model.predict(data)
# Decode the softmax output back into a digit: take the index of the largest probability
# (the original .index(1) trick only works if one output is exactly 1.0; argmax is robust).
number = int(np.argmax(predict[0]))

print('The digit in the image is:', number)
The digit in the image is: 4
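
If you want to see how confident the model is, print the full probability vector; the predicted class is just its argmax (the exact numbers depend on your own photo):

print(np.round(predict[0], 3))   # per-class probabilities for the 6 classes
print(classes)                   # class labels returned by cnn_utils.load_dataset()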

The prediction is correct, and that wraps up the experiment.

4. Summary

Because my understanding of both the material and the framework was incomplete, I got stuck for quite a while along the way, but it came together in the end. A framework really does save a lot of work, but you cannot use it well without understanding the underlying principles. Finally, cnn_utils.py and the training dataset can be downloaded from the original article:

Link: 【中文】【吴恩达课后编程作业】Course 4 - 卷积神经网络 - 第一周作业 - 搭建卷积神经网络模型以及应用

Source: https://blog.csdn.net/weixin_40764047/article/details/110201419