Machine Learning Notes (21): TensorFlow 2 (Convolution and Pooling)
This blog is only for my own study, not for teaching anyone; it is mainly notes written so that I can understand them later.
Learning material: the first course of the Andrew Ng team's TensorFlow 2.0 in Practice series, an introduction to AI, machine learning and deep learning with TensorFlow 2.0 plus basic programming, on bilibili.
As expected, lectures from a real expert are better and more fun.
First, about convolution... actually there is not much for me to say about convolution; go read it yourself (卷积神经网络CNN完全指南终极版(一) - 知乎 (zhihu.com)). It is really easy to understand: it just runs a filter over the image.
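Not part of the course code, just a minimal numpy sketch I added (with made-up numbers) of what a single 3*3 filter does to a single-channel image. Strictly speaking CNN layers compute a cross-correlation, but the idea is the same:

import numpy as np

def conv2d_valid(image, kernel):
    # Slide the kernel over the image (no padding, stride 1) and
    # take the sum of element-wise products at each position.
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)   # a toy 5*5 "image"
kernel = np.array([[1., 0., -1.],                  # a 3*3 vertical-edge filter
                   [1., 0., -1.],
                   [1., 0., -1.]])
print(conv2d_valid(image, kernel))   # shape (3, 3), like Conv2D with a 3*3 window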
Pooling is also easy to understand: for example, merge a 2*2 block of pixels into a single pixel, whose value can be the maximum of the 4 pixels or their average. This condenses the information.
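Again just a sketch I added (not from the course), using tf.nn.max_pool2d to show 2*2 max pooling turning a 4*4 block into 2*2:

import tensorflow as tf

x = tf.constant([[1., 2., 5., 6.],
                 [3., 4., 7., 8.],
                 [9., 8., 3., 2.],
                 [7., 6., 1., 0.]])
x = tf.reshape(x, (1, 4, 4, 1))   # (batch, height, width, channels)
pooled = tf.nn.max_pool2d(x, ksize=2, strides=2, padding='VALID')
print(tf.reshape(pooled, (2, 2)))
# [[4. 8.]
#  [9. 3.]]  -- each output pixel is the max of one 2*2 block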
There are a few other operations, which I will point out in the code.
import tensorflow as tf
import matplotlib.pyplot as plt

class myCallback(tf.keras.callbacks.Callback):
    # The instructor's trick for stopping training early from inside a callback; I don't fully get it yet ∑( 口 ||
    def on_epoch_end(self, epoch, logs={}):
        if logs.get('accuracy') > 0.9:   # stop training once accuracy exceeds 0.9
            print('\nReached 90% accuracy so cancelling training!')
            self.model.stop_training = True

callback = myCallback()   # instantiate the callback class

mnist = tf.keras.datasets.fashion_mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
x_train = x_train.reshape(60000, 28, 28, 1)   # Conv2D expects (height, width, channels), so add a channel dimension
x_test = x_test.reshape(10000, 28, 28, 1)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    # convolution layer: 64 filters, 3*3 window, ReLU zeroes out the negatives; input must be (height, width, channels)
    tf.keras.layers.MaxPooling2D(2, 2),   # pooling layer, 2*2 pooling
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),   # another convolution layer
    tf.keras.layers.MaxPooling2D(2, 2),   # another pooling layer
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),   # Dropout layer against overfitting: randomly drops some units during training
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)

print(model.summary())   # print the model's basic information

his = model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test), batch_size=64, callbacks=[callback])
# validation_data evaluates the model on another data set (here the test data) after each epoch;
# batch_size is how many samples go into each training step;
# callbacks are the callback functions, here used to stop training early

plt.plot(his.epoch, his.history.get('loss'), label='loss')
plt.plot(his.epoch, his.history.get('val_loss'), label='val_loss')   # plot the training loss and the test-data loss
plt.legend()
plt.show()

plt.plot(his.epoch, his.history.get('accuracy'), label='accuracy')
plt.plot(his.epoch, his.history.get('val_accuracy'), label='val_accuracy')   # plot the training accuracy and the test-data accuracy
plt.legend()
plt.show()

model.evaluate(x_test, y_test)
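A side note I am adding myself (not from the course): if, like me, you find the hand-written callback confusing, Keras also ships a built-in EarlyStopping callback that stops training when a monitored metric stops improving:

# Built-in alternative to the custom callback above:
# stop when val_loss has not improved for 2 consecutive epochs.
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=2)
# model.fit(..., callbacks=[early_stop])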
The result:
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 26, 26, 64) 640
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 13, 13, 64) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 11, 11, 64) 36928
_________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 5, 5, 64) 0
_________________________________________________________________
flatten (Flatten) (None, 1600) 0
_________________________________________________________________
dense (Dense) (None, 128) 204928
_________________________________________________________________
dropout (Dropout) (None, 128) 0
_________________________________________________________________
dense_1 (Dense) (None, 10) 1290
=================================================================
Total params: 243,786
Trainable params: 243,786
Non-trainable params: 0
_________________________________________________________________
None
Epoch 1/10
2021-08-09 12:19:51.791600: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2021-08-09 12:19:51.988593: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2021-08-09 12:19:52.644098: W tensorflow/stream_executor/gpu/redzone_allocator.cc:314] Internal: Invoking GPU asm compilation is supported on Cuda non-Windows platforms only
Relying on driver to perform ptx compilation.
Modify $PATH to customize ptxas location.
This message will be only logged once.
938/938 [==============================] - 12s 13ms/step - loss: 0.5003 - accuracy: 0.8183 - val_loss: 0.3815 - val_accuracy: 0.8588
Epoch 2/10
938/938 [==============================] - 13s 14ms/step - loss: 0.3294 - accuracy: 0.8791 - val_loss: 0.3131 - val_accuracy: 0.8867
Epoch 3/10
938/938 [==============================] - 13s 14ms/step - loss: 0.2796 - accuracy: 0.8970 - val_loss: 0.2901 - val_accuracy: 0.8944
Epoch 4/10
935/938 [============================>.] - ETA: 0s - loss: 0.2506 - accuracy: 0.9069
Reached 90% accuracy so cancelling training!
938/938 [==============================] - 13s 14ms/step - loss: 0.2507 - accuracy: 0.9069 - val_loss: 0.2686 - val_accuracy: 0.8988
313/313 [==============================] - 1s 4ms/step - loss: 0.2686 - accuracy: 0.8988
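A note to myself about the 938: it is the number of batches per epoch, not the number of images. With batch_size=64 that is 60000 / 64 ≈ 938 steps, and the 313 from evaluate is 10000 / 32 ≈ 313 with the default evaluation batch size of 32; see the last reference below.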
Reference blogs:
卷积神经网络CNN完全指南终极版(一) - 知乎 (zhihu.com)
Keras.layers.Conv2D参数详解 搭建图片分类 CNN (卷积神经网络) - 汪洋大海 – 蜗居 (woj.app)
如果MNIST数据集中有60000个图像,为什么该模型仅对1875个训练集图像进行训练? - IT屋-程序员软件开发技术分享社区 (it1352.com)