
[Deep Learning 01] Notes on a Fashion MNIST Classification Model


0 Prerequisites

We start by building a simple image classifier using MLPs.


If TensorFlow has never been installed in your Anaconda environment, you will get the error:
No module named 'tensorflow'
Cause: the required package is not installed.
Fix: run the command below; the installation may take a few minutes.

pip install tensorflow

Import the commonly used modules:

# import Keras & TensorFlow
import tensorflow as tf
import matplotlib.pyplot as plt
import pandas as pd
from tensorflow import keras
# print the versions
print(tf.__version__)
print(keras.__version__)
2.4.1
2.4.0

1 Loading the Dataset

We use the Fashion MNIST dataset that ships with Keras.

Fashion MNIST consists of 70,000 grayscale images in 10 classes, each 28×28 pixels. We will use 60,000 of them for training and the remaining 10,000 for testing.

Each image can be treated as a 28×28 numpy array with pixel values between 0 and 255. The labels are integers from 0 to 9 denoting the type of clothing, as shown in the table below.

Label  Clothing type
0      T-shirt/top
1      Trouser
2      Pullover
3      Dress
4      Coat
5      Sandal
6      Shirt
7      Sneaker
8      Bag
9      Ankle boot

Load the dataset, splitting it into training and test sets:

# Load image data
fashion_mnist = keras.datasets.fashion_mnist
# load the training and test sets (default split)
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
# define a list of class names, so labels can be mapped to names:
class_names = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat", "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]
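As a quick sanity check on these labels (a small addition, not in the original post), we can count how many images fall into each class; Fashion MNIST is balanced, with 6,000 training images per class:

import numpy as np

# count how many training images carry each label
labels, counts = np.unique(y_train, return_counts=True)
for label, count in zip(labels, counts):
    print(label, class_names[label], count)  # each class appears 6000 times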

2 Data Preprocessing

Exploring the data

Dataset dimensions

As noted above, the training set consists of 60,000 images of 28×28 pixels, so the shape of the training features is (60000, 28, 28); likewise, the test set consists of 10,000 images of 28×28 pixels, so its shape is (10000, 28, 28).

# Show information: training/test size
# print the dataset dimensions
print('Training data size: ', x_train.shape)
print('Test data size: ', x_test.shape)
Training data size:  (60000, 28, 28)
Test data size:  (10000, 28, 28)

Looking up image labels in the training/test sets

# look up the label of the first training/test image
print(class_names[y_train[0]])
print(class_names[y_test[0]])
Ankle boot
Ankle boot

Print the maximum and minimum pixel values of the training set before normalization (numpy must be imported first):

import numpy as np
print(np.max(x_train), np.min(x_train))  # max/min of the training set before normalization
255 0

Display an image

# show the first training image in grayscale
plt.imshow(x_train[0], cmap = 'gray')
plt.show()

(figure: the first training image, in grayscale)
Display the same image with a colorbar

plt.figure()
plt.imshow(x_train[0])
plt.colorbar()
plt.grid(False)

(figure: the first training image with a colorbar)

# display the first 25 images with their class labels
plt.figure(figsize=(10,10))
for i in range(25):
    plt.subplot(5,5,i+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(x_train[i], cmap=plt.cm.binary)
    plt.xlabel(class_names[y_train[i]])

(figure: a 5×5 grid of the first 25 training images)

Normalization

# Data preparation:
# Map intensities from [0, 255] to [0.0, 1.0]
# normalization keeps the values small, avoids overflow, and makes training easier
x_train = x_train / 255.0
x_test = x_test / 255.0
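As a quick check (another small addition), print the extremes again after dividing by 255; they should now be 1.0 and 0.0:

print(np.max(x_train), np.min(x_train))  # after normalization: 1.0 0.0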

3 Training

# Build the MLP classifier, using Keras's Sequential model
# Step 1: create the Sequential object
model = keras.models.Sequential()

# Step 2: add the layers (fully connected)
# flatten the image: the 28*28 matrix becomes a 784-dimensional vector
model.add(keras.layers.Flatten(input_shape=[28, 28]))

# fully connected (Dense) layer: each unit is connected to every unit in the previous layer
# 300 is the number of neurons; activation is the activation function
model.add(keras.layers.Dense(300, activation="relu"))  
 
# add another hidden layer; relu: y=max(0,x)
model.add(keras.layers.Dense(100, activation="relu"))

# the output should be a probability distribution of length 10,
# so the output layer has 10 neurons
# for multi-class classification the output activation is usually softmax,
# which turns a vector into a probability distribution: for x=[x1,x2,x3],
# softmax(x)=[e^x1/sum, e^x2/sum, e^x3/sum], where sum=e^x1+e^x2+e^x3
model.add(keras.layers.Dense(10, activation="softmax"))
# this builds a four-layer network; we will train it with SGD

# inspect the model's parameters with model.summary()
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
flatten (Flatten)            (None, 784)               0         
_________________________________________________________________
dense (Dense)                (None, 300)               235500    
_________________________________________________________________
dense_1 (Dense)              (None, 100)               30100     
_________________________________________________________________
dense_2 (Dense)              (None, 10)                1010      
=================================================================
Total params: 266,610
Trainable params: 266,610
Non-trainable params: 0
_________________________________________________________________
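To make the softmax comment in the model definition concrete, here is a tiny numpy illustration (the input vector is made up for this example):

# softmax turns an arbitrary vector into a probability distribution
x = np.array([1.0, 2.0, 3.0])
exp_x = np.exp(x)
print(exp_x / exp_x.sum())  # ~[0.090 0.245 0.665], sums to 1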

With the layers and activation functions in place, the next step is to define the objective (loss) function.

We do this by calling compile:

If y is a one-hot vector, use loss = "categorical_crossentropy".

If y is an integer label, use loss = "sparse_categorical_crossentropy".

optimizer: the method used to adjust the model's parameters.

metrics: other quantities to track besides the loss, such as accuracy.
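A minimal sketch of the difference between the two losses (not in the original post; keras.utils.to_categorical is the standard Keras helper for one-hot encoding):

# y_train holds one integer label per sample -> sparse_categorical_crossentropy
print(y_train[0])  # 9 (Ankle boot)

# one-hot encoding turns each label into a length-10 vector -> categorical_crossentropy
y_onehot = keras.utils.to_categorical(y_train[:1], num_classes=10)
print(y_onehot)  # [[0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]]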

# Compile the model
# add the loss function and the optimization method to the graph
# y is a vector of integer labels (one value per sample), so we use sparse_categorical_crossentropy
model.compile(loss = "sparse_categorical_crossentropy", optimizer="sgd", metrics = ["accuracy"])
# arguments: (loss function, optimization method, other metrics of interest)

The optimizer here is plain SGD (it determines how we search for the best parameters, i.e. which flavor of gradient descent is used).

The loss function is sparse_categorical_crossentropy (it defines what "best" means: the optimum minimizes the loss).

The metric is accuracy, i.e. the fraction of correctly classified images.

Start training:

# Model training
# epochs: number of passes over the training set
# start training by calling model.fit, here for 30 epochs:
history = model.fit(x_train, y_train, epochs = 30, validation_split=0.1)
Epoch 1/30
1688/1688 [==============================] - 3s 1ms/step - loss: 875864866629.4004 - accuracy: 0.0998 - val_loss: 2.3027 - val_accuracy: 0.0985
Epoch 2/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3026 - accuracy: 0.0974 - val_loss: 2.3028 - val_accuracy: 0.0925
Epoch 3/30
1688/1688 [==============================] - 3s 2ms/step - loss: 2.3025 - accuracy: 0.1002 - val_loss: 2.3028 - val_accuracy: 0.0925
Epoch 4/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3026 - accuracy: 0.0995 - val_loss: 2.3029 - val_accuracy: 0.0925
Epoch 5/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3026 - accuracy: 0.0984 - val_loss: 2.3028 - val_accuracy: 0.0925
Epoch 6/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3026 - accuracy: 0.1010 - val_loss: 2.3028 - val_accuracy: 0.1008
Epoch 7/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3026 - accuracy: 0.1007 - val_loss: 2.3027 - val_accuracy: 0.0925
Epoch 8/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3026 - accuracy: 0.1019 - val_loss: 2.3029 - val_accuracy: 0.0925
Epoch 9/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3026 - accuracy: 0.1003 - val_loss: 2.3028 - val_accuracy: 0.0925
Epoch 10/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3026 - accuracy: 0.0993 - val_loss: 2.3026 - val_accuracy: 0.0925
Epoch 11/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3027 - accuracy: 0.0983 - val_loss: 2.3028 - val_accuracy: 0.0925
Epoch 12/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3026 - accuracy: 0.1004 - val_loss: 2.3029 - val_accuracy: 0.0942
Epoch 13/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3026 - accuracy: 0.1012 - val_loss: 2.3028 - val_accuracy: 0.0942
Epoch 14/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3027 - accuracy: 0.0991 - val_loss: 2.3027 - val_accuracy: 0.0942
Epoch 15/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3026 - accuracy: 0.1000 - val_loss: 2.3029 - val_accuracy: 0.0925
Epoch 16/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3026 - accuracy: 0.0979 - val_loss: 2.3028 - val_accuracy: 0.0925
Epoch 17/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3026 - accuracy: 0.0997 - val_loss: 2.3028 - val_accuracy: 0.0925
Epoch 18/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3026 - accuracy: 0.0997 - val_loss: 2.3029 - val_accuracy: 0.0973
Epoch 19/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3027 - accuracy: 0.1013 - val_loss: 2.3028 - val_accuracy: 0.0973
Epoch 20/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3027 - accuracy: 0.1017 - val_loss: 2.3027 - val_accuracy: 0.0925
Epoch 21/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3026 - accuracy: 0.0991 - val_loss: 2.3028 - val_accuracy: 0.0925
Epoch 22/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3026 - accuracy: 0.0986 - val_loss: 2.3027 - val_accuracy: 0.0925
Epoch 23/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3026 - accuracy: 0.0997 - val_loss: 2.3028 - val_accuracy: 0.0925
Epoch 24/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3026 - accuracy: 0.0996 - val_loss: 2.3027 - val_accuracy: 0.0925
Epoch 25/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3026 - accuracy: 0.1002 - val_loss: 2.3029 - val_accuracy: 0.0925
Epoch 26/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3026 - accuracy: 0.0998 - val_loss: 2.3028 - val_accuracy: 0.0985
Epoch 27/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3026 - accuracy: 0.1005 - val_loss: 2.3027 - val_accuracy: 0.0925
Epoch 28/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3026 - accuracy: 0.1002 - val_loss: 2.3029 - val_accuracy: 0.0925
Epoch 29/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3026 - accuracy: 0.1014 - val_loss: 2.3029 - val_accuracy: 0.0925
Epoch 30/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3025 - accuracy: 0.1028 - val_loss: 2.3028 - val_accuracy: 0.0925

Training finished. Note that in this run the loss exploded in the first epoch and the accuracy stayed around 10%, i.e. chance level for 10 classes; the experiments below revisit this.

history is the History callback object returned by Keras.

history.history is a dictionary that stores values recorded during training, such as the accuracy and the loss.

Let's look at the training accuracy, plotting the data saved in history as curves to make it more intuitive.
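For instance, the dictionary's keys match the loss and the metrics we asked for (a quick inspection, not in the original post):

print(history.history.keys())
# dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])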

# Visualise loss/accuracy during training
# Plot training & validation accuracy values
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='upper left')
plt.show()

# Plot training & validation loss values
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='upper left')
plt.show()

(figures: training/validation accuracy and loss curves)
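Since pandas was imported at the start but never used, the same four curves can also be drawn in a single call (an equivalent alternative, not in the original post):

# plot loss/accuracy for training and validation from one DataFrame
pd.DataFrame(history.history).plot(figsize=(8, 5))
plt.grid(True)
plt.show()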

Evaluating accuracy

Evaluate the accuracy on the test set by calling model.evaluate():

# Evaluate the model on the test set
model.evaluate(x_test, y_test)
313/313 [==============================] - 0s 895us/step - loss: 5.8619 - accuracy: 0.1000
[5.861933708190918, 0.10000000149011612]

test_loss, test_acc = model.evaluate(x_test, y_test)

print('Test accuracy:', test_acc)
313/313 [==============================] - 0s 946us/step - loss: 5.8619 - accuracy: 0.1000
Test accuracy: 0.10000000149011612
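To see what the network outputs for individual images (a hypothetical check, not part of the original run), call model.predict and take the argmax of each row; because this particular run diverged, the predicted distributions will be close to uniform:

# each row of probs is a length-10 probability distribution over the classes
probs = model.predict(x_test[:3])
print(probs.shape)  # (3, 10)
for p in probs:
    print(class_names[np.argmax(p)], float(p.max()))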
The runs below explore how training choices affect the result. First, training with a deliberately large learning rate (0.9): the loss explodes in the first epoch and the accuracy never rises above chance.

# training with a large learning rate
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu"))   
model.add(keras.layers.Dense(100, activation="relu"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss = "sparse_categorical_crossentropy", optimizer=keras.optimizers.SGD(lr=0.9), metrics = ["accuracy"])
history = model.fit(x_train, y_train, epochs = 30, validation_split=0.1)
Epoch 1/30
1688/1688 [==============================] - 3s 2ms/step - loss: 2107530128999905906036991590400.0000 - accuracy: 0.0995 - val_loss: 2.3187 - val_accuracy: 0.1048
Epoch 2/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3128 - accuracy: 0.1003 - val_loss: 2.3104 - val_accuracy: 0.1008
Epoch 3/30
1688/1688 [==============================] - 3s 2ms/step - loss: 2.3094 - accuracy: 0.0989 - val_loss: 2.3160 - val_accuracy: 0.0942
Epoch 4/30
1688/1688 [==============================] - 3s 2ms/step - loss: 2.3095 - accuracy: 0.0975 - val_loss: 2.3088 - val_accuracy: 0.1008
Epoch 5/30
1688/1688 [==============================] - 3s 2ms/step - loss: 2.3089 - accuracy: 0.1026 - val_loss: 2.3135 - val_accuracy: 0.1032
Epoch 6/30
1688/1688 [==============================] - 3s 2ms/step - loss: 2.3100 - accuracy: 0.0975 - val_loss: 2.3111 - val_accuracy: 0.0973
Epoch 7/30
1688/1688 [==============================] - 3s 2ms/step - loss: 2.3090 - accuracy: 0.0992 - val_loss: 2.3149 - val_accuracy: 0.1008
Epoch 8/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3087 - accuracy: 0.1003 - val_loss: 2.3084 - val_accuracy: 0.0973
Epoch 9/30
1688/1688 [==============================] - 3s 2ms/step - loss: 2.3100 - accuracy: 0.0981 - val_loss: 2.3090 - val_accuracy: 0.1050
Epoch 10/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3090 - accuracy: 0.1016 - val_loss: 2.3125 - val_accuracy: 0.1003
Epoch 11/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3102 - accuracy: 0.0971 - val_loss: 2.3111 - val_accuracy: 0.0942
Epoch 12/30
1688/1688 [==============================] - 3s 1ms/step - loss: 2.3086 - accuracy: 0.1013 - val_loss: 2.3139 - val_accuracy: 0.0973
Epoch 13/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3084 - accuracy: 0.0998 - val_loss: 2.3100 - val_accuracy: 0.1055
Epoch 14/30
1688/1688 [==============================] - 3s 2ms/step - loss: 2.3086 - accuracy: 0.1035 - val_loss: 2.3108 - val_accuracy: 0.0925
Epoch 15/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3094 - accuracy: 0.0976 - val_loss: 2.3151 - val_accuracy: 0.1055
Epoch 16/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3102 - accuracy: 0.0974 - val_loss: 2.3076 - val_accuracy: 0.1003
Epoch 17/30
1688/1688 [==============================] - 3s 2ms/step - loss: 2.3093 - accuracy: 0.0999 - val_loss: 2.3085 - val_accuracy: 0.1032
Epoch 18/30
1688/1688 [==============================] - 3s 2ms/step - loss: 2.3100 - accuracy: 0.0988 - val_loss: 2.3073 - val_accuracy: 0.1032
Epoch 19/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3090 - accuracy: 0.0984 - val_loss: 2.3085 - val_accuracy: 0.1055
Epoch 20/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3091 - accuracy: 0.0989 - val_loss: 2.3084 - val_accuracy: 0.1027
Epoch 21/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3093 - accuracy: 0.0992 - val_loss: 2.3118 - val_accuracy: 0.0942
Epoch 22/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3099 - accuracy: 0.0985 - val_loss: 2.3123 - val_accuracy: 0.1050
Epoch 23/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3086 - accuracy: 0.1015 - val_loss: 2.3109 - val_accuracy: 0.1027
Epoch 24/30
1688/1688 [==============================] - 3s 2ms/step - loss: 2.3094 - accuracy: 0.0980 - val_loss: 2.3067 - val_accuracy: 0.1050
Epoch 25/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3094 - accuracy: 0.0999 - val_loss: 2.3083 - val_accuracy: 0.0973
Epoch 26/30
1688/1688 [==============================] - 3s 2ms/step - loss: 2.3095 - accuracy: 0.0973 - val_loss: 2.3094 - val_accuracy: 0.1003
Epoch 27/30
1688/1688 [==============================] - 3s 2ms/step - loss: 2.3095 - accuracy: 0.0992 - val_loss: 2.3086 - val_accuracy: 0.1027
Epoch 28/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3095 - accuracy: 0.1008 - val_loss: 2.3115 - val_accuracy: 0.0985
Epoch 29/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3095 - accuracy: 0.0997 - val_loss: 2.3067 - val_accuracy: 0.1050
Epoch 30/30
1688/1688 [==============================] - 2s 1ms/step - loss: 2.3093 - accuracy: 0.1012 - val_loss: 2.3142 - val_accuracy: 0.0942
With a very small learning rate (1e-5), training is stable but very slow: after 30 epochs the validation accuracy only reaches about 62%.

# training with a small learning rate
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu"))   
model.add(keras.layers.Dense(100, activation="relu"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss = "sparse_categorical_crossentropy", optimizer=keras.optimizers.SGD(lr=1e-5), metrics = ["accuracy"])
history = model.fit(x_train, y_train, epochs = 30, validation_split=0.1)
Epoch 1/30
1688/1688 [==============================] - 5s 3ms/step - loss: 2.2788 - accuracy: 0.1436 - val_loss: 2.2498 - val_accuracy: 0.1610
Epoch 2/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.2346 - accuracy: 0.1763 - val_loss: 2.2063 - val_accuracy: 0.1922
Epoch 3/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.1929 - accuracy: 0.2103 - val_loss: 2.1662 - val_accuracy: 0.2222
Epoch 4/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.1513 - accuracy: 0.2436 - val_loss: 2.1288 - val_accuracy: 0.2557
Epoch 5/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.1171 - accuracy: 0.2743 - val_loss: 2.0937 - val_accuracy: 0.2962
Epoch 6/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.0839 - accuracy: 0.3114 - val_loss: 2.0604 - val_accuracy: 0.3317
Epoch 7/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.0496 - accuracy: 0.3521 - val_loss: 2.0287 - val_accuracy: 0.3610
Epoch 8/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.0193 - accuracy: 0.3780 - val_loss: 1.9982 - val_accuracy: 0.3857
Epoch 9/30
1688/1688 [==============================] - 4s 3ms/step - loss: 1.9875 - accuracy: 0.3997 - val_loss: 1.9689 - val_accuracy: 0.4063
Epoch 10/30
1688/1688 [==============================] - 4s 3ms/step - loss: 1.9626 - accuracy: 0.4178 - val_loss: 1.9405 - val_accuracy: 0.4237
Epoch 11/30
1688/1688 [==============================] - 4s 3ms/step - loss: 1.9353 - accuracy: 0.4351 - val_loss: 1.9131 - val_accuracy: 0.4388
Epoch 12/30
1688/1688 [==============================] - 4s 3ms/step - loss: 1.9075 - accuracy: 0.4483 - val_loss: 1.8865 - val_accuracy: 0.4540
Epoch 13/30
1688/1688 [==============================] - 4s 3ms/step - loss: 1.8815 - accuracy: 0.4637 - val_loss: 1.8609 - val_accuracy: 0.4707
Epoch 14/30
1688/1688 [==============================] - 4s 3ms/step - loss: 1.8567 - accuracy: 0.4775 - val_loss: 1.8360 - val_accuracy: 0.4810
Epoch 15/30
1688/1688 [==============================] - 4s 3ms/step - loss: 1.8314 - accuracy: 0.4852 - val_loss: 1.8119 - val_accuracy: 0.4972
Epoch 16/30
1688/1688 [==============================] - 4s 3ms/step - loss: 1.8103 - accuracy: 0.4974 - val_loss: 1.7885 - val_accuracy: 0.5112
Epoch 17/30
1688/1688 [==============================] - 4s 3ms/step - loss: 1.7848 - accuracy: 0.5103 - val_loss: 1.7658 - val_accuracy: 0.5237
Epoch 18/30
1688/1688 [==============================] - 4s 3ms/step - loss: 1.7628 - accuracy: 0.5235 - val_loss: 1.7438 - val_accuracy: 0.5313
Epoch 19/30
1688/1688 [==============================] - 4s 3ms/step - loss: 1.7412 - accuracy: 0.5328 - val_loss: 1.7224 - val_accuracy: 0.5425
Epoch 20/30
1688/1688 [==============================] - 4s 3ms/step - loss: 1.7243 - accuracy: 0.5390 - val_loss: 1.7016 - val_accuracy: 0.5532
Epoch 21/30
1688/1688 [==============================] - 4s 3ms/step - loss: 1.6995 - accuracy: 0.5555 - val_loss: 1.6814 - val_accuracy: 0.5602
Epoch 22/30
1688/1688 [==============================] - 4s 3ms/step - loss: 1.6794 - accuracy: 0.5624 - val_loss: 1.6617 - val_accuracy: 0.5668
Epoch 23/30
1688/1688 [==============================] - 4s 3ms/step - loss: 1.6591 - accuracy: 0.5714 - val_loss: 1.6425 - val_accuracy: 0.5763
Epoch 24/30
1688/1688 [==============================] - 4s 3ms/step - loss: 1.6422 - accuracy: 0.5781 - val_loss: 1.6238 - val_accuracy: 0.5868
Epoch 25/30
1688/1688 [==============================] - 4s 3ms/step - loss: 1.6219 - accuracy: 0.5858 - val_loss: 1.6056 - val_accuracy: 0.5955
Epoch 26/30
1688/1688 [==============================] - 4s 3ms/step - loss: 1.6063 - accuracy: 0.5930 - val_loss: 1.5879 - val_accuracy: 0.6022
Epoch 27/30
1688/1688 [==============================] - 4s 3ms/step - loss: 1.5869 - accuracy: 0.5972 - val_loss: 1.5705 - val_accuracy: 0.6065
Epoch 28/30
1688/1688 [==============================] - 4s 3ms/step - loss: 1.5696 - accuracy: 0.6062 - val_loss: 1.5536 - val_accuracy: 0.6100
Epoch 29/30
1688/1688 [==============================] - 4s 3ms/step - loss: 1.5524 - accuracy: 0.6072 - val_loss: 1.5371 - val_accuracy: 0.6142
Epoch 30/30
1688/1688 [==============================] - 4s 3ms/step - loss: 1.5391 - accuracy: 0.6125 - val_loss: 1.5210 - val_accuracy: 0.6195
With all weights initialized to zero, every neuron starts out identical and receives identical gradients, so the symmetry is never broken and the network cannot learn.

# training with a zero kernel initializer => very poor performance, must not use!
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="zeros"))   
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="zeros"))
model.add(keras.layers.Dense(10, activation="softmax", kernel_initializer="zeros"))
model.compile(loss = "sparse_categorical_crossentropy", optimizer=keras.optimizers.SGD(), metrics = ["accuracy"])
history = model.fit(x_train, y_train, epochs = 30, validation_split=0.1)
Epoch 1/30
1688/1688 [==============================] - 5s 3ms/step - loss: 2.3026 - accuracy: 0.1000 - val_loss: 2.3028 - val_accuracy: 0.0925
Epoch 2/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.3026 - accuracy: 0.0992 - val_loss: 2.3029 - val_accuracy: 0.0942
Epoch 3/30
1688/1688 [==============================] - 4s 2ms/step - loss: 2.3026 - accuracy: 0.1021 - val_loss: 2.3027 - val_accuracy: 0.1003
Epoch 4/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.3027 - accuracy: 0.0986 - val_loss: 2.3029 - val_accuracy: 0.0925
Epoch 5/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.3026 - accuracy: 0.1007 - val_loss: 2.3028 - val_accuracy: 0.0925
Epoch 6/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.3026 - accuracy: 0.1009 - val_loss: 2.3028 - val_accuracy: 0.0973
Epoch 7/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.3027 - accuracy: 0.1006 - val_loss: 2.3028 - val_accuracy: 0.0925
Epoch 8/30
1688/1688 [==============================] - 4s 2ms/step - loss: 2.3027 - accuracy: 0.0977 - val_loss: 2.3029 - val_accuracy: 0.0942
Epoch 9/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.3027 - accuracy: 0.0984 - val_loss: 2.3029 - val_accuracy: 0.0925
Epoch 10/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.3027 - accuracy: 0.0988 - val_loss: 2.3029 - val_accuracy: 0.0925
Epoch 11/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.3027 - accuracy: 0.0985 - val_loss: 2.3028 - val_accuracy: 0.0925
Epoch 12/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.3027 - accuracy: 0.1013 - val_loss: 2.3027 - val_accuracy: 0.0925
Epoch 13/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.3027 - accuracy: 0.0977 - val_loss: 2.3028 - val_accuracy: 0.0925
Epoch 14/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.3026 - accuracy: 0.1036 - val_loss: 2.3028 - val_accuracy: 0.0942
Epoch 15/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.3026 - accuracy: 0.1009 - val_loss: 2.3027 - val_accuracy: 0.0985
Epoch 16/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.3027 - accuracy: 0.1022 - val_loss: 2.3027 - val_accuracy: 0.0925
Epoch 17/30
1688/1688 [==============================] - 4s 2ms/step - loss: 2.3026 - accuracy: 0.1035 - val_loss: 2.3028 - val_accuracy: 0.0925
Epoch 18/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.3026 - accuracy: 0.1022 - val_loss: 2.3027 - val_accuracy: 0.0925
Epoch 19/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.3026 - accuracy: 0.0999 - val_loss: 2.3029 - val_accuracy: 0.0925
Epoch 20/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.3027 - accuracy: 0.1007 - val_loss: 2.3028 - val_accuracy: 0.0973
Epoch 21/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.3027 - accuracy: 0.0990 - val_loss: 2.3028 - val_accuracy: 0.1008
Epoch 22/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.3027 - accuracy: 0.0993 - val_loss: 2.3028 - val_accuracy: 0.0925
Epoch 23/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.3025 - accuracy: 0.1030 - val_loss: 2.3027 - val_accuracy: 0.0985
Epoch 24/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.3027 - accuracy: 0.0987 - val_loss: 2.3027 - val_accuracy: 0.0925
Epoch 25/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.3027 - accuracy: 0.0964 - val_loss: 2.3029 - val_accuracy: 0.0925
Epoch 26/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.3027 - accuracy: 0.1023 - val_loss: 2.3028 - val_accuracy: 0.0942
Epoch 27/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.3026 - accuracy: 0.1021 - val_loss: 2.3027 - val_accuracy: 0.1050
Epoch 28/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.3027 - accuracy: 0.0992 - val_loss: 2.3029 - val_accuracy: 0.0925
Epoch 29/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.3026 - accuracy: 0.1038 - val_loss: 2.3027 - val_accuracy: 0.0973
Epoch 30/30
1688/1688 [==============================] - 4s 3ms/step - loss: 2.3027 - accuracy: 0.0993 - val_loss: 2.3028 - val_accuracy: 0.0973
Adding Batch Normalization after the Flatten layer and after each hidden layer makes plain SGD train well: validation accuracy reaches about 89%.

# Batch normalization
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Dense(300, activation="relu"))   
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Dense(100, activation="relu"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss = "sparse_categorical_crossentropy", optimizer="sgd", metrics = ["accuracy"])
history = model.fit(x_train, y_train, epochs = 30, validation_split=0.1)
Epoch 1/30
1688/1688 [==============================] - 8s 4ms/step - loss: 0.6864 - accuracy: 0.7674 - val_loss: 0.3905 - val_accuracy: 0.8575
Epoch 2/30
1688/1688 [==============================] - 7s 4ms/step - loss: 0.4030 - accuracy: 0.8586 - val_loss: 0.3723 - val_accuracy: 0.8603
Epoch 3/30
1688/1688 [==============================] - 7s 4ms/step - loss: 0.3585 - accuracy: 0.8710 - val_loss: 0.3426 - val_accuracy: 0.8737
Epoch 4/30
1688/1688 [==============================] - 7s 4ms/step - loss: 0.3196 - accuracy: 0.8850 - val_loss: 0.3340 - val_accuracy: 0.8795
Epoch 5/30
1688/1688 [==============================] - 7s 4ms/step - loss: 0.3095 - accuracy: 0.8886 - val_loss: 0.3250 - val_accuracy: 0.8803
Epoch 6/30
1688/1688 [==============================] - 7s 4ms/step - loss: 0.2886 - accuracy: 0.8954 - val_loss: 0.3167 - val_accuracy: 0.8845
Epoch 7/30
1688/1688 [==============================] - 7s 4ms/step - loss: 0.2776 - accuracy: 0.8982 - val_loss: 0.3175 - val_accuracy: 0.8843
Epoch 8/30
1688/1688 [==============================] - 7s 4ms/step - loss: 0.2566 - accuracy: 0.9060 - val_loss: 0.3145 - val_accuracy: 0.8873
Epoch 9/30
1688/1688 [==============================] - 7s 4ms/step - loss: 0.2469 - accuracy: 0.9113 - val_loss: 0.3148 - val_accuracy: 0.8852
Epoch 10/30
1688/1688 [==============================] - 7s 4ms/step - loss: 0.2360 - accuracy: 0.9152 - val_loss: 0.3194 - val_accuracy: 0.8853
Epoch 11/30
1688/1688 [==============================] - 7s 4ms/step - loss: 0.2313 - accuracy: 0.9159 - val_loss: 0.3095 - val_accuracy: 0.8887
Epoch 12/30
1688/1688 [==============================] - 7s 4ms/step - loss: 0.2214 - accuracy: 0.9205 - val_loss: 0.3126 - val_accuracy: 0.8892
Epoch 13/30
1688/1688 [==============================] - 7s 4ms/step - loss: 0.2109 - accuracy: 0.9216 - val_loss: 0.3098 - val_accuracy: 0.8918
Epoch 14/30
1688/1688 [==============================] - 7s 4ms/step - loss: 0.2016 - accuracy: 0.9263 - val_loss: 0.3147 - val_accuracy: 0.8883
Epoch 15/30
1688/1688 [==============================] - 7s 4ms/step - loss: 0.1964 - accuracy: 0.9282 - val_loss: 0.3251 - val_accuracy: 0.8878
Epoch 16/30
1688/1688 [==============================] - 7s 4ms/step - loss: 0.1892 - accuracy: 0.9300 - val_loss: 0.3148 - val_accuracy: 0.8920
Epoch 17/30
1688/1688 [==============================] - 7s 4ms/step - loss: 0.1790 - accuracy: 0.9329 - val_loss: 0.3118 - val_accuracy: 0.8932
Epoch 18/30
1688/1688 [==============================] - 7s 4ms/step - loss: 0.1745 - accuracy: 0.9367 - val_loss: 0.3194 - val_accuracy: 0.8867
Epoch 19/30
1688/1688 [==============================] - 7s 4ms/step - loss: 0.1664 - accuracy: 0.9387 - val_loss: 0.3190 - val_accuracy: 0.8890
Epoch 20/30
1688/1688 [==============================] - 7s 4ms/step - loss: 0.1598 - accuracy: 0.9426 - val_loss: 0.3172 - val_accuracy: 0.8915
Epoch 21/30
1688/1688 [==============================] - 7s 4ms/step - loss: 0.1546 - accuracy: 0.9454 - val_loss: 0.3313 - val_accuracy: 0.8872
Epoch 22/30
1688/1688 [==============================] - 7s 4ms/step - loss: 0.1466 - accuracy: 0.9472 - val_loss: 0.3247 - val_accuracy: 0.8933
Epoch 23/30
1688/1688 [==============================] - 7s 4ms/step - loss: 0.1469 - accuracy: 0.9468 - val_loss: 0.3320 - val_accuracy: 0.8883
Epoch 24/30
1688/1688 [==============================] - 7s 4ms/step - loss: 0.1452 - accuracy: 0.9473 - val_loss: 0.3305 - val_accuracy: 0.8900
Epoch 25/30
1688/1688 [==============================] - 7s 4ms/step - loss: 0.1398 - accuracy: 0.9495 - val_loss: 0.3294 - val_accuracy: 0.8932
Epoch 26/30
1688/1688 [==============================] - 7s 4ms/step - loss: 0.1346 - accuracy: 0.9503 - val_loss: 0.3433 - val_accuracy: 0.8912
Epoch 27/30
1688/1688 [==============================] - 7s 4ms/step - loss: 0.1329 - accuracy: 0.9518 - val_loss: 0.3366 - val_accuracy: 0.8908
Epoch 28/30
1688/1688 [==============================] - 7s 4ms/step - loss: 0.1269 - accuracy: 0.9551 - val_loss: 0.3397 - val_accuracy: 0.8918
Epoch 29/30
1688/1688 [==============================] - 7s 4ms/step - loss: 0.1253 - accuracy: 0.9550 - val_loss: 0.3293 - val_accuracy: 0.8940
Epoch 30/30
1688/1688 [==============================] - 7s 4ms/step - loss: 0.1245 - accuracy: 0.9534 - val_loss: 0.3391 - val_accuracy: 0.8940
Finally, training with the Adam optimizer also converges quickly, ending around 89% validation accuracy (with some overfitting visible in the rising val_loss).

# Training with ADAM optimizer
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu"))   
model.add(keras.layers.Dense(100, activation="relu"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss = "sparse_categorical_crossentropy", optimizer=keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999), metrics = ["accuracy"])
history = model.fit(x_train, y_train, epochs = 30, validation_split=0.1)

Epoch 1/30
1688/1688 [==============================] - 5s 3ms/step - loss: 0.6124 - accuracy: 0.7807 - val_loss: 0.3792 - val_accuracy: 0.8615
Epoch 2/30
1688/1688 [==============================] - 5s 3ms/step - loss: 0.3642 - accuracy: 0.8659 - val_loss: 0.3543 - val_accuracy: 0.8628
Epoch 3/30
1688/1688 [==============================] - 5s 3ms/step - loss: 0.3327 - accuracy: 0.8791 - val_loss: 0.3276 - val_accuracy: 0.8792
Epoch 4/30
1688/1688 [==============================] - 5s 3ms/step - loss: 0.3089 - accuracy: 0.8844 - val_loss: 0.3564 - val_accuracy: 0.8670
Epoch 5/30
1688/1688 [==============================] - 5s 3ms/step - loss: 0.2884 - accuracy: 0.8925 - val_loss: 0.3261 - val_accuracy: 0.8803
Epoch 6/30
1688/1688 [==============================] - 5s 3ms/step - loss: 0.2712 - accuracy: 0.8971 - val_loss: 0.3142 - val_accuracy: 0.8820
Epoch 7/30
1688/1688 [==============================] - 5s 3ms/step - loss: 0.2545 - accuracy: 0.9051 - val_loss: 0.3261 - val_accuracy: 0.8835
Epoch 8/30
1688/1688 [==============================] - 5s 3ms/step - loss: 0.2401 - accuracy: 0.9105 - val_loss: 0.3346 - val_accuracy: 0.8795
Epoch 9/30
1688/1688 [==============================] - 5s 3ms/step - loss: 0.2257 - accuracy: 0.9143 - val_loss: 0.3504 - val_accuracy: 0.8812
Epoch 10/30
1688/1688 [==============================] - 5s 3ms/step - loss: 0.2220 - accuracy: 0.9165 - val_loss: 0.3323 - val_accuracy: 0.8828
Epoch 11/30
1688/1688 [==============================] - 5s 3ms/step - loss: 0.2094 - accuracy: 0.9190 - val_loss: 0.3486 - val_accuracy: 0.8843
Epoch 12/30
1688/1688 [==============================] - 5s 3ms/step - loss: 0.2081 - accuracy: 0.9207 - val_loss: 0.3349 - val_accuracy: 0.8837
Epoch 13/30
1688/1688 [==============================] - 5s 3ms/step - loss: 0.1979 - accuracy: 0.9252 - val_loss: 0.3408 - val_accuracy: 0.8870
Epoch 14/30
1688/1688 [==============================] - 5s 3ms/step - loss: 0.1917 - accuracy: 0.9271 - val_loss: 0.3516 - val_accuracy: 0.8937
Epoch 15/30
1688/1688 [==============================] - 5s 3ms/step - loss: 0.1872 - accuracy: 0.9280 - val_loss: 0.3653 - val_accuracy: 0.8885
Epoch 16/30
1688/1688 [==============================] - 5s 3ms/step - loss: 0.1761 - accuracy: 0.9339 - val_loss: 0.3623 - val_accuracy: 0.8857
Epoch 17/30
1688/1688 [==============================] - 5s 3ms/step - loss: 0.1728 - accuracy: 0.9336 - val_loss: 0.3698 - val_accuracy: 0.8837
Epoch 18/30
1688/1688 [==============================] - 5s 3ms/step - loss: 0.1686 - accuracy: 0.9352 - val_loss: 0.3885 - val_accuracy: 0.8905
Epoch 19/30
1688/1688 [==============================] - 5s 3ms/step - loss: 0.1624 - accuracy: 0.9389 - val_loss: 0.3503 - val_accuracy: 0.8872
Epoch 20/30
1688/1688 [==============================] - 5s 3ms/step - loss: 0.1553 - accuracy: 0.9412 - val_loss: 0.3751 - val_accuracy: 0.8930
Epoch 21/30
1688/1688 [==============================] - 5s 3ms/step - loss: 0.1593 - accuracy: 0.9381 - val_loss: 0.4017 - val_accuracy: 0.8892
Epoch 22/30
1688/1688 [==============================] - 5s 3ms/step - loss: 0.1477 - accuracy: 0.9428 - val_loss: 0.3928 - val_accuracy: 0.8893
Epoch 23/30
1688/1688 [==============================] - 5s 3ms/step - loss: 0.1500 - accuracy: 0.9426 - val_loss: 0.4093 - val_accuracy: 0.8937
Epoch 24/30
1688/1688 [==============================] - 5s 3ms/step - loss: 0.1399 - accuracy: 0.9468 - val_loss: 0.4752 - val_accuracy: 0.8863
Epoch 25/30
1688/1688 [==============================] - 5s 3ms/step - loss: 0.1380 - accuracy: 0.9478 - val_loss: 0.4422 - val_accuracy: 0.8903
Epoch 26/30
1688/1688 [==============================] - 5s 3ms/step - loss: 0.1376 - accuracy: 0.9477 - val_loss: 0.4426 - val_accuracy: 0.8930
Epoch 27/30
1688/1688 [==============================] - 5s 3ms/step - loss: 0.1283 - accuracy: 0.9500 - val_loss: 0.4735 - val_accuracy: 0.8855
Epoch 28/30
1688/1688 [==============================] - 5s 3ms/step - loss: 0.1277 - accuracy: 0.9516 - val_loss: 0.4315 - val_accuracy: 0.8887
Epoch 29/30
1688/1688 [==============================] - 5s 3ms/step - loss: 0.1239 - accuracy: 0.9532 - val_loss: 0.4552 - val_accuracy: 0.8905
Epoch 30/30
1688/1688 [==============================] - 5s 3ms/step - loss: 0.1218 - accuracy: 0.9550 - val_loss: 0.4607 - val_accuracy: 0.8863
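Collecting the final-epoch validation accuracies reported in the logs above gives a compact comparison (values transcribed from the runs shown in this post; pandas was imported at the start):

summary = pd.DataFrame({
    "run": ["SGD (default)", "SGD lr=0.9", "SGD lr=1e-5",
            "zero kernel init", "BatchNorm + SGD", "Adam"],
    "final val_accuracy": [0.0925, 0.0942, 0.6195, 0.0973, 0.8940, 0.8863],
})
print(summary)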

Source: https://blog.csdn.net/weixin_47717959/article/details/114545242