[PyTorch Series - 29]: Neural Network Basics - 10-Class Handwritten Digit Recognition with a Shallow Fully Connected Network
Author's homepage (文火冰糖的硅基工坊): 文火冰糖(王文兵)的博客_文火冰糖的硅基工坊_CSDN博客
Article URL: https://blog.csdn.net/HiWangWenBing/article/details/120607797
Preface: The Deep Learning Model Framework
Chapter 1: Problem Domain Analysis
1.1 Step 1-1: Problem Domain Analysis
(1) Business requirement
Goal: given an arbitrary handwritten digit image, identify which digit it represents.
(2) Analysis
This task is in essence multi-class logistic classification, specifically a 10-class problem: given the feature data of an image (here, the pixel values of a single-channel image), decide which digit class it belongs to.
For this case a shallow fully connected network is sufficient; a deep neural network is not required.
1.2 Step 1-2: Modeling
(1) Single-layer neural network
Input: 784 (= 28 × 28 pixels)
Output: 10
Parameters: 784 × 10 + 10 = 7,850
(2) Two-layer neural network
Layer 1 (hidden): 784 × m + m parameters; 784 = 28 × 28
Layer 2 (output): m × 10 + 10 parameters
With m = 256 the parameter counts are:
Layer 1 (hidden): 200,960
Layer 2 (output): 2,570
Total ≈ 200K parameters
(3) Three-layer neural network
A 3-layer network with two hidden layers; the counts can be verified with the snippet after this list.
L0 (input): X: 784 = 28 × 28
L1 (hidden 1): L1 = X·W1 + B1: 784 × 256 + 256 => 200,960
L2 (hidden 2): L2 = L1·W2 + B2: 256 × 64 + 64 => 16,448
L3 (output): L3 = L2·W3 + B3: 64 × 10 + 10 => 650
Total ≈ 220K parameters
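As a quick sanity check of the counts above, a few lines of Python suffice; the helper count_fc_params is my own addition, not part of the original post:

# Hypothetical helper: total weights + biases of a stack of fully connected layers.
def count_fc_params(layer_sizes):
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

print(count_fc_params([784, 10]))           # 7850   (single layer)
print(count_fc_params([784, 256, 10]))      # 203530 (two layers, m = 256)
print(count_fc_params([784, 256, 64, 10]))  # 218058 (three layers)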
(4) Choice of activation function: ReLU for the hidden layers and Softmax at the output, as used by the models in section 2.3.
1.3 Training the Model
1.4 Validating the Model
1.5 Overall Architecture
1.6 Prerequisites for the Code Examples
# Environment setup
import numpy as np                  # NumPy array library
import math                         # math library
import matplotlib.pyplot as plt    # plotting library
import torch                       # torch core library
import torch.nn as nn              # torch neural network modules
import torch.nn.functional as F   # torch functional API
import torch.utils.data as data_utils         # DataLoader (used in step 2-2)
from torchvision import datasets as dataset   # MNIST dataset (used in step 2-1)
from torchvision import transforms, utils     # ToTensor, make_grid
print("Hello World")
print(torch.__version__)
print(torch.cuda.is_available())
Hello World
1.8.0
False
Chapter 2: Forward Model Definition
2.1 Step 2-1: Dataset Selection
(1) The MNIST dataset: http://yann.lecun.com/exdb/
Note: downloading the sample data locally first speeds up debugging; the final product can download the data remotely.
(2) Sample data and label format
(3) Source code example -- download and read the data
# 2-1 Prepare the training dataset
train_data = dataset.MNIST(root="mnist",
                           train=True,
                           transform=transforms.ToTensor(),
                           download=True)

# 2-1 Prepare the test dataset
test_data = dataset.MNIST(root="mnist",
                          train=False,
                          transform=transforms.ToTensor(),
                          download=True)
print(train_data)
print("size=", len(train_data))
print("")
print(test_data)
print("size=", len(test_data))
Dataset MNIST
    Number of datapoints: 60000
    Root location: mnist
    Split: Train
    StandardTransform
Transform: ToTensor()
size= 60000

Dataset MNIST
    Number of datapoints: 10000
    Root location: mnist
    Split: Test
    StandardTransform
Transform: ToTensor()
size= 10000
2.2 Step 2-2: Data Preprocessing
(1) Display the original image (no scale/shift applied)
# Original image, no scale/shift applied
# Fetch a single sample
print("Original image")
image, label = train_data[0]
print("torch image shape:", image.shape)
print("torch image label:", label)

print("\nSingle-channel original image: numpy")
image = image.numpy().transpose(1, 2, 0)
print("numpy image shape:", image.shape)
print("numpy image label:", label)

print("\nDisplay the original image, no scale/shift")
plt.imshow(image)
plt.show()
Original image
torch image shape: torch.Size([1, 28, 28])
torch image label: 5

Single-channel original image: numpy
numpy image shape: (28, 28, 1)
numpy image label: 5

Display the original image, no scale/shift
(2) Original image with a scale/shift applied
Note: the operation below (image * std + mean) is not random noise; it is the inverse of a mean/std normalization, which rescales pixel values from [0, 1] to [0.5, 1] and brightens the image.
# Original image with the scale/shift applied
# Fetch a single sample
print("Original image")
image, label = train_data[0]
print("torch image shape:", image.shape)
print("torch image label:", label)

print("\nSingle-channel original image: numpy")
image = image.numpy().transpose(1, 2, 0)
print("numpy image shape:", image.shape)
print("numpy image label:", label)

print("\nDisplay with scale/shift applied")
std = [0.5]
mean = [0.5]
image = image * std + mean
plt.imshow(image)
plt.show()
Original image
torch image shape: torch.Size([1, 28, 28])
torch image label: 5

Single-channel original image: numpy
numpy image shape: (28, 28, 1)
numpy image label: 5

Display with scale/shift applied
(3) Scale/shift applied, displayed as a three-channel grayscale image
# Scale/shift applied, display as a grayscale image
print("Original image")
image, label = train_data[0]
print("torch image shape:", image.shape)
print("torch image label:", label)

print("\nThree-channel grayscale image: torch")
image = utils.make_grid(image)  # make_grid replicates the single channel into 3 channels
print("torch image shape:", image.shape)
print("torch image label:", label)

print("\nThree-channel grayscale image: numpy")
image = image.numpy().transpose(1, 2, 0)
print("numpy image shape:", image.shape)
print("numpy image label:", label)

print("\nDisplay with scale/shift applied")
std = [0.5]
mean = [0.5]
image = image * std + mean
plt.imshow(image)
plt.show()
Original image
torch image shape: torch.Size([1, 28, 28])
torch image label: 5

Three-channel grayscale image: torch
torch image shape: torch.Size([3, 28, 28])
torch image label: 5

Three-channel grayscale image: numpy
numpy image shape: (28, 28, 3)
numpy image label: 5

Display with scale/shift applied
(4) No scale/shift, displayed in black and white
# No scale/shift, display in black and white
print("Original image")
image, label = train_data[0]
print("torch image shape:", image.shape)
print("torch image label:", label)

print("\nThree-channel grayscale image: torch")
image = utils.make_grid(image)
print("torch image shape:", image.shape)
print("torch image label:", label)

print("\nThree-channel grayscale image: numpy")
image = image.numpy().transpose(1, 2, 0)
print("numpy image shape:", image.shape)
print("numpy image label:", label)

print("\nDisplay in black and white, no scale/shift")
plt.imshow(image)
plt.show()
print("numpy image shape:", image.shape)
Original image
torch image shape: torch.Size([1, 28, 28])
torch image label: 5

Three-channel grayscale image: torch
torch image shape: torch.Size([3, 28, 28])
torch image label: 5

Three-channel grayscale image: numpy
numpy image shape: (28, 28, 3)
numpy image label: 5

Display in black and white, no scale/shift
(5) Batched data loading
# Batched data loading
train_loader = data_utils.DataLoader(dataset=train_data,
                                     batch_size=64,
                                     shuffle=True)

test_loader = data_utils.DataLoader(dataset=test_data,
                                    batch_size=64,
                                    shuffle=True)
print(train_loader)
print(test_loader)
print(len(train_loader), len(train_data)/64)
print(len(test_loader), len(test_data)/64)
<torch.utils.data.dataloader.DataLoader object at 0x000002461EF4A1C0>
<torch.utils.data.dataloader.DataLoader object at 0x000002461ED66610>
938 937.5
157 156.25

Note that len(train_loader) is 938 = ceil(60000 / 64): with the default drop_last=False, the final, partially filled batch is kept.
(6) Display one batch of images
# Display one batch of images
print("Fetch one batch of images")
imgs, labels = next(iter(train_loader))
print(imgs.shape)
print(labels.shape)
print(labels.size()[0])

print("\nMerge into one three-channel grayscale image")
images = utils.make_grid(imgs)
print(images.shape)
print(labels.shape)

print("\nConvert to imshow format")
images = images.numpy().transpose(1, 2, 0)
print(images.shape)
print(labels.shape)

print("\nShow the sample labels")
# Print the image labels, eight per row
for i in range(64):
    print(labels[i], end=" ")
    if (i + 1) % 8 == 0:
        print(end='\n')

print("\nShow the images")
plt.imshow(images)
plt.show()
Fetch one batch of images
torch.Size([64, 1, 28, 28])
torch.Size([64])
64

Merge into one three-channel grayscale image
torch.Size([3, 242, 242])
torch.Size([64])

Convert to imshow format
(242, 242, 3)
torch.Size([64])

Show the sample labels
tensor(0) tensor(8) tensor(3) tensor(7) tensor(5) tensor(7) tensor(9) tensor(7)
tensor(1) tensor(1) tensor(1) tensor(8) tensor(8) tensor(6) tensor(0) tensor(1)
tensor(4) tensor(8) tensor(1) tensor(3) tensor(3) tensor(6) tensor(4) tensor(4)
tensor(0) tensor(5) tensor(8) tensor(5) tensor(9) tensor(3) tensor(7) tensor(5)
tensor(2) tensor(1) tensor(0) tensor(6) tensor(8) tensor(8) tensor(9) tensor(6)
tensor(1) tensor(3) tensor(5) tensor(3) tensor(4) tensor(4) tensor(3) tensor(1)
tensor(4) tensor(1) tensor(4) tensor(4) tensor(9) tensor(8) tensor(7) tensor(2)
tensor(3) tensor(1) tensor(2) tensor(0) tensor(8) tensor(1) tensor(1) tensor(4)

Show the images
2.3 Step 2-3: Neural Network Modeling
(1) Single-layer neural network + softmax
# 2-3 Define the network model: single-layer neural network
class NetA(torch.nn.Module):
    # Define the network layers
    def __init__(self, n_feature, n_output):
        super(NetA, self).__init__()
        self.fc1 = nn.Linear(n_feature, n_output)
        self.softmax = nn.Softmax(dim=1)

    # Define the forward pass
    def forward(self, x):
        # The input arrives as torch.Size([64, 1, 28, 28]) and must be flattened to (64, 784)
        x = x.view(x.size()[0], -1)  # -1 infers the remaining dimension
        fc1 = self.fc1(x)
        out = self.softmax(fc1)
        return out
model_a = NetA(28*28, 10)
print(model_a)
print(model_a.parameters)
print(model_a.parameters())
NetA(
  (fc1): Linear(in_features=784, out_features=10, bias=True)
  (softmax): Softmax(dim=1)
)
<bound method Module.parameters of NetA(
  (fc1): Linear(in_features=784, out_features=10, bias=True)
  (softmax): Softmax(dim=1)
)>
<generator object Module.parameters at 0x000002461EDD8900>
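The print calls above show the module structure and a parameters generator, not a count. A short snippet (my own addition, not from the original post) gives the actual number of trainable parameters:

# Count NetA's trainable parameters (sketch; not in the original post).
n_params = sum(p.numel() for p in model_a.parameters() if p.requires_grad)
print(n_params)  # expected: 784 * 10 + 10 = 7850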
(2) Two-layer fully connected network
Note: NetB has no nonlinearity between fc1 and fc2, so the two linear layers compose into a single linear map; NetC below inserts the missing ReLU.
# 2-3 Define the network model: two-layer fully connected network
class NetB(torch.nn.Module):
    # Define the network layers
    def __init__(self, n_feature, n_hidden, n_output):
        super(NetB, self).__init__()
        self.fc1 = nn.Linear(n_feature, n_hidden)
        self.fc2 = nn.Linear(n_hidden, n_output)
        self.softmax = nn.Softmax(dim=1)

    # Define the forward pass
    def forward(self, x):
        # The input arrives as torch.Size([64, 1, 28, 28]) and must be flattened to (64, 784)
        x = x.view(x.size()[0], -1)  # -1 infers the remaining dimension
        fc1 = self.fc1(x)
        fc2 = self.fc2(fc1)
        out = self.softmax(fc2)
        return out
model_b = NetB(28*28, 32, 10)
print(model_b)
print(model_b.parameters)
print(model_b.parameters())
NetB(
  (fc1): Linear(in_features=784, out_features=32, bias=True)
  (fc2): Linear(in_features=32, out_features=10, bias=True)
  (softmax): Softmax(dim=1)
)
<bound method Module.parameters of NetB(
  (fc1): Linear(in_features=784, out_features=32, bias=True)
  (fc2): Linear(in_features=32, out_features=10, bias=True)
  (softmax): Softmax(dim=1)
)>
<generator object Module.parameters at 0x000002461EDD8190>
(3) Two-layer fully connected network with ReLU
# 2-3 Define the network model: two-layer fully connected network with ReLU
class NetC(torch.nn.Module):
    # Define the network layers
    def __init__(self, n_feature, n_hidden, n_output):
        super(NetC, self).__init__()
        self.fc1 = nn.Linear(n_feature, n_hidden)
        self.relu1 = torch.relu
        self.fc2 = nn.Linear(n_hidden, n_output)
        self.softmax = nn.Softmax(dim=1)

    # Define the forward pass
    def forward(self, x):
        # The input arrives as torch.Size([64, 1, 28, 28]) and must be flattened to (64, 784)
        x = x.view(x.size()[0], -1)  # -1 infers the remaining dimension
        fc1 = self.fc1(x)
        a1 = self.relu1(fc1)
        fc2 = self.fc2(a1)
        out = self.softmax(fc2)
        return out
model_c = NetC(28*28, 32, 10)
print(model_c)
print(model_c.parameters)
print(model_c.parameters())
NetC(
  (fc1): Linear(in_features=784, out_features=32, bias=True)
  (fc2): Linear(in_features=32, out_features=10, bias=True)
  (softmax): Softmax(dim=1)
)
<bound method Module.parameters of NetC(
  (fc1): Linear(in_features=784, out_features=32, bias=True)
  (fc2): Linear(in_features=32, out_features=10, bias=True)
  (softmax): Softmax(dim=1)
)>
<generator object Module.parameters at 0x000002461F0570B0>
(4) Convolutional neural network
# 2-3 Define the network model: convolutional neural network
class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, kernel_size=3, stride=1, padding=1)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1)
        self.fc1 = nn.Linear(64 * 7 * 7, 1024)  # two 2x2 poolings: 28 -> 14 -> 7, hence 7*7 rather than 14*14
        self.fc2 = nn.Linear(1024, 512)
        self.fc3 = nn.Linear(512, 10)
        # self.dp = nn.Dropout(p=0.5)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 64 * 7 * 7)  # flatten to one dimension per sample
        x = F.relu(self.fc1(x))
        # self.dp(x)
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        # x = F.log_softmax(x, dim=1)  # only needed for NLLLoss(); CrossEntropyLoss applies it internally
        return x
model_d = CNN()
print(model_d)
print(model_d.parameters)
print(model_d.parameters())
CNN(
  (conv1): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (conv2): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (fc1): Linear(in_features=3136, out_features=1024, bias=True)
  (fc2): Linear(in_features=1024, out_features=512, bias=True)
  (fc3): Linear(in_features=512, out_features=10, bias=True)
)
<bound method Module.parameters of CNN(
  (conv1): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (conv2): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (fc1): Linear(in_features=3136, out_features=1024, bias=True)
  (fc2): Linear(in_features=1024, out_features=512, bias=True)
  (fc3): Linear(in_features=512, out_features=10, bias=True)
)>
<generator object Module.parameters at 0x000002461F0570B0>
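A quick way to confirm the 64 * 7 * 7 flattening arithmetic is to push a dummy batch through the model; this check is my own addition, not part of the original post:

# Sanity-check the CNN output shape with a dummy MNIST-sized batch.
dummy = torch.randn(4, 1, 28, 28)  # batch of 4 single-channel 28x28 images
with torch.no_grad():
    out = model_d(dummy)
print(out.shape)  # expected: torch.Size([4, 10]) -- raw scores, no softmax applied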
2.4 Step 2-4: Network Output
# 2-4 Define the network's predicted output
# y_pred = model.forward(x_train)
# print(y_pred.shape)
Because the data arrives in batches from the loader, no single output is shown here.
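If you want to inspect the raw network output at this point anyway, a single-batch forward pass works; this sketch is my own addition and uses model_a from section 2.3:

# Inspect the model output on one batch (sketch; not part of the original post).
x_batch, y_batch = next(iter(train_loader))
with torch.no_grad():
    y_pred = model_a(x_batch)
print(y_pred.shape)  # expected: torch.Size([64, 10]) -- one softmax probability row per image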
Chapter 3: Backward Model Definition
3.1 Step 3-1: Define the Loss Function
Cross-entropy loss is used here: nn.CrossEntropyLoss takes raw scores of shape [N, C] and integer class labels of shape [N], applying LogSoftmax internally (nn.MSELoss would not accept the integer labels produced by the loader). Because the models above already end in a Softmax layer, the loss is effectively computed on doubly softmaxed outputs; training still works, but the loss can never fall below about 1.46 even for perfectly confident predictions, which is visible in the training log below.
# 3-1 Define the loss function:
# loss_fn = cross-entropy loss
loss_fn = nn.CrossEntropyLoss()
print(loss_fn)
CrossEntropyLoss()
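As a small illustration (my own addition) of the input conventions: CrossEntropyLoss consumes raw, unnormalized scores and integer labels, with no explicit softmax.

# CrossEntropyLoss expects raw scores [N, C] and integer labels [N].
logits = torch.tensor([[2.0, 0.5, 0.1],
                       [0.2, 1.5, 0.3]])
labels = torch.tensor([0, 1])
print(nn.CrossEntropyLoss()(logits, labels))  # scalar loss, about 0.39 for these numbers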
3.2 Step 3-2: Define the Optimizer
# 3-2 Define the optimizer
model = model_a
Learning_rate = 0.01  # learning rate

# optimizer = SGD: plain stochastic gradient descent
# parameters: the parameters to optimize
# lr: the learning rate
# optimizer = torch.optim.Adam(model.parameters(), lr = Learning_rate)
optimizer = torch.optim.SGD(model.parameters(), lr = Learning_rate, momentum=0.9)
print(optimizer)
SGD (
Parameter Group 0
    dampening: 0
    lr: 0.01
    momentum: 0.9
    nesterov: False
    weight_decay: 0
)

(1) The model selected for training is the simplest one: the single-layer, multi-input multi-output fully connected network (model_a).
(2) Momentum is enabled: momentum=0.9.
3.3 Step 3-3: Model Training
Because the training set is large, it is read batch by batch; the batch size was fixed when the loaders were created.
During training, the batch loader feeds the batches in sequence. Later batches can disturb what was already learned from earlier ones, so several passes over the data are needed; in the code below this shows up as the epochs count and the two nested loops.
# 3-3 Model training
# Number of passes over the training set
epochs = 3

loss_history = []       # loss values collected during training
accuracy_history = []   # intermediate accuracy values
accuracy_batch = 0.0

for i in range(0, epochs):
    for j, (x_train, y_train) in enumerate(train_loader):
        # (0) Reset the optimizer's gradients
        optimizer.zero_grad()

        # (1) Forward pass
        y_pred = model(x_train)

        # (2) Compute the loss
        loss = loss_fn(y_pred, y_train)

        # (3) Backward pass
        loss.backward()

        # (4) Update the parameters
        optimizer.step()

        # Record the loss for this batch
        loss_history.append(loss.item())

        # Record the accuracy for this batch
        number_batch = y_train.size()[0]  # number of images in the batch
        _, predicted = torch.max(y_pred.data, dim=1)
        correct_batch = (predicted == y_train).sum().item()  # number of correct predictions
        accuracy_batch = 100 * correct_batch / number_batch
        accuracy_history.append(accuracy_batch)

        if(j % 100 == 0):
            print('epoch {} batch {} In {} loss = {:.4f} accuracy = {:.4f}%'.format(
                i, j, len(train_data)/64, loss.item(), accuracy_batch))

print("\nTraining finished")
print("final loss =", loss.item())
print("final accu =", accuracy_batch)
epoch 0 batch 0 In 937.5 loss = 2.3058 accuracy = 6.2500%
epoch 0 batch 100 In 937.5 loss = 2.0725 accuracy = 57.8125%
epoch 0 batch 200 In 937.5 loss = 1.9046 accuracy = 68.7500%
epoch 0 batch 300 In 937.5 loss = 1.7814 accuracy = 75.0000%
epoch 0 batch 400 In 937.5 loss = 1.7647 accuracy = 78.1250%
epoch 0 batch 500 In 937.5 loss = 1.7280 accuracy = 84.3750%
epoch 0 batch 600 In 937.5 loss = 1.7284 accuracy = 79.6875%
epoch 0 batch 700 In 937.5 loss = 1.7081 accuracy = 82.8125%
epoch 0 batch 800 In 937.5 loss = 1.6773 accuracy = 85.9375%
epoch 0 batch 900 In 937.5 loss = 1.6886 accuracy = 85.9375%
epoch 1 batch 0 In 937.5 loss = 1.6671 accuracy = 82.8125%
epoch 1 batch 100 In 937.5 loss = 1.6914 accuracy = 81.2500%
epoch 1 batch 200 In 937.5 loss = 1.7119 accuracy = 78.1250%
epoch 1 batch 300 In 937.5 loss = 1.6585 accuracy = 87.5000%
epoch 1 batch 400 In 937.5 loss = 1.6913 accuracy = 81.2500%
epoch 1 batch 500 In 937.5 loss = 1.6074 accuracy = 90.6250%
epoch 1 batch 600 In 937.5 loss = 1.6062 accuracy = 90.6250%
epoch 1 batch 700 In 937.5 loss = 1.6187 accuracy = 90.6250%
epoch 1 batch 800 In 937.5 loss = 1.6249 accuracy = 90.6250%
epoch 1 batch 900 In 937.5 loss = 1.6138 accuracy = 89.0625%
epoch 2 batch 0 In 937.5 loss = 1.6205 accuracy = 90.6250%
epoch 2 batch 100 In 937.5 loss = 1.5862 accuracy = 95.3125%
epoch 2 batch 200 In 937.5 loss = 1.6430 accuracy = 84.3750%
epoch 2 batch 300 In 937.5 loss = 1.5834 accuracy = 90.6250%
epoch 2 batch 400 In 937.5 loss = 1.5672 accuracy = 95.3125%
epoch 2 batch 500 In 937.5 loss = 1.5965 accuracy = 92.1875%
epoch 2 batch 600 In 937.5 loss = 1.6430 accuracy = 87.5000%
epoch 2 batch 700 In 937.5 loss = 1.5538 accuracy = 98.4375%
epoch 2 batch 800 In 937.5 loss = 1.5700 accuracy = 92.1875%
epoch 2 batch 900 In 937.5 loss = 1.6196 accuracy = 89.0625%

Training finished
final loss = 1.6274147033691406
final accu = 87.5
3.4 Step 3-4: Visualization
(1) Forward-pass data
(2) Loss over the training iterations
# Plot the loss history
plt.grid()
plt.xlabel("iters")
plt.ylabel("loss")
plt.title("loss", fontsize = 12)
plt.plot(loss_history, "r")
plt.show()
(3) Accuracy during training
# Plot the accuracy history
plt.grid()
plt.xlabel("iters")
plt.ylabel("%")
plt.title("accuracy", fontsize = 12)
plt.plot(accuracy_history, "b+")
plt.show()
3.5 Step 3-5: Model Validation
(1) Manual single-batch check
# Manual check
index = 0

print("Fetch one batch of samples")
images, labels = next(iter(test_loader))
print(images.shape)
print(labels.shape)
print(labels)

print("\nPredict all samples in the batch")
outputs = model(images)
print(outputs.data.shape)

print("\nFor each sample, pick the most likely class")
_, predicted = torch.max(outputs, 1)
print(predicted.data.shape)
print(predicted)

print("\nCompare all predictions against the labels")
bool_results = (predicted == labels)
print(bool_results.shape)
print(bool_results)

print("\nCount correct predictions and compute the accuracy")
corrects = bool_results.sum().item()
accuracy = corrects / len(bool_results)
print("corrects=", corrects)
print("accuracy=", accuracy)

print("\nSample index =", index)
print("Label              :", labels[index].item())
print("Class probabilities:", outputs.data[index].numpy())
print("Predicted class    :", predicted.data[index].item())
print("Correct            :", bool_results.data[index].item())
Fetch one batch of samples
torch.Size([64, 1, 28, 28])
torch.Size([64])
tensor([1, 2, 4, 3, 7, 7, 4, 0, 9, 1, 2, 0, 4, 3, 5, 2, 9, 3, 6, 3, 0, 1, 5, 5,
        1, 5, 6, 8, 1, 9, 5, 0, 2, 3, 2, 4, 7, 4, 7, 9, 7, 5, 0, 2, 8, 8, 5, 9,
        3, 6, 4, 9, 9, 3, 5, 1, 1, 2, 4, 0, 7, 5, 3, 7])

Predict all samples in the batch
torch.Size([64, 10])

For each sample, pick the most likely class
torch.Size([64])
tensor([1, 2, 4, 3, 7, 4, 4, 0, 4, 1, 2, 0, 4, 3, 4, 2, 9, 3, 3, 3, 0, 1, 5, 5,
        1, 5, 6, 8, 1, 9, 5, 0, 2, 3, 2, 4, 7, 4, 7, 9, 7, 5, 0, 2, 8, 8, 9, 5,
        3, 6, 4, 9, 9, 3, 5, 1, 1, 0, 4, 0, 7, 5, 3, 7])

Compare all predictions against the labels
torch.Size([64])
tensor([ True,  True,  True,  True,  True, False,  True,  True, False,  True,
         True,  True,  True,  True, False,  True,  True,  True, False,  True,
         True,  True,  True,  True,  True,  True,  True,  True,  True,  True,
         True,  True,  True,  True,  True,  True,  True,  True,  True,  True,
         True,  True,  True,  True,  True,  True, False, False,  True,  True,
         True,  True,  True,  True,  True,  True,  True, False,  True,  True,
         True,  True,  True,  True])

Count correct predictions and compute the accuracy
corrects= 57
accuracy= 0.890625

Sample index = 0
Label              : 1
Class probabilities: [7.4082727e-06 9.9425703e-01 2.3936583e-03 5.4639770e-04 1.2618493e-05
 5.9957332e-05 3.4420833e-04 1.0612858e-04 2.1460787e-03 1.2669782e-04]
Predicted class    : 1
Correct            : True
(2) Automatic validation on the training set
# Evaluate the model: accuracy over the whole training set
correct_dataset = 0
total_dataset = 0
accuracy_dataset = 0.0

# No gradients are needed during evaluation
with torch.no_grad():
    for i, data in enumerate(train_loader):
        # Fetch one batch of samples
        images, labels = data
        # Predict all samples in the batch
        outputs = model(images)
        # For each sample, pick the most likely class
        _, predicted = torch.max(outputs.data, 1)
        # Accumulate the number of samples
        total_dataset += labels.size()[0]
        # Compare predictions against labels
        bool_results = (predicted == labels)
        # Accumulate the number of correct predictions
        correct_dataset += bool_results.sum().item()
        # Running accuracy
        accuracy_dataset = 100 * correct_dataset / total_dataset
        if(i % 100 == 0):
            print('batch {} In {} accuracy = {:.4f}'.format(i, len(train_data)/64, accuracy_dataset))

print('Final result with the model on the dataset, accuracy =', accuracy_dataset)
batch 0 In 937.5 accuracy = 90.6250
batch 100 In 937.5 accuracy = 90.0371
batch 200 In 937.5 accuracy = 89.8554
batch 300 In 937.5 accuracy = 89.4985
batch 400 In 937.5 accuracy = 89.2846
batch 500 In 937.5 accuracy = 89.2340
batch 600 In 937.5 accuracy = 89.1691
batch 700 In 937.5 accuracy = 89.1049
batch 800 In 937.5 accuracy = 89.2069
batch 900 In 937.5 accuracy = 89.2602
Final result with the model on the dataset, accuracy = 89.22
(3) Validation on the test set
# Evaluate the model: accuracy over the whole test set
correct_dataset = 0
total_dataset = 0
accuracy_dataset = 0.0

# No gradients are needed during evaluation
with torch.no_grad():
    for i, data in enumerate(test_loader):
        # Fetch one batch of samples
        images, labels = data
        # Predict all samples in the batch
        outputs = model(images)
        # For each sample, pick the most likely class
        _, predicted = torch.max(outputs.data, 1)
        # Accumulate the number of samples
        total_dataset += labels.size()[0]
        # Compare predictions against labels
        bool_results = (predicted == labels)
        # Accumulate the number of correct predictions
        correct_dataset += bool_results.sum().item()
        # Running accuracy
        accuracy_dataset = 100 * correct_dataset / total_dataset
        if(i % 100 == 0):
            print('batch {} In {} accuracy = {:.4f}'.format(i, len(test_data)/64, accuracy_dataset))

print('Final result with the model on the dataset, accuracy =', accuracy_dataset)
batch 0 In 156.25 accuracy = 89.0625
batch 100 In 156.25 accuracy = 90.5631
Final result with the model on the dataset, accuracy = 89.93
Notes:
As the results above show, this simple fully connected network reaches roughly 90% accuracy.
For higher accuracy, a convolutional neural network is needed; a sketch of the swap follows.
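The CNN defined in section 2.3 (model_d) can be dropped into the same training loop; the following sketch of the required changes is my own addition, not part of the original post:

# Swap in the CNN from section 2.3 (sketch; not in the original post).
model = model_d                      # the CNN ends in raw scores (no Softmax layer),
loss_fn = nn.CrossEntropyLoss()      # so CrossEntropyLoss applies cleanly here
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
# ...then rerun the training loop from step 3-3 unchanged.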
Chapter 4: Model Deployment
4.1 Step 4-1: Saving the Model
# Save the whole model
torch.save(model, "models/mnist_net.pkl")

# Save only the parameters
torch.save(model.state_dict(), "models/mnist_params.pkl")
4.2 Step 4-2: Loading the Model
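A minimal loading sketch, assuming the file names used in step 4-1:

# Load the whole model (assumes the paths from step 4-1).
model = torch.load("models/mnist_net.pkl")

# Or: rebuild the architecture and load only the parameters.
model = NetA(28*28, 10)
model.load_state_dict(torch.load("models/mnist_params.pkl"))
model.eval()  # switch to inference mode before validation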