Testing AlexNet on the CIFAR-10 Dataset in Paddle
Author: Internet
Summary: An AlexNet network was built with the Paddle framework and tested on the premium (至尊) GPU environment of AI Studio for its classification performance on Cifar10. With basic training, the classification accuracy on the test set never exceeded 60%, which still leaves a gap to the roughly 80% reported in some articles.
Keywords: Cifar10, AlexNet
§01 AlexNet
1.1 Background
The post 2021年人工神经网络第四次作业要求 gives the fourth assignment of the NN course. For the Cifar10 dataset, BP and LeNet structures were tried in 2021年人工神经网络第四次作业 - 第三题Cifar10, but the accuracy on the test set never broke 30%, and that test accuracy saturated very quickly.
Simple changes there (modifying the network structure, adjusting the learning rate, and adding Dropout layers) had little effect on the results.
Following the AlexNet implementation described in the reference post 深度学习识别CIFAR10:pytorch训练LeNet、AlexNet、VGG19实现及比较(二), the network is built and tested below on the Paddle platform.
1.2 Original Code
Based on the structure of AlexNet and the characteristics of The CIFAR-10 dataset images (32×32×3), the original post made minor adjustments to the AlexNet network structure:
The AlexNet network structure:
▲ Figure 1.2.1 The AlexNet network structure
For CIFAR10 the images are 32×32, far smaller than the 227×227 input of the original AlexNet, so the network structure and parameters need minor adjustments (a size check follows the list):
- Convolution layer 1: kernel size 7×7, stride 2, padding 2
- The last max-pool layer is removed
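These sizes follow the standard convolution arithmetic: output = floor((n - k + 2p)/s) + 1. As a quick check of the 32 → 15 → 7 → 3 chain, a small hypothetical helper (not from the original post) traces the feature-map size through the layers:

def conv_out(n, k, s, p):
    # floor((n - k + 2*p) / s) + 1 for one spatial dimension
    return (n - k + 2 * p) // s + 1

n = conv_out(32, k=7, s=2, p=2)   # conv1:    15
n = conv_out(n,  k=3, s=2, p=0)   # max-pool:  7
n = conv_out(n,  k=5, s=1, p=2)   # conv2:     7
n = conv_out(n,  k=3, s=2, p=0)   # max-pool:  3
n = conv_out(n,  k=3, s=1, p=1)   # conv3/4/5 keep 3x3
print(n)                          # 3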
1.2.1 Network Code
The network definition code is as follows:
import torch.nn as nn

class AlexNet(nn.Module):
    def __init__(self):
        super(AlexNet, self).__init__()

        self.cnn = nn.Sequential(
            # Conv layer 1: 3 input channels, 96 kernels of size 7*7, stride 2, padding 2
            # Output size: (32-7+2*2)/2 + 1 = 15, i.e. 15*15
            # After 3*3 max-pool with stride 2: (15-3)/2 + 1 = 7, i.e. 7*7
            nn.Conv2d(3, 96, 7, 2, 2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(3, 2, 0),

            # Conv layer 2: 96 input channels, 256 kernels of size 5*5, stride 1, padding 2
            # Output size: (7-5+2*2)/1 + 1 = 7, i.e. 7*7
            # After 3*3 max-pool with stride 2: (7-3)/2 + 1 = 3, i.e. 3*3
            nn.Conv2d(96, 256, 5, 1, 2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(3, 2, 0),

            # Conv layer 3: 256 input channels, 384 kernels of size 3*3, stride 1, padding 1
            # Output size: (3-3+2*1)/1 + 1 = 3, i.e. 3*3
            nn.Conv2d(256, 384, 3, 1, 1),
            nn.ReLU(inplace=True),

            # Conv layer 4: 384 input channels, 384 kernels of size 3*3, stride 1, padding 1
            # Output size stays 3*3
            nn.Conv2d(384, 384, 3, 1, 1),
            nn.ReLU(inplace=True),

            # Conv layer 5: 384 input channels, 256 kernels of size 3*3, stride 1, padding 1
            # Output size stays 3*3
            nn.Conv2d(384, 256, 3, 1, 1),
            nn.ReLU(inplace=True)
        )

        self.fc = nn.Sequential(
            # 256 feature maps, each 3*3
            nn.Linear(256*3*3, 1024),
            nn.ReLU(),
            nn.Linear(1024, 512),
            nn.ReLU(),
            nn.Linear(512, 10)
        )

    def forward(self, x):
        x = self.cnn(x)

        # x.size()[0]: batch size
        x = x.view(x.size()[0], -1)
        x = self.fc(x)

        return x
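As a quick sanity check (a sketch of my own, assuming PyTorch is installed; not part of the original post), a random CIFAR-sized batch can be pushed through the network to confirm the output shape:

import torch

net = AlexNet()
x = torch.randn(4, 3, 32, 32)   # a fake batch of 4 CIFAR-10 images
print(net(x).shape)             # expected: torch.Size([4, 10])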
1.3 Paddle Model Implementation
AlexNet is built with the neural-network layers provided in Paddle.
1.3.1 Building the AlexNet Network
(1) Network code
import paddle

class alexnet(paddle.nn.Layer):
    def __init__(self):
        super(alexnet, self).__init__()
        # Five convolution layers, mirroring the PyTorch version above
        self.conv1 = paddle.nn.Conv2D(in_channels=3, out_channels=96, kernel_size=7, stride=2, padding=2)
        self.conv2 = paddle.nn.Conv2D(in_channels=96, out_channels=256, kernel_size=5, stride=1, padding=2)
        self.conv3 = paddle.nn.Conv2D(in_channels=256, out_channels=384, kernel_size=3, stride=1, padding=1)
        self.conv4 = paddle.nn.Conv2D(in_channels=384, out_channels=384, kernel_size=3, stride=1, padding=1)
        self.conv5 = paddle.nn.Conv2D(in_channels=384, out_channels=256, kernel_size=3, stride=1, padding=1)
        self.mp1 = paddle.nn.MaxPool2D(kernel_size=3, stride=2)
        self.mp2 = paddle.nn.MaxPool2D(kernel_size=3, stride=2)
        # Fully connected head: 256 feature maps of 3*3 -> 1024 -> 512 -> 10 classes
        self.L1 = paddle.nn.Linear(in_features=256*3*3, out_features=1024)
        self.L2 = paddle.nn.Linear(in_features=1024, out_features=512)
        self.L3 = paddle.nn.Linear(in_features=512, out_features=10)

    def forward(self, x):
        x = self.conv1(x)
        x = paddle.nn.functional.relu(x)
        x = self.mp1(x)
        x = self.conv2(x)
        x = paddle.nn.functional.relu(x)
        x = self.mp2(x)
        x = self.conv3(x)
        x = paddle.nn.functional.relu(x)
        x = self.conv4(x)
        x = paddle.nn.functional.relu(x)
        x = self.conv5(x)
        x = paddle.nn.functional.relu(x)
        x = paddle.flatten(x, start_axis=1, stop_axis=-1)   # [batch, 256*3*3]
        x = self.L1(x)
        x = paddle.nn.functional.relu(x)
        x = self.L2(x)
        x = paddle.nn.functional.relu(x)
        x = self.L3(x)
        return x
(2) Network structure
Use paddle.summary to verify that the network structure is correct.
model = alexnet()
paddle.summary(model, (100,3,32,32))
---------------------------------------------------------------------------
Layer (type) Input Shape Output Shape Param #
===========================================================================
Conv2D-16 [[100, 3, 32, 32]] [100, 96, 15, 15] 14,208
MaxPool2D-7 [[100, 96, 15, 15]] [100, 96, 7, 7] 0
Conv2D-17 [[100, 96, 7, 7]] [100, 256, 7, 7] 614,656
MaxPool2D-8 [[100, 256, 7, 7]] [100, 256, 3, 3] 0
Conv2D-18 [[100, 256, 3, 3]] [100, 384, 3, 3] 885,120
Conv2D-19 [[100, 384, 3, 3]] [100, 384, 3, 3] 1,327,488
Conv2D-20 [[100, 384, 3, 3]] [100, 256, 3, 3] 884,992
Linear-10 [[100, 2304]] [100, 1024] 2,360,320
Linear-11 [[100, 1024]] [100, 512] 524,800
Linear-12 [[100, 512]] [100, 10] 5,130
===========================================================================
Total params: 6,616,714
Trainable params: 6,616,714
Non-trainable params: 0
---------------------------------------------------------------------------
Input size (MB): 1.17
Forward/backward pass size (MB): 39.61
Params size (MB): 25.24
Estimated Total Size (MB): 66.02
---------------------------------------------------------------------------
{'total_params': 6616714, 'trainable_params': 6616714}
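The parameter counts in the table can be verified by hand; for example, Conv2D-16 has 96 kernels spanning 3 input channels of size 7×7, plus one bias each. A short check (my own arithmetic, not from the original post):

# conv params = out_channels * (in_channels * k * k + 1 bias)
print(96  * (3   * 7 * 7 + 1))   # 14,208    (conv1)
print(256 * (96  * 5 * 5 + 1))   # 614,656   (conv2)
print(384 * (256 * 3 * 3 + 1))   # 885,120   (conv3)
# linear params = in_features * out_features + out_features biases
print(2304 * 1024 + 1024)        # 2,360,320 (L1)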
In network design, structural errors tend to appear at the junction between the convolutional layers and the fully connected layers: after Flatten, the data dimensions fail to match. A practical approach is to first remove the fully connected layers that follow Flatten, confirm from the paddle.summary output that the convolutional output is 256×3×3, and only then attach the fully connected layers. If an error still appears, each layer can be checked individually. A minimal sketch of this check follows.
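The sketch below (my own illustration, not from the original post) keeps only the convolutional part of the alexnet class above, so paddle.summary reports the flattened feature size directly:

import paddle

class alexnet_conv_only(paddle.nn.Layer):
    # Convolutional part only, used to confirm the flattened size
    def __init__(self):
        super(alexnet_conv_only, self).__init__()
        self.conv1 = paddle.nn.Conv2D(3, 96, 7, stride=2, padding=2)
        self.mp1 = paddle.nn.MaxPool2D(3, stride=2)
        self.conv2 = paddle.nn.Conv2D(96, 256, 5, stride=1, padding=2)
        self.mp2 = paddle.nn.MaxPool2D(3, stride=2)
        self.conv3 = paddle.nn.Conv2D(256, 384, 3, stride=1, padding=1)
        self.conv4 = paddle.nn.Conv2D(384, 384, 3, stride=1, padding=1)
        self.conv5 = paddle.nn.Conv2D(384, 256, 3, stride=1, padding=1)

    def forward(self, x):
        x = self.mp1(paddle.nn.functional.relu(self.conv1(x)))
        x = self.mp2(paddle.nn.functional.relu(self.conv2(x)))
        x = paddle.nn.functional.relu(self.conv3(x))
        x = paddle.nn.functional.relu(self.conv4(x))
        x = paddle.nn.functional.relu(self.conv5(x))
        return paddle.flatten(x, start_axis=1)   # expected: [batch, 2304]

paddle.summary(alexnet_conv_only(), (1, 3, 32, 32))

If the last Output Shape reads [1, 2304], the in_features=256*3*3 setting of the first Linear layer is consistent.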
1.4 Training AlexNet on Cifar10
1.4.1 Loading the Data
import sys, os, math, time
import matplotlib.pyplot as plt
from numpy import *
import paddle
from paddle.vision.transforms import Normalize

normalize = Normalize(mean=[0.5,0.5,0.5], std=[0.5,0.5,0.5], data_format='HWC')

from paddle.vision.datasets import Cifar10
cifar10_train = Cifar10(mode='train', transform=normalize)
cifar10_test = Cifar10(mode='test', transform=normalize)

# Note: reading cifar10_train.data directly returns the raw (image, label) pairs;
# depending on the Paddle version this may bypass the Normalize transform above.
train_dataset = [cifar10_train.data[id][0].reshape(3,32,32) for id in range(len(cifar10_train.data))]
train_labels = [cifar10_train.data[id][1] for id in range(len(cifar10_train.data))]

class Dataset(paddle.io.Dataset):
    def __init__(self, num_samples):
        super(Dataset, self).__init__()
        self.num_samples = num_samples

    def __getitem__(self, index):
        data = train_dataset[index]
        label = train_labels[index]
        return paddle.to_tensor(data, dtype='float32'), paddle.to_tensor(label, dtype='int64')

    def __len__(self):
        return self.num_samples

_dataset = Dataset(len(cifar10_train.data))
train_loader = paddle.io.DataLoader(_dataset, batch_size=100, shuffle=True)
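To confirm that the loader yields batches of the expected shape before training starts, a single batch can be inspected (a quick check of my own, not in the original post):

images, labels = next(iter(train_loader()))
print(images.shape)   # expected: [100, 3, 32, 32]
print(labels.shape)   # label batch, e.g. [100] or [100, 1] depending on the Paddle version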
1.4.2 Building the Network
class alexnet(paddle.nn.Layer):
    def __init__(self):
        super(alexnet, self).__init__()
        self.conv1 = paddle.nn.Conv2D(in_channels=3, out_channels=96, kernel_size=7, stride=2, padding=2)
        self.conv2 = paddle.nn.Conv2D(in_channels=96, out_channels=256, kernel_size=5, stride=1, padding=2)
        self.conv3 = paddle.nn.Conv2D(in_channels=256, out_channels=384, kernel_size=3, stride=1, padding=1)
        self.conv4 = paddle.nn.Conv2D(in_channels=384, out_channels=384, kernel_size=3, stride=1, padding=1)
        self.conv5 = paddle.nn.Conv2D(in_channels=384, out_channels=256, kernel_size=3, stride=1, padding=1)
        self.mp1 = paddle.nn.MaxPool2D(kernel_size=3, stride=2)
        self.mp2 = paddle.nn.MaxPool2D(kernel_size=3, stride=2)
        self.L1 = paddle.nn.Linear(in_features=256*3*3, out_features=1024)
        self.L2 = paddle.nn.Linear(in_features=1024, out_features=512)
        self.L3 = paddle.nn.Linear(in_features=512, out_features=10)

    def forward(self, x):
        x = self.conv1(x)
        x = paddle.nn.functional.relu(x)
        x = self.mp1(x)
        x = self.conv2(x)
        x = paddle.nn.functional.relu(x)
        x = self.mp2(x)
        x = self.conv3(x)
        x = paddle.nn.functional.relu(x)
        x = self.conv4(x)
        x = paddle.nn.functional.relu(x)
        x = self.conv5(x)
        x = paddle.nn.functional.relu(x)
        x = paddle.flatten(x, start_axis=1, stop_axis=-1)
        x = self.L1(x)
        x = paddle.nn.functional.relu(x)
        x = self.L2(x)
        x = paddle.nn.functional.relu(x)
        x = self.L3(x)
        return x
model = alexnet()
1.4.3 Training the Network
# Build the full test set as a single tensor for evaluation during training
test_dataset = [cifar10_test.data[id][0].reshape(3,32,32) for id in range(len(cifar10_test.data))]
test_label = [cifar10_test.data[id][1] for id in range(len(cifar10_test.data))]
test_input = paddle.to_tensor(test_dataset, dtype='float32')
test_l = paddle.to_tensor(array(test_label)[:,newaxis])    # shape [10000, 1]

optimizer = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())

def train(model):
    model.train()
    epochs = 2
    accdim = []
    lossdim = []
    testaccdim = []
    for epoch in range(epochs):
        for batch, data in enumerate(train_loader()):
            out = model(data[0])
            loss = paddle.nn.functional.cross_entropy(out, data[1])
            acc = paddle.metric.accuracy(out, data[1])
            loss.backward()
            optimizer.step()
            optimizer.clear_grad()
            accdim.append(acc.numpy())
            lossdim.append(loss.numpy())

            # Evaluate on the whole test set after every batch
            # (slow; a lighter alternative is sketched after this code)
            predict = model(test_input)
            testacc = paddle.metric.accuracy(predict, test_l)
            testaccdim.append(testacc.numpy())

            if batch % 10 == 0 and batch > 0:
                print('Epoch:{}, Batch:{}, Loss:{}, Train Acc:{}, Test Acc:{}'.format(
                    epoch, batch, loss.numpy(), acc.numpy(), testacc.numpy()))

    # Plot the accuracy curves collected during training
    plt.figure(figsize=(10, 6))
    plt.plot(accdim, label='Accuracy')
    plt.plot(testaccdim, label='Test')
    plt.xlabel('Step')
    plt.ylabel('Acc')
    plt.grid(True)
    plt.legend(loc='upper left')
    plt.tight_layout()

train(model)
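One caveat in the loop above: the full 10,000-image test set is pushed through the model after every single batch, which dominates the run time. A sketch of a lighter alternative (my own, not the original author's code) evaluates in chunks, and could be called once per epoch instead:

def evaluate(model, inputs, labels, batch_size=500):
    # Evaluate test accuracy in chunks to limit memory use
    model.eval()
    accs = []
    with paddle.no_grad():
        for i in range(0, inputs.shape[0], batch_size):
            out = model(inputs[i:i + batch_size])
            accs.append(paddle.metric.accuracy(out, labels[i:i + batch_size]).numpy())
    model.train()
    return float(mean(accs))   # mean comes from `from numpy import *` above

# Example usage, once per epoch:
# print('Test accuracy: {:.4f}'.format(evaluate(model, test_input, test_l)))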
1.4.4 Training Results
Training parameters:
- BatchSize: 100
- LearningRate: 0.001
If the BatchSize is too small, training becomes slow.
▲ Figure 1.4.1 Training and test accuracy curves
Training parameters:
- BatchSize: 5000
- LearningRate: 0.0005
▲ Figure 1.4.2 Training and test accuracy curves
▲ Figure 1.4.3 Training and test accuracy curves
※ Summary ※
An AlexNet network was built with the Paddle framework and tested on the premium (至尊) GPU environment of AI Studio for its classification performance on Cifar10. With basic training, the classification accuracy on the test set never exceeded 60%, which still leaves a gap to the roughly 80% reported in some articles.
■ Related links:
- 2021年人工神经网络第四次作业要求
- 2021年人工神经网络第四次作业 - 第三题Cifar10
- 深度学习识别CIFAR10:pytorch训练LeNet、AlexNet、VGG19实现及比较(二)
- AlexNet
- The CIFAR-10 dataset
Source: https://blog.csdn.net/zhuoqingjoking97298/article/details/122039418