MLP (Multilayer Perceptron) Neural Network with PyTorch, using SGD or Adam (including data preprocessing)
The goal is to train an MLP (multilayer perceptron) neural network so that it can predict, from the 60 sonar features, whether an object is metal or rock. Since the network only uses simple linear (affine) layers, the model's fit is not especially good.
This is my first blog post, so I'm using this project to get some practice (O(∩_∩)O).
The related files can be downloaded from GitHub. This example is written in Python (Jupyter Notebook).
First, import the required packages:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import torch
%matplotlib inline

plt.rcParams['figure.figsize'] = (4, 4)
plt.rcParams['figure.dpi'] = 150
plt.rcParams['lines.linewidth'] = 3
sns.set()
# initial setup
The functionality of each package can be looked up on its official site. Next comes the data preprocessing.
A conventional CSV file usually carries feature (column) names, for example the 'tips' dataset below.
data = sns.load_dataset("tips")
data.head(5)
The result is as follows:
The data we want to train on, however, does not carry feature names such as total_bill, tip, or sex.
So we pass header=None to read_csv, so that pandas does not treat the first data row as a header and instead assigns default integer column labels.
origin_data = pd.read_csv('sonar.csv', header=None)
origin_data.head(5)
The dataset is now loaded; the result looks like this:
|   | 0.0200 | 0.0371 | 0.0428 | 0.0207 | 0.0954 | 0.0986 | 0.1539 | 0.1601 | 0.3109 | 0.2111 | ... | 0.0027 | 0.0065 | 0.0159 | 0.0072 | 0.0167 | 0.0180 | 0.0084 | 0.0090 | 0.0032 | R |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0.0453 | 0.0523 | 0.0843 | 0.0689 | 0.1183 | 0.2583 | 0.2156 | 0.3481 | 0.3337 | 0.2872 | ... | 0.0084 | 0.0089 | 0.0048 | 0.0094 | 0.0191 | 0.0140 | 0.0049 | 0.0052 | 0.0044 | R |
| 1 | 0.0262 | 0.0582 | 0.1099 | 0.1083 | 0.0974 | 0.2280 | 0.2431 | 0.3771 | 0.5598 | 0.6194 | ... | 0.0232 | 0.0166 | 0.0095 | 0.0180 | 0.0244 | 0.0316 | 0.0164 | 0.0095 | 0.0078 | R |
| 2 | 0.0100 | 0.0171 | 0.0623 | 0.0205 | 0.0205 | 0.0368 | 0.1098 | 0.1276 | 0.0598 | 0.1264 | ... | 0.0121 | 0.0036 | 0.0150 | 0.0085 | 0.0073 | 0.0050 | 0.0044 | 0.0040 | 0.0117 | R |
| 3 | 0.0762 | 0.0666 | 0.0481 | 0.0394 | 0.0590 | 0.0649 | 0.1209 | 0.2467 | 0.3564 | 0.4459 | ... | 0.0031 | 0.0054 | 0.0105 | 0.0110 | 0.0015 | 0.0072 | 0.0048 | 0.0107 | 0.0094 | R |
| 4 | 0.0286 | 0.0453 | 0.0277 | 0.0174 | 0.0384 | 0.0990 | 0.1201 | 0.1833 | 0.2105 | 0.3039 | ... | 0.0045 | 0.0014 | 0.0038 | 0.0013 | 0.0089 | 0.0057 | 0.0027 | 0.0051 | 0.0062 | R |
5 rows × 61 columns
The dataset has 61 columns, and the last column is the value we want to predict. Looking at that last column, the values are characters, which cannot be fed directly into the model during training, so we extract the 61st column and replace the character R with 1 and M with 0 (i.e., 1 stands for R and 0 stands for M) so the data meets the training requirements.
The code is as follows:
y_data = origin_data.iloc[:, 60]  # extract the column to predict and inspect it
y_data.head(5)
y_data.shape
Calling y_data.shape shows how many samples there are, so that a loop can be used to replace R and M. The dataset contains 208 samples. The code is as follows:
Y = y_data.copy()  # assigning into a DataFrame slice raises a warning, so work on a copy
for i in range(208):
    if y_data[i] == 'R':
        Y[i] = 1
    else:
        Y[i] = 0
# map R to 1 and M to 0
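As a side note, the same mapping can be done without an explicit loop. A minimal sketch (not the original post's code), assuming y_data is the pandas Series extracted above:

Y = y_data.map({'R': 1, 'M': 0})  # vectorized alternative to the loop above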
Next, take the first 60 columns as the x data used to predict Y. After extracting them, standardize the x data (earlier, skipping standardization made the training loss curve oscillate wildly). The code is as follows:
from sklearn.preprocessing import scale
x_data = origin_data.iloc[:, :-1]
x_data = scale(x_data)
Then split x_data and y_data into training and test sets at a 4:1 ratio (test_size=0.2), and wrap the train and test sets into Datasets. To keep the load on the GPU small, mini-batch training is used: a DataLoader automatically splits the dataset into mini-batches of 10 samples.
y_data = Y
x_data = np.array(x_data).reshape(208, 60)
y_data = np.array(y_data).reshape(208,)
y_data = y_data.tolist()  # convert back to lists to make splitting easier
x_data = x_data.tolist()
# split into train and test sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(x_data, y_data, test_size=0.2)
from torch.utils.data import TensorDataset, DataLoader
train_dataset = TensorDataset(torch.Tensor(X_train),
                              torch.LongTensor(y_train))

test_dataset = TensorDataset(torch.Tensor(X_test),
                             torch.LongTensor(y_test))  # wrap into datasets
TRAIN_SIZE = np.array(X_train).shape[0]
BATCH_SIZE = 10
NUM_EPOCH = 200
iters_per_epoch = TRAIN_SIZE // BATCH_SIZE
# mini-batch training: batch size 10, 200 epochs, i.e. 200 * int(166/10) = 3200 iterations
train_loader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=BATCH_SIZE, shuffle=True)
# the DataLoaders automatically split the samples into batches
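A small caveat: scale() above standardizes using statistics computed over the whole dataset, so a little information about the test set leaks into training. If you want to avoid that, one option (a sketch of an alternative ordering, not what this post does) is to split the raw features first and fit a StandardScaler on the training split only:

from sklearn.preprocessing import StandardScaler

# hypothetical reordering: split the unscaled features first, then
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)  # learn mean/std from the training data only
X_test = scaler.transform(X_test)        # apply the training statistics to the test data
# after this, build the TensorDatasets and DataLoaders exactly as above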
The MLP model class is defined as follows (nn.Sequential is used to build the model; there are three hidden layers with ReLU activations in between, and a final Softmax activation whose outputs lie in [0, 1]; the model is fairly simple).
from torch import nn  # build the network with nn.Sequential
class MLP(nn.Module):

    def __init__(self, in_dim, hid_dim1, hid_dim2, hid_dim3, out_dim):
        super(MLP, self).__init__()
        self.layers = nn.Sequential(
            nn.Linear(in_dim, hid_dim1),
            nn.ReLU(),
            nn.Linear(hid_dim1, hid_dim2),
            nn.ReLU(),
            nn.Linear(hid_dim2, hid_dim3),
            nn.ReLU(),
            nn.Linear(hid_dim3, out_dim),
            nn.Softmax(dim=1))

    def forward(self, x):
        y = self.layers(x)
        return y
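One thing worth knowing about this design: nn.CrossEntropyLoss (used below) already applies log-softmax internally, so feeding it probabilities from a final nn.Softmax layer effectively applies softmax twice, which tends to flatten the gradients. The model still trains, but a common variant is to let the network output raw logits. A minimal sketch of that variant (the class name MLPLogits is my own, not from the original post):

class MLPLogits(nn.Module):
    # same architecture, but without the final Softmax; it outputs raw logits,
    # which is what nn.CrossEntropyLoss expects
    def __init__(self, in_dim, hid_dim1, hid_dim2, hid_dim3, out_dim):
        super(MLPLogits, self).__init__()
        self.layers = nn.Sequential(
            nn.Linear(in_dim, hid_dim1),
            nn.ReLU(),
            nn.Linear(hid_dim1, hid_dim2),
            nn.ReLU(),
            nn.Linear(hid_dim2, hid_dim3),
            nn.ReLU(),
            nn.Linear(hid_dim3, out_dim))

    def forward(self, x):
        return self.layers(x)

If you need probabilities at prediction time with this variant, apply torch.softmax(outputs, dim=1) explicitly.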
Create the network and train it with SGD as the optimizer. The code is as follows:
net = MLP(in_dim=60, hid_dim1=300, hid_dim2=180, hid_dim3=60, out_dim=10)
criterion = nn.CrossEntropyLoss()  # cross-entropy loss
from torch import optim
optimizer = optim.SGD(params=net.parameters(), lr=0.1)  # SGD optimizer with learning rate 0.1
optimizer.zero_grad()  # gradients must be cleared before each update; clear them here just in case
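Note that out_dim=10 gives the network ten output classes even though the sonar labels only take the values 0 and 1. CrossEntropyLoss accepts this (the extra outputs are simply never the target), but out_dim=2 would be the more natural choice for this binary problem, e.g. (a hypothetical two-class variant of the setup above):

net = MLP(in_dim=60, hid_dim1=300, hid_dim2=180, hid_dim3=60, out_dim=2)

The training loop itself is below.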
# SGD training loop
train_loss_history = []
test_acc_history = []

for epoch in range(NUM_EPOCH):

    for i, data in enumerate(train_loader):

        inputs, labels = data

        optimizer.zero_grad()
        outputs = net(inputs)

        loss = criterion(outputs, labels)
        loss.backward()

        optimizer.step()

        train_loss = loss.tolist()
        train_loss_history.append(train_loss)  # record the loss for plotting later

        if (i+1) % iters_per_epoch == 0:
            print("[{}, {}] Loss: {}".format(epoch+1, i+1, train_loss))

    # evaluate on the test set once per epoch
    total = 0
    correct = 0
    for data in test_loader:
        inputs, labels = data
        outputs = net(inputs)
        _, preds = torch.max(outputs.data, 1)

        total += labels.size(0)
        correct += (preds == labels).sum()

    print("Accuracy: {:.2f}%".format(100.0 * correct / total))
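test_acc_history is declared above but never filled. If you also want to plot an accuracy curve, one way (a sketch following the same loop structure, not part of the original code) is to append the per-epoch accuracy right after it is printed:

# inside the epoch loop, after computing `correct` and `total`:
test_acc = 100.0 * correct.item() / total  # convert the tensor count to a Python number
test_acc_history.append(test_acc)
# after training: plt.plot(test_acc_history)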
All the loss values were recorded in the train_loss_history list; now use matplotlib.pyplot to draw the loss curve.
import matplotlib.pyplot as plt
plt.plot(train_loss_history)
The output is as follows:
[<matplotlib.lines.Line2D at 0x25be01fcdf0>]

If the Adam optimizer is used instead, the code and results are as follows:
from torch import optim
net = MLP(in_dim=60, hid_dim1=540, hid_dim2=180, hid_dim3=30, out_dim=10)  # adjusted hidden layer sizes
optimizer = optim.Adam(params=net.parameters(), lr=0.001)  # switch to the Adam optimizer
criterion = nn.CrossEntropyLoss()

train_loader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=BATCH_SIZE, shuffle=True)
train_loss_history = []
test_acc_history = []
# Adam training loop
for epoch in range(NUM_EPOCH):

    for i, data in enumerate(train_loader):

        inputs, labels = data

        optimizer.zero_grad()
        outputs = net(inputs)

        loss = criterion(outputs, labels)
        loss.backward()

        optimizer.step()

        train_loss = loss.tolist()
        train_loss_history.append(train_loss)  # record the loss for plotting later

        if (i+1) % iters_per_epoch == 0:
            print("[{}, {}] Loss: {}".format(epoch+1, i+1, train_loss))

    # evaluate on the test set once per epoch
    total = 0
    correct = 0
    for data in test_loader:
        inputs, labels = data
        outputs = net(inputs)
        _, preds = torch.max(outputs.data, 1)

        total += labels.size(0)
        correct += (preds == labels).sum()

    print("Accuracy: {:.2f}%".format(100.0 * correct / total))
import matplotlib.pyplot as plt
plt.plot(train_loss_history)
[<matplotlib.lines.Line2D at 0x25be08b49d0>]

Once the model is trained, you can feed the whole dataset through it and compute a confusion matrix to inspect its performance, then adjust the model to your own needs for better results. Only the code for plotting the Adam model's matrix is given here; work out the intermediate steps by following the code above.
# plot the confusion matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_data, total_down)
sns.heatmap(cm, annot=True, fmt="d", cmap="Blues", annot_kws={"size": 20}, cbar=False)
plt.ylabel('True')
plt.xlabel('Predicted')
sns.set(font_scale=2)
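Here total_down is assumed to hold the model's predictions for all 208 samples (the intermediate step left to the reader). One possible way to produce it, a sketch assuming x_data still holds the 208 standardized feature rows and net is the trained Adam model:

# hypothetical reconstruction of total_down: predict every sample with the trained model
with torch.no_grad():
    all_outputs = net(torch.Tensor(x_data))        # forward pass over all 208 samples
    _, all_preds = torch.max(all_outputs.data, 1)  # predicted class for each sample
total_down = all_preds.tolist()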
The matrix is as follows:
A simple calculation then gives the Accuracy, Precision, Sensitivity, and Specificity metrics:
TP = 77
FN = 34
FP = 45
TN = 52
Accuracy = (TP+TN)/(TP+TN+FP+FN)
Precision = TP/(TP+FP)
Sensitivity = TP/(TP+FN)
Specificity = TN/(TN+FP)
print("Accuracy is:{} Precision is:{} Sensitivity is:{} Specificity is:{}".format(Accuracy, Precision, Sensitivity, Specificity))
# compute the evaluation metrics
The output is as follows:
Accuracy is:0.6201923076923077 Precision is:0.6311475409836066 Sensitivity is:0.6936936936936937 Specificity is:0.5360824742268041
This model was written in IPython (Jupyter Notebook); if you use PyCharm or another IDE, remove the notebook-specific lines (such as %matplotlib inline) yourself.