Implementing Handwritten Digit Recognition with a Neural Network in MATLAB
The programming exercise for week 4 of Andrew Ng's Machine Learning course is to implement a neural network in MATLAB that recognizes the digit shown in an image. The full set of digits to be recognized is shown below:
Each digit is a 20×20-pixel image. If every pixel is treated as one input unit, that gives 400 inputs; adding the extra unit the network needs for the bias term brings the total to 401 input units. The training data X provided with the exercise is a 5000×400 matrix, one 400-pixel image per row.
The exercise calls for a hidden layer with 25 units. The hidden layer likewise gets an extra bias unit, so it feeds 26 inputs into the output layer.
The final output is a 10-dimensional vector giving the network's score for each of the digits 0-9. Because there is no index 0 in MATLAB, the exercise maps the digit 0 to label 10, while digits 1-9 keep labels 1-9. The label with the largest score is the recognition result.
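Converting a predicted label back to the digit it stands for is a single mod call, which is exactly what the main script further below does when it prints each prediction:

pred = 10;              % label 10 stands for the digit 0
digit = mod(pred, 10)   % digit = 0

pred = 7;               % labels 1-9 map to themselves
digit = mod(pred, 10)   % digit = 7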
The structure of the neural network is shown below:
As the figure shows, besides the input units the network has two parameter matrices, Theta1 and Theta2.
Theta1 holds the weights of the connections from the input layer to the hidden layer and is a 25×401 matrix; Theta2 holds the weights from the hidden layer to the output layer and is a 10×26 matrix.
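The exercise ships these weights pre-trained in ex3weights.mat (the main script loads them in Part 2), so the stated dimensions can be checked directly:

load('ex3weights.mat');   % defines Theta1 and Theta2

size(Theta1)              % 25 x 401: hidden units by (400 pixels + bias)
size(Theta2)              % 10 x 26:  output labels by (25 hidden units + bias)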
To keep the values produced at each step on a standard scale, the output of every layer is passed through the sigmoid function, which squashes any real number into the interval (0, 1).
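The predict function shown further below calls a sigmoid helper. The exercise supplies it as sigmoid.m; an element-wise version is simply:

function g = sigmoid(z)
%SIGMOID Compute the logistic function element-wise
%   g = SIGMOID(z) maps every element of z into the interval (0, 1).
g = 1.0 ./ (1.0 + exp(-z));
end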
Once the network structure is fixed, the weights Theta1 and Theta2 are obtained by training on the training data (in this exercise they are supplied pre-trained in ex3weights.mat), and after that the network can be used for prediction.
The main MATLAB script is as follows:
%% Machine Learning Online Class - Exercise 3 | Part 2: Neural Networks

%  Instructions
%  ------------
%
%  This file contains code that helps you get started on the
%  linear exercise. You will need to complete the following functions
%  in this exercise:
%
%     lrCostFunction.m (logistic regression cost function)
%     oneVsAll.m
%     predictOneVsAll.m
%     predict.m
%
%  For this exercise, you will not need to change any code in this file,
%  or any other files other than those mentioned above.
%

%% Initialization
clear ; close all; clc

%% Setup the parameters you will use for this exercise
input_layer_size  = 400;  % 20x20 Input Images of Digits
hidden_layer_size = 25;   % 25 hidden units
num_labels = 10;          % 10 labels, from 1 to 10
                          % (note that we have mapped "0" to label 10)

%% =========== Part 1: Loading and Visualizing Data =============
%  We start the exercise by first loading and visualizing the dataset.
%  You will be working with a dataset that contains handwritten digits.
%

% Load Training Data
fprintf('Loading and Visualizing Data ...\n')

load('ex3data1.mat');
m = size(X, 1);

% Randomly select 100 data points to display
sel = randperm(size(X, 1));
sel = sel(1:100);

displayData(X(sel, :));

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ================ Part 2: Loading Parameters ================
%  In this part of the exercise, we load some pre-initialized
%  neural network parameters.

fprintf('\nLoading Saved Neural Network Parameters ...\n')

% Load the weights into variables Theta1 and Theta2
load('ex3weights.mat');

%% ================= Part 3: Implement Predict =================
%  After training the neural network, we would like to use it to predict
%  the labels. You will now implement the "predict" function to use the
%  neural network to predict the labels of the training set. This lets
%  you compute the training set accuracy.

pred = predict(Theta1, Theta2, X);

fprintf('\nTraining Set Accuracy: %f\n', mean(double(pred == y)) * 100);

fprintf('Program paused. Press enter to continue.\n');
pause;

%  To give you an idea of the network's output, you can also run
%  through the examples one at a time to see what it is predicting.

%  Randomly permute examples
rp = randperm(m);

for i = 1:m
    % Display
    fprintf('\nDisplaying Example Image\n');
    displayData(X(rp(i), :));

    pred = predict(Theta1, Theta2, X(rp(i), :));
    fprintf('\nNeural Network Prediction: %d (digit %d)\n', pred, mod(pred, 10));

    % Pause
    fprintf('Program paused. Press enter to continue.\n');
    pause;
end
The prediction function is as follows:
function p = predict(Theta1, Theta2, X)
%PREDICT Predict the label of an input given a trained neural network
%   p = PREDICT(Theta1, Theta2, X) outputs the predicted label of X given the
%   trained weights of a neural network (Theta1, Theta2)

% Useful values
m = size(X, 1);
num_labels = size(Theta2, 1);

% You need to return the following variables correctly
p = zeros(size(X, 1), 1);

% ====================== YOUR CODE HERE ======================
% Instructions: Complete the following code to make predictions using
%               your learned neural network. You should set p to a
%               vector containing labels between 1 and num_labels.
%
% Hint: The max function might come in useful. In particular, the max
%       function can also return the index of the max element, for more
%       information see 'help max'. If your examples are in rows, then, you
%       can use max(A, [], 2) to obtain the max for each row.
%

% Add the bias column and propagate through the hidden layer
X = [ones(m, 1) X];
predictZ = X * Theta1';
predictZ = sigmoid(predictZ);

% Add the hidden layer's bias unit and propagate through the output layer
predictZ = [ones(m, 1) predictZ];
predictZZ = predictZ * Theta2';
predictY = sigmoid(predictZZ);

% The predicted label for each row is the index of its largest output
[mp, imp] = max(predictY, [], 2);
p = imp;

% =========================================================================

end
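Once the main script has loaded ex3data1.mat and ex3weights.mat, a quick sanity check (a hypothetical snippet, not part of the exercise files) is to run predict on a subsample of X and compare against the labels in y:

% Evaluate the network on every 5th training example
idx = 1:5:m;                                   % subsample of the 5000 rows of X
sub_pred = predict(Theta1, Theta2, X(idx, :));
fprintf('Subsample accuracy: %.2f%%\n', mean(double(sub_pred == y(idx))) * 100);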
A screenshot of the final run is shown below:
Finally, a comparison with the one-vs-all logistic regression approach from the first part of the exercise:
Logistic regression needs a separate classifier for each digit, so ten classifiers are trained here, each running for 50 iterations. The result is:
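For reference, a minimal sketch of that one-vs-all training loop, assuming the lrCostFunction and fmincg helpers supplied with the exercise (MaxIter matches the 50 iterations mentioned above; the function and variable names follow the exercise templates):

function all_theta = oneVsAll(X, y, num_labels, lambda)
%ONEVSALL Train num_labels logistic regression classifiers, one per label.
%   Row c of all_theta holds the parameters of the classifier for label c.
m = size(X, 1);
n = size(X, 2);
all_theta = zeros(num_labels, n + 1);
X = [ones(m, 1) X];                          % add the bias column

options = optimset('GradObj', 'on', 'MaxIter', 50);
for c = 1:num_labels
    initial_theta = zeros(n + 1, 1);
    % Label c is the positive class, every other label is negative
    all_theta(c, :) = fmincg(@(t)(lrCostFunction(t, X, (y == c), lambda)), ...
                             initial_theta, options);
end

end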
Clearly the neural network is more accurate than the logistic regression approach, and it is also much more concise, with noticeably fewer lines of prediction code. This is exactly where the neural network's advantage lies.
Source: https://blog.csdn.net/qq_43685315/article/details/87884601