Andrew Ng Machine Learning: Code and Notes Summary -- ex2 (1. Regularized Logistic Regression)
1. Visualization
As in the first part of the assignment, start by loading the data and plotting the two classes:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

path = 'code/ex2-logistic regression/ex2data2.txt'
data2 = pd.read_csv(path, header=None, names=['Test 1', 'Test 2', 'Accepted'])
data2.head()

# split the samples by label so each class gets its own marker
positive = data2[data2['Accepted'].isin([1])]
negative = data2[data2['Accepted'].isin([0])]

fig, ax = plt.subplots(figsize=(12, 8))
ax.scatter(positive['Test 1'], positive['Test 2'], s=50, c='b', marker='o', label='Accepted')
ax.scatter(negative['Test 1'], negative['Test 2'], s=50, c='r', marker='x', label='Rejected')
ax.legend()
ax.set_xlabel('Test 1 Score')
ax.set_ylabel('Test 2 Score')
plt.show()
2. Feature Mapping
degree = 5
x1 = data2['Test 1']
x2 = data2['Test 2']
data2.insert(3, 'Ones', 1)
for i in range(1, degree):
    for j in range(0, i):
        data2['F' + str(i) + str(j)] = np.power(x1, i-j) * np.power(x2, j)
data2.drop('Test 1', axis=1, inplace=True)
data2.drop('Test 2', axis=1, inplace=True)
data2.head()
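The two nested loops create the polynomial terms x1^(i-j) * x2^j for 1 ≤ i < 5 and 0 ≤ j < i, i.e. ten mapped features (the loops stop one power short of degree = 5), alongside the bias column 'Ones' and the 'Accepted' label. A quick sanity check of the resulting frame (the column layout follows directly from the code above):

print(data2.shape[1])        # 12 columns: 'Accepted', 'Ones', and F10 ... F43
print(list(data2.columns))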
3. Regularized Cost Function
If we have a large number of features, the hypothesis we learn may fit the training set very well but generalize poorly to new data, i.e. it overfits. Thinking in terms of polynomials: the higher the power of x, the better the fit on the training set, but the worse the predictions on new data may become. There are two common remedies:
1. Discard the features that do not help us predict correctly.
2. Regularization: keep all the features, but shrink the magnitude of the parameters.
When we have many features and do not know which ones to penalize, we penalize all of them, which yields the regularized cost function.
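Written out explicitly (this is the formula the paragraph above refers to, reconstructed here from the costreg implementation below; note that the bias term θ₀ is not penalized):

$$J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\left[-y^{(i)}\log\left(h_\theta(x^{(i)})\right) - \left(1-y^{(i)}\right)\log\left(1-h_\theta(x^{(i)})\right)\right] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^{2}$$

where $h_\theta(x) = g(\theta^{T}x)$ and $g$ is the sigmoid function.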
The effect of regularization on the fit is illustrated in the accompanying figure (not reproduced here).
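costreg and gradientreg below rely on the sigmoid helper defined in the first part of this exercise; a minimal sketch of what is assumed:

def sigmoid(z):
    # logistic function g(z) = 1 / (1 + e^(-z))
    return 1 / (1 + np.exp(-z))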
def costreg(theta, X, y, learning_rate):
    # despite its name, learning_rate is the regularization strength (lambda)
    theta = np.matrix(theta)
    X = np.matrix(X)
    y = np.matrix(y)
    z = np.dot(X, theta.T)
    # unregularized cross-entropy cost
    cost = (1 / len(X)) * np.sum(np.multiply(-y, np.log(sigmoid(z))) - np.multiply((1 - y), np.log(1 - sigmoid(z))))
    # L2 penalty; theta[:, 0] (the bias term) is excluded
    reg = (learning_rate / (2 * len(X))) * np.sum(np.power(theta[:, 1:theta.shape[1]], 2))
    return cost + reg
4. Regularized Gradient
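For reference, the gradient that gradientreg below implements (again with the bias term left out of the penalty):

$$\frac{\partial J(\theta)}{\partial \theta_0} = \frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_0^{(i)}$$

$$\frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)} + \frac{\lambda}{m}\theta_j \qquad (j \ge 1)$$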
def gradientreg(theta, X, y, learning_rate):
    theta = np.matrix(theta)
    X = np.matrix(X)
    y = np.matrix(y)
    parameters = int(theta.flatten().shape[1])
    grads = np.zeros(parameters)
    # prediction error h_theta(x) - y for every sample
    error = sigmoid(np.dot(X, theta.T)) - y
    for i in range(parameters):
        term = np.multiply(error, X[:, i])
        if i == 0:
            # the bias term is not regularized
            grads[i] = np.sum(term) / len(X)
        else:
            grads[i] = np.sum(term) / len(X) + (learning_rate / len(X)) * theta[0, i]
    return grads
cols2 = data2.shape[1]
X2 = data2.iloc[:, 1:cols2]
y2 = data2.iloc[:, 0:1]
X2 = np.array(X2.values)
y2 = np.array(y2.values)
theta2 = np.zeros(11)
learning_rate = 1

print(costreg(theta2, X2, y2, learning_rate))
print(gradientreg(theta2, X2, y2, learning_rate))
Optimize the parameters with scipy:
import scipy.optimize as opt

result2 = opt.fmin_tnc(func=costreg, x0=theta2, fprime=gradientreg, args=(X2, y2, learning_rate))
result2
(array([ 1.22702519e-04, 7.19894617e-05, -3.74156201e-04,
-1.44256427e-04, 2.93165088e-05, -5.64160786e-05,
-1.02826485e-04, -2.83150432e-04, 6.47297947e-07,
-1.99697568e-04, -1.68479583e-05]), 96, 1)
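predict is also carried over from the unregularized part of the exercise; a minimal sketch consistent with how it is used here:

def predict(theta, X):
    # probability of the positive class for every row of X, thresholded at 0.5
    probability = sigmoid(np.dot(X, theta.T))
    return [1 if p >= 0.5 else 0 for p in probability]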
theta_min = np.matrix(result2[0])
predictions = predict(theta_min, X2)
correct = [1 if ((a == 1 and b == 1) or (a == 0 and b == 0)) else 0 for (a, b) in zip(predictions, y2)]
accuracy = sum(map(int, correct)) / len(correct)   # fraction of correctly classified samples
print('accuracy = {0:.0%}'.format(accuracy))
accuracy = 77%
5. Regularized Logistic Regression with scikit-learn
from sklearn import linear_model   # scikit-learn's linear models, including logistic regression

model = linear_model.LogisticRegression(penalty='l2', C=1.0)
model.fit(X2, y2.ravel())
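To get a comparable accuracy number for this model, scikit-learn's score method reports the mean training accuracy (the exact value depends on the library version and its default solver settings):

model.score(X2, y2.ravel())   # fraction of training samples classified correctly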
This accuracy is quite a bit different from the one we just obtained by hand, but keep in mind that it was computed with the default parameters. We would probably need to do some parameter tuning to reach the same accuracy as our earlier result.