
python – Linear regression implementation always performs worse than sklearn


I implemented linear regression with gradient descent in Python. To see how well it performs, I compared it against scikit-learn's LinearRegression() class. For some reason, sklearn always outperforms my program on average MSE (I am using the Boston Housing dataset for testing). I know I am not currently doing gradient checking to test for convergence, but I am allowing many iterations and have set the learning rate low enough that it should converge. Are there any obvious bugs in my implementation of the learning algorithm? Here is my code:

import numpy as np
from sklearn.linear_model import LinearRegression

def getWeights(x):
    lenWeights = len(x[1,:]);
    weights = np.random.rand(lenWeights)
    bias = np.random.random();
    return weights,bias

def train(x,y,weights,bias,maxIter):
    converged = False;
    iterations = 1;
    m = len(x);
    alpha = 0.001;
    while not converged:
            for i in range(len(x)):
                # Dot product of weights and training sample
                hypothesis = np.dot(x[i,:], weights) + bias;
                # Calculate gradient
                error = hypothesis - y[i];
                grad = (alpha * 1/m) * ( error * x[i,:] );
                # Update weights and bias
                weights = weights - grad;
                bias = bias - alpha * error;
                iterations = iterations + 1;

                if iterations > maxIter:
                    converged = True;
                    break

    return weights, bias

def predict(x, weights, bias):
    return np.dot(x,weights) + bias

if __name__ == '__main__':

    data = np.loadtxt('housing.txt');
    x = data[:,:-1];
    y = data[:,-1];
    for i in range(len(x[1,:])):
        x[:,i] = ( (x[:,i] - np.min(x[:,i])) / (np.max(x[:,i]) - np.min(x[:,i])) );

    initialWeights,initialBias = getWeights(x);
    weights,bias = train(x,y,initialWeights,initialBias,55000);
    pred = predict(x, weights,bias);
    MSE = np.mean(abs(pred - y));

    print("This Program MSE: " + str(MSE))

    sklearnModel = LinearRegression();
    sklearnModel = sklearnModel.fit(x,y);
    sklearnModel = sklearnModel.predict(x);

    skMSE = np.mean(abs(sklearnModel - y));

    print("Sklearn MSE: " + str(skMSE))

Solution:

First, make sure you are computing the correct objective-function value. The linear-regression objective should be 0.5 * np.mean((pred - y) ** 2), not np.mean(abs(pred - y)).
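As a quick illustration (toy arrays, not the housing data), the two quantities differ even on the same predictions:

```python
import numpy as np

# Toy predictions and targets illustrating the two metrics
pred = np.array([2.0, 3.0, 5.0])
y = np.array([1.0, 3.0, 4.0])

mae = np.mean(np.abs(pred - y))       # what the question's code reports as "MSE"
mse = 0.5 * np.mean((pred - y) ** 2)  # the least-squares objective from the answer
```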

You are actually running a stochastic gradient descent (SGD) algorithm (running a gradient iteration on individual examples), which should be distinguished from "gradient descent".

SGD is a good learning method, but a poor optimization method: it can take many iterations to converge to a minimizer of the empirical error (http://leon.bottou.org/publications/pdf/nips-2007.pdf).
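For contrast, a single full (batch) gradient-descent step averages the gradient over all m samples before updating; a minimal numpy sketch (the function name and signature are illustrative, not from the question's code):

```python
import numpy as np

def batch_gd_step(x, y, weights, bias, alpha):
    """One full gradient-descent step: average the gradient over all m samples."""
    m = len(x)
    errors = x.dot(weights) + bias - y  # residuals for every sample, shape (m,)
    grad_w = x.T.dot(errors) / m        # average gradient w.r.t. the weights
    grad_b = errors.mean()              # average gradient w.r.t. the bias
    return weights - alpha * grad_w, bias - alpha * grad_b
```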

For SGD to converge, the learning rate must be restricted. Typically, the learning rate is set to a base rate divided by the number of iterations, something like alpha / (iteration + 1), using the variables in your code.

You also include a multiple of 1/m in your gradient, which is typically not used in SGD updates.
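A sketch of the question's training loop with both adjustments applied, a decaying step size and no 1/m factor in the per-sample update (the base rate of 0.1 is an arbitrary choice):

```python
import numpy as np

def sgd_train(x, y, weights, bias, max_iter, base_alpha=0.1):
    """SGD with a decaying step size alpha_t = base_alpha / (t + 1)."""
    t = 0
    while t < max_iter:
        for i in range(len(x)):
            alpha = base_alpha / (t + 1)  # decaying learning rate
            error = np.dot(x[i], weights) + bias - y[i]
            weights = weights - alpha * error * x[i]  # no 1/m factor
            bias = bias - alpha * error
            t += 1
            if t >= max_iter:
                break
    return weights, bias
```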

To test your SGD implementation, rather than evaluating the error on the dataset you trained on, split the dataset into a training set and a test set, and evaluate the error on this test set after training with both methods. The train/test split will let you estimate the performance of your algorithm as a learning algorithm (estimating expected error) rather than as an optimization algorithm (minimizing empirical error).
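A minimal sketch of such an evaluation using scikit-learn's train_test_split, here on synthetic data standing in for the housing set:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic linear data with a little noise, in place of housing.txt
rng = np.random.RandomState(0)
x = rng.rand(200, 3)
y = x.dot([1.0, -2.0, 0.5]) + 0.01 * rng.randn(200)

# Hold out 30% of the samples for testing
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.3, random_state=0)

# Fit on the training set only, then measure error on the held-out test set
model = LinearRegression().fit(x_train, y_train)
test_mse = 0.5 * np.mean((model.predict(x_test) - y_test) ** 2)
```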

Tags: python, machine-learning, scikit-learn, linear-regression
Source: https://codeday.me/bug/20190612/1226365.html