
Implementing Linear Regression with NumPy


Steps of supervised learning (a minimal sketch of the loop follows the list):

  1. Compute the loss function with randomly initialized parameters.
  2. From the current parameters and the loss, obtain gradient information, and use it to update the model's parameter values.
  3. Repeat the first two steps until the loss reaches its best value, yielding the optimal parameters.
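
A minimal sketch of that loop in Python (the names `train`, `grad_fn`, and `params` are my own illustration, not from the post):

def train(params, grad_fn, lr=0.01, num_steps=1000):
    # Steps 1-3 as a loop: evaluate the gradient at the current
    # parameters, step against it, and repeat.
    for _ in range(num_steps):
        grads = grad_fn(params)        # gradient of the loss at params
        params = params - lr * grads   # gradient-descent update
    return params

# e.g. train(4.0, lambda w: 2 * w) drives w toward 0, minimizing L(w) = w**2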

Loss

Common loss functions (figure omitted from the original post)

Losses commonly used for classification (figure omitted from the original post)
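
As a stand-in for those figures, a minimal NumPy sketch of two standard losses, MSE (the regression loss used below) and cross-entropy (typical for classification); the function names are my own:

import numpy as np

def mse(y_true, y_pred):
    # Mean squared error: average squared residual (regression).
    return np.mean((y_true - y_pred) ** 2)

def cross_entropy(y_true, probs, eps=1e-12):
    # Cross-entropy between one-hot labels and predicted probabilities.
    return -np.mean(np.sum(y_true * np.log(probs + eps), axis=1))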

Gradient Descent

Gradient-descent variants (a momentum sketch follows the list):

SGD
Momentum
NAG
Adagrad
Adadelta
RMSprop
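
A minimal sketch of one of these variants, SGD with momentum (`v` is the velocity and `beta` its decay factor; this illustration is mine, not from the post):

def momentum_step(w, grad, v, lr=0.01, beta=0.9):
    # Keep a running average of past gradients and step along it;
    # this damps oscillation compared with plain SGD.
    v = beta * v - lr * grad
    return w + v, v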

Learning Rate

How much of the error is used in each parameter update is controlled by a hyperparameter called the learning rate, also known as the step size. The output error is backpropagated to the network's parameters so that the model fits the sample outputs. This is essentially an optimization process that moves step by step toward the optimal solution.
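
A tiny worked example on the quadratic loss L(w) = w**2 (my own illustration) shows how the learning rate scales each step:

w = 4.0
grad = 2 * w            # dL/dw = 2w, so grad = 8.0
print(w - 0.1 * grad)   # lr = 0.1 -> 3.2, a cautious step toward the minimum at 0
print(w - 1.1 * grad)   # lr = 1.1 -> -4.8, overshoots the minimum and diverges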

NumPy

Linear Regression

Step 1: Compute the Loss

# y = w*x + b
def compute_error_for_line_given_points(b, w, points):
    totalError = 0
    for i in range(0, len(points)):
        x = points[i, 0]  # x value of the i-th point
        y = points[i, 1]  # y value of the i-th point
        # accumulate the squared error inside the loop
        totalError += (y - (w * x + b)) ** 2
    # average the loss over all points
    return totalError / float(len(points))
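
A quick sanity check on a toy dataset (my own example): for points lying exactly on y = 2x, the line w = 2, b = 0 gives zero error.

pts = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])
print(compute_error_for_line_given_points(0, 2, pts))  # 0.0
print(compute_error_for_line_given_points(0, 1, pts))  # (1 + 4 + 9) / 3 ≈ 4.67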

Step 2: Compute the Gradient and Update

def step_gradient(b_current, w_current, points, learningRate):
    b_gradient = 0
    w_gradient = 0
    N = float(len(points))
    for i in range(len(points)):
        x = points[i, 0]
        y = points[i, 1]
        # grad_b = (2/N) * sum(w*x + b - y)
        b_gradient += (2/N) * ((w_current * x + b_current) - y)
        # grad_w = (2/N) * sum(x * (w*x + b - y))
        w_gradient += (2/N) * x * ((w_current * x + b_current) - y)
    # step b and w against their gradients to get b' and w'
    new_b = b_current - (learningRate * b_gradient)
    new_w = w_current - (learningRate * w_gradient)
    return [new_b, new_w]
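
These updates come from differentiating the mean-squared-error loss L(b, w) = (1/N) * Σ (w*x_i + b - y_i)² with respect to each parameter:

∂L/∂b = (2/N) * Σ (w*x_i + b - y_i)
∂L/∂w = (2/N) * Σ x_i * (w*x_i + b - y_i)

which is exactly what the loop above accumulates, one point at a time.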

Step 3: Set w = w' and Loop

def gradient_descent_runner(points, starting_b, starting_w, learning_rate, num_iterations):
    b = starting_b
    w = starting_w
    # update for num_iterations steps
    for i in range(num_iterations):
        b, w = step_gradient(b, w, np.array(points), learning_rate)
    return [b, w]

Run

def run():
    points = np.genfromtxt("data.csv", delimiter=",")
    learning_rate = 0.00001
    initial_b = 0
    initial_w = 0
    num_iterations = 100000
    print("Starting gradient descent at b = {0}, w = {1}, error = {2}"
          .format(initial_b, initial_w,
                  compute_error_for_line_given_points(initial_b, initial_w, points))
          )
    print("Running...")
    [b, w] = gradient_descent_runner(points, initial_b, initial_w, learning_rate, num_iterations)
    print("After {0} iterations b = {1}, w = {2}, error = {3}"
          .format(num_iterations, b, w,
                  compute_error_for_line_given_points(b, w, points))
          )

if __name__ == '__main__':
    run()

Full Code

Environment

import numpy as np

# y = wx + b
def compute_error_for_line_given_points(b, w, points):
    totalError = 0
    for i in range(0, len(points)):
        x = points[i, 0]
        y = points[i, 1]
        # compute the mean squared error
        totalError += (y - (w * x + b)) ** 2
    # average loss for each point
    return totalError / float(len(points))



def step_gradient(b_current, w_current, points, learningRate):
    b_gradient = 0
    w_gradient = 0
    N = float(len(points))
    for i in range(0, len(points)):
        x = points[i, 0]
        y = points[i, 1]
        # grad_b = 2(wx+b-y)
        b_gradient += (2/N) * ((w_current * x + b_current) - y)
        # grad_w = 2(wx+b-y)*x
        w_gradient += (2/N) * x * ((w_current * x + b_current) - y)
    # update w'
    new_b = b_current - (learningRate * b_gradient)
    new_w = w_current - (learningRate * w_gradient)
    return [new_b, new_w]

def gradient_descent_runner(points, starting_b, starting_w, learning_rate, num_iterations):
    b = starting_b
    w = starting_w
    # update for several times
    for i in range(num_iterations):
        b, w = step_gradient(b, w, np.array(points), learning_rate)
    return [b, w]


def run():
    points = np.genfromtxt("data.csv", delimiter=",")
    learning_rate = 0.00001
    initial_b = 0 # initial y-intercept guess
    initial_w = 0 # initial slope guess
    num_iterations = 100000
    print("Starting gradient descent at b = {0}, w = {1}, error = {2}"
          .format(initial_b, initial_w,
                  compute_error_for_line_given_points(initial_b, initial_w, points))
          )
    print("Running...")
    [b, w] = gradient_descent_runner(points, initial_b, initial_w, learning_rate, num_iterations)
    print("After {0} iterations b = {1}, w = {2}, error = {3}".
          format(num_iterations, b, w,
                 compute_error_for_line_given_points(b, w, points))
          )

if __name__ == '__main__':
    run()
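
For completeness, a vectorized version of the gradient step (my own rewrite, not part of the original post); it computes the same gradients without the Python loop over points:

def step_gradient_vectorized(b, w, points, lr):
    # Same math as step_gradient, on whole arrays at once.
    x, y = points[:, 0], points[:, 1]
    residual = w * x + b - y              # (w*x + b) - y for every point
    b_grad = 2 * residual.mean()          # (2/N) * sum(residual)
    w_grad = 2 * (x * residual).mean()    # (2/N) * sum(x * residual)
    return [b - lr * b_grad, w - lr * w_grad]

Dropping it into gradient_descent_runner in place of step_gradient should give identical results with far less Python overhead.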

Source: https://blog.csdn.net/luis_jie/article/details/113369316