
[ 机器学习 - 吴恩达] Linear regression with one variable | 2-4 Gradient descent


Have some function \(J(\theta_0,\theta_1)\)
Want \(\min\limits_{\theta_0,\theta_1} J(\theta_0,\theta_1)\)
Outline:

- Start with some \(\theta_0,\theta_1\)
- Keep changing \(\theta_0,\theta_1\) to reduce \(J(\theta_0,\theta_1)\) until we hopefully end up at a minimum

Gradient descent algorithm

repeat until convergence {
\(\theta_j := \theta_j - \alpha\frac{\partial}{\partial \theta_j}J(\theta_0,\theta_1)\)  \((\text{for } j = 0 \text{ and } j = 1)\)
}
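As a concrete illustration, here is a minimal NumPy sketch of this update rule, assuming the squared-error cost \(J(\theta_0,\theta_1) = \frac{1}{2m}\sum_{i=1}^{m}(\theta_0 + \theta_1 x^{(i)} - y^{(i)})^2\) used elsewhere in the course; the names `gradient_descent`, `alpha`, and `num_iters` are illustrative, not from the lecture.

```python
import numpy as np

def gradient_descent(x, y, alpha=0.01, num_iters=1000):
    """Batch gradient descent for one-variable linear regression.

    Assumes the squared-error cost
    J(theta0, theta1) = 1/(2m) * sum((theta0 + theta1*x - y)^2),
    whose partial derivatives give the gradients below.
    """
    m = len(y)
    theta0, theta1 = 0.0, 0.0  # start with some theta0, theta1
    for _ in range(num_iters):
        h = theta0 + theta1 * x  # predictions h_theta(x)
        # partial derivatives of J w.r.t. theta0 and theta1
        grad0 = (1 / m) * np.sum(h - y)
        grad1 = (1 / m) * np.sum((h - y) * x)
        # simultaneous update: compute both temps before assigning
        temp0 = theta0 - alpha * grad0
        temp1 = theta1 - alpha * grad1
        theta0, theta1 = temp0, temp1
    return theta0, theta1
```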

Correct: Simultaneous update
temp0 \(:= \theta_0 - \alpha\frac{\partial}{\partial \theta_0}J(\theta_0,\theta_1)\)
temp1 \(:= \theta_1 - \alpha\frac{\partial}{\partial \theta_1}J(\theta_0,\theta_1)\)
\(\theta_0 :=\) temp0
\(\theta_1 :=\) temp1

Incorrect:
temp0 \(:= \theta_0 - \alpha\frac{\partial}{\partial \theta_0}J(\theta_0,\theta_1)\)
\(\theta_0 :=\) temp0
temp1 \(:= \theta_1 - \alpha\frac{\partial}{\partial \theta_1}J(\theta_0,\theta_1)\)
\(\theta_1 :=\) temp1
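To see why the order matters, here is a sketch of the incorrect variant under the same assumed squared-error cost: because \(\theta_0\) is overwritten before \(\theta_1\)'s gradient is computed, that gradient is evaluated at the new \(\theta_0\), so the step is no longer a gradient-descent step on \(J\) at the current point.

```python
import numpy as np

def non_simultaneous_step(theta0, theta1, x, y, alpha):
    """Incorrect: theta0 is updated in place before theta1's gradient.

    The theta1 gradient below sees the *new* theta0, so this step does
    not follow the gradient of J at the original (theta0, theta1).
    """
    m = len(y)
    theta0 = theta0 - alpha * (1 / m) * np.sum(theta0 + theta1 * x - y)
    # the residual here uses the already-updated theta0 -- this is the bug
    theta1 = theta1 - alpha * (1 / m) * np.sum((theta0 + theta1 * x - y) * x)
    return theta0, theta1
```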

Source: https://www.cnblogs.com/DeepRS/p/15586360.html