GDA: x is the observed feature variable, P(y) is the prior probability.
![P(y|x)\propto P(x|y)P(y)](https://latex.codecogs.com/gif.latex?P%28y%7Cx%29%5Cpropto%20P%28x%7Cy%29P%28y%29)
Gaussian distribution formula:
![f(x)=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}}](https://latex.codecogs.com/gif.latex?f%28x%29%3D%5Cfrac%7B1%7D%7B%5Csqrt%7B2%5Cpi%7D%5Csigma%7De%5E%7B-%5Cfrac%7B%28x-%5Cmu%29%5E%7B2%7D%7D%7B2%5Csigma%5E%7B2%7D%7D%7D)
The parameters (prior, means, variances) are chosen by maximum likelihood over the dataset:
![(\pi^{*},\mu^{*},\sigma^{*})=\arg\max P(D)](https://latex.codecogs.com/gif.latex?%28%5Cpi%5E%7B*%7D%2C%5Cmu%5E%7B*%7D%2C%5Csigma%5E%7B*%7D%29%3D%5Carg%5Cmax%20P%28D%29)
![D=\{(x_{n},y_{n})\}_{n=1}^{N}](https://latex.codecogs.com/gif.latex?D%3D%5C%7B%28x_%7Bn%7D%2Cy_%7Bn%7D%29%5C%7D_%7Bn%3D1%7D%5E%7BN%7D)
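The fit-then-predict pipeline above can be sketched in NumPy. This is a minimal illustration assuming 1-D features and binary labels; the function names (`fit_gda`, `predict_proba`) are hypothetical, not from the post.

```python
import numpy as np

def fit_gda(X, y):
    """Maximum-likelihood GDA parameters: prior phi = P(y=1),
    and a class-conditional Gaussian N(mu_k, sigma_k^2) per label k."""
    phi = y.mean()                                 # prior P(y=1)
    mu = {k: X[y == k].mean() for k in (0, 1)}     # per-class mean
    sigma = {k: X[y == k].std() for k in (0, 1)}   # per-class std dev
    return phi, mu, sigma

def gaussian_pdf(x, mu, sigma):
    # f(x) = 1/(sqrt(2*pi)*sigma) * exp(-(x-mu)^2 / (2*sigma^2))
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

def predict_proba(x, phi, mu, sigma):
    # Bayes' rule: P(y=1|x) is proportional to P(x|y=1) * P(y=1)
    p1 = gaussian_pdf(x, mu[1], sigma[1]) * phi
    p0 = gaussian_pdf(x, mu[0], sigma[0]) * (1 - phi)
    return p1 / (p0 + p1)
```

Because the model is generative, prediction is just the two class likelihoods weighted by the prior and renormalized.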
Newton's method update:
![w^{t+1}\leftarrow w^{t}-(H^{t})^{-1}\nabla E(w^{t})](https://latex.codecogs.com/gif.latex?w%5E%7Bt+1%7D%5Cleftarrow%20w%5E%7Bt%7D-%28H%5E%7Bt%7D%29%5E%7B-1%7D%5Cnabla%20E%28w%5E%7Bt%7D%29)
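The Newton update (inverse Hessian times gradient) can be shown on a toy 1-D objective; E(w) = w⁴ here is a hypothetical example, not from the post, chosen so the gradient and Hessian are trivial to write down.

```python
def newton_minimize(grad, hess, w, steps=20):
    """Newton's method: w ← w − H⁻¹ ∇E(w); in 1-D, H⁻¹ is just 1/H."""
    for _ in range(steps):
        w = w - grad(w) / hess(w)
    return w

# Minimize E(w) = w^4: gradient 4w^3, Hessian 12w^2; each step shrinks w by 2/3.
w_star = newton_minimize(grad=lambda w: 4 * w ** 3,
                         hess=lambda w: 12 * w ** 2,
                         w=3.0)
```

For logistic regression the same update applies with H = Xᵀdiag(σ(1−σ))X, which is why Newton's method (IRLS) converges in few iterations there.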
Gradient descent update (logistic regression):
![w^{t+1}\leftarrow w^{t}-\eta\sum_{n}(\sigma(w^{t}x_{n})-y_{n})x_{n}](https://latex.codecogs.com/gif.latex?w%5E%7Bt+1%7D%5Cleftarrow%20w%5E%7Bt%7D-%5Ceta%5Csum_%7Bn%7D%28%5Csigma%28w%5E%7Bt%7Dx_%7Bn%7D%29-y_%7Bn%7D%29x_%7Bn%7D)
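This update translates directly into a vectorized loop; a minimal sketch assuming the bias is folded into X as a column of ones (the learning rate and step count are illustrative).

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def logistic_gd(X, y, eta=0.1, steps=1000):
    """Batch gradient descent: w ← w − η Σ_n (σ(wᵀx_n) − y_n) x_n."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        # X.T @ (...) computes the sum over n in one matrix product
        w -= eta * X.T @ (sigmoid(X @ w) - y)
    return w
```

On linearly separable toy data (first column of X is the bias feature), the learned w separates the two classes.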
logistic classification
input:
![x\in \mathbb{R}^{n}](https://latex.codecogs.com/gif.latex?x%5Cin%20%5Cmathbb%7BR%7D%5E%7Bn%7D)
![y\in \{0,1\}](https://latex.codecogs.com/gif.latex?y%5Cin%20%5C%7B0%2C1%5C%7D)
![P(y=1|x,b,w)=\sigma(g(x)),\ g(x)=w^{T}x+b](https://latex.codecogs.com/gif.latex?P%28y%3D1%7Cx%2Cb%2Cw%29%3D%5Csigma%28g%28x%29%29%2C%5C%20g%28x%29%3Dw%5E%7BT%7Dx+b)
![\sigma(a)=\frac{1}{e^{-a}+1}](https://latex.codecogs.com/gif.latex?%5Csigma%28a%29%3D%5Cfrac%7B1%7D%7Be%5E%7B-a%7D+1%7D)
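A small worked example of the forward computation σ(wᵀx + b); the weight, bias, and input values are made up purely for illustration.

```python
import math

def sigmoid(a):
    # σ(a) = 1 / (e^{-a} + 1)
    return 1.0 / (math.exp(-a) + 1.0)

w, b = [2.0, -1.0], 0.5
x = [1.0, 3.0]
g = sum(wi * xi for wi, xi in zip(w, x)) + b  # g(x) = 2*1 - 1*3 + 0.5 = -0.5
p = sigmoid(g)                                # P(y=1|x) = σ(-0.5) ≈ 0.378
```

Since g(x) < 0, the sigmoid maps it below 0.5 and the example would be classified as y = 0.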
GDA: generative; fits a class-conditional Gaussian over the features for each label, then plugs the label's prior into Bayes' rule to output P(y|x).
LR: discriminative; models P(y=1|x) directly from the data as a whole.
Tags: discriminant, variable, formula, analysis, gaussian, gda, gradient, gaussian distribution
Source: https://blog.csdn.net/qq_34146899/article/details/120347162