
CS229: Learning Theory 1


Learning Theory

Assumptions

  1. The data in the training set and the test set come from the same distribution.
  2. All samples are drawn independently.
  3. The learning algorithm is a deterministic function of the training set, so the output parameter \(\hat{\theta}\) is a random variable with a sampling distribution; there is a "true parameter" \(\theta^{*}\) that is fixed but unknown, and we wish \(\hat{\theta}\) to get close to it (formalized in the sketch after this list).
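One way to formalize assumption 3 (the notation here is assumed, not from the original notes: \(S\) is the training set, \(n\) its size, \(\mathcal{D}\) the data distribution, \(A\) the learning algorithm):

\[ S = \{(x^{(i)}, y^{(i)})\}_{i=1}^{n} \overset{\text{i.i.d.}}{\sim} \mathcal{D}, \qquad \hat{\theta} = A(S) \]

Because \(S\) is random, \(\hat{\theta}\) is random even though \(A\) itself is deterministic.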

Parameter view of goodness of fit

We wish that as the number of samples grows, the variance of \(\hat{\theta}\) tends to 0. An estimator with \(\mathbb{E}[\hat{\theta}] = \theta^{*}\) is called unbiased; one with \(\hat{\theta} \rightarrow \theta^{*}\) as \(n \rightarrow \infty\) is called consistent.
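Written out (standard definitions, added here for reference):

\[ \operatorname{Bias}(\hat{\theta}) = \mathbb{E}[\hat{\theta}] - \theta^{*}, \qquad \operatorname{Var}(\hat{\theta}) = \mathbb{E}\big[(\hat{\theta} - \mathbb{E}[\hat{\theta}])^{2}\big] \]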

Fighting high variance

  1. Increase the number of samples.
  2. Regularization (it may introduce some bias, but it can reduce variance significantly); a small simulation of this trade-off is sketched after this list.
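A minimal sketch of that trade-off, assuming a linear-Gaussian setup and numpy (the data, the value of \(\lambda\), and the helper `fit` are illustrative, not from the original notes): fit ordinary least squares and ridge regression to many independently resampled training sets, then compare the bias and the spread of the estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_star = np.array([2.0, -1.0])   # fixed but unknown "true parameter"
n, trials = 20, 1000                 # samples per set, number of resampled sets

def fit(X, y, lam):
    # Ridge solution; lam = 0.0 recovers ordinary least squares.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

estimates = {0.0: [], 5.0: []}       # lambda -> list of fitted parameters
for _ in range(trials):
    X = rng.normal(size=(n, 2))
    y = X @ theta_star + rng.normal(scale=2.0, size=n)  # noisy labels
    for lam in estimates:
        estimates[lam].append(fit(X, y, lam))

for lam, thetas in estimates.items():
    thetas = np.array(thetas)
    print(f"lambda={lam}: bias={thetas.mean(axis=0) - theta_star}, "
          f"variance={thetas.var(axis=0)}")
```

With these settings the ridge estimates should show a clearly smaller variance across resampled training sets, at the cost of a nonzero bias, which is exactly the trade-off described above.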

Fighting high bias

  1. Make the hypothesis class \(\mathcal{H}\) bigger (e.g., add features or move to a more expressive model family).

ERM: Empirical Risk Minimizer
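A sketch of the standard definition (binary classification with 0-1 loss; \(\hat{\varepsilon}(h)\) denotes the training error of hypothesis \(h\) on \(n\) examples):

\[ \hat{\varepsilon}(h) = \frac{1}{n}\sum_{i=1}^{n} \mathbb{1}\{h(x^{(i)}) \neq y^{(i)}\}, \qquad \hat{h} = \arg\min_{h \in \mathcal{H}} \hat{\varepsilon}(h) \]

ERM returns the hypothesis with the smallest training error; learning theory then asks how far its true (generalization) error \(\varepsilon(\hat{h})\) can be from the best achievable in \(\mathcal{H}\).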

Finite hypothesis class
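For a finite class \(\mathcal{H} = \{h_1, \dots, h_k\}\), Hoeffding's inequality plus a union bound give the standard uniform convergence guarantee (a sketch of the result the CS229 notes develop here): with probability at least \(1 - \delta\), for all \(h \in \mathcal{H}\) simultaneously,

\[ |\varepsilon(h) - \hat{\varepsilon}(h)| \le \sqrt{\frac{1}{2n}\log\frac{2k}{\delta}} =: \gamma, \]

and consequently the ERM output satisfies \(\varepsilon(\hat{h}) \le \min_{h \in \mathcal{H}} \varepsilon(h) + 2\gamma\).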

Source: https://www.cnblogs.com/Philematology/p/15867861.html