
2022 Fall, week 1 — September 12

Author: Internet (site aggregator)


 


 

Shanghai, light rain



 

Computer Vision:

Why does CV matter?

For safety, health, security, comfort, fun, access, and so on.

Course contents:

    1. saliency detection
    2. segmentation
    3. object detection
    4. object recognition
    5. image recognition
    6. video processing

Categories of ML

Transfer learning



Deep Learning:

lecture notes 01:

Lecture logistics

intro to deep learning

machine learning review

Artificial neurons

General learning process

 

 Deep learning uses a deep neural network as the mapping function.

 

The inspiration for using neural networks to solve problems came from the visual cortex. PAPER LINK

Deep networks are more compact and learn richer representations of the input data.

 

Math review

 

Necessary condition: the gradient is zero.

Sufficient condition: the Hessian is positive definite. For the Hessian matrix, see LINK and the Taylor formula.
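The two conditions above can be illustrated numerically. A minimal sketch (my own example, not from the lecture) for f(x, y) = x² + xy + y², whose gradient vanishes at the origin:

```python
import numpy as np

# f(x, y) = x^2 + x*y + y^2
# Gradient: (2x + y, x + 2y) -> zero at (0, 0), so the necessary condition holds.
H = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # Hessian of f (constant for a quadratic)

# Sufficient condition: the Hessian is positive definite, i.e. all of its
# eigenvalues are positive -> (0, 0) is a local minimum.
eigenvalues = np.linalg.eigvalsh(H)
print(eigenvalues)                    # [1. 3.]
print(bool(np.all(eigenvalues > 0)))  # True
```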

 

Gaussian distribution

Monte Carlo estimation

Maximum likelihood estimation
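As a quick illustration of Monte Carlo estimation (a sketch I added, not from the notes): approximate E[X²] for X ~ N(0, 1), whose true value is the variance, 1, by averaging samples.

```python
import random

# Monte Carlo estimation: E[g(X)] ~= (1/n) * sum of g(x_i) over i.i.d. samples.
random.seed(0)
n = 100_000
samples = [random.gauss(0.0, 1.0) ** 2 for _ in range(n)]
estimate = sum(samples) / n
print(estimate)  # close to 1.0, within Monte Carlo error O(1/sqrt(n))
```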

 

 

Standard learning scenarios:

 These include unsupervised learning, supervised learning, semi-supervised learning, and reinforcement learning.

Learning problem

Learning as iterative optimization

Supervised learning pipeline

 

       The training dataset is used to fit the parameters; the validation data is used to tune the hyperparameters.
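That split can be sketched concretely. The example below is hypothetical (1-D ridge regression through the origin, which I chose for brevity): the weight is fit on the training set, and the ridge strength, a hyperparameter, is chosen by validation error.

```python
import random

# Synthetic data: y = 2x plus small noise.
random.seed(1)
data = [(x, 2.0 * x + random.gauss(0, 0.1)) for x in [i / 10 for i in range(40)]]
train, valid = data[::2], data[1::2]

def fit_slope(points, ridge):
    # Closed-form 1-D ridge regression through the origin:
    # w = sum(x*y) / (sum(x^2) + ridge)
    sxy = sum(x * y for x, y in points)
    sxx = sum(x * x for x, _ in points)
    return sxy / (sxx + ridge)

def mse(points, w):
    return sum((y - w * x) ** 2 for x, y in points) / len(points)

# "ridge" is the hyperparameter: fit on train, score on validation.
best = min([0.0, 0.1, 1.0, 10.0], key=lambda r: mse(valid, fit_slope(train, r)))
print(best)  # the ridge value with the lowest validation error
```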

 

     Generalization

       Model selection for better generalization

        

 

    Questions

    Comparison between biological neurons and artificial neurons

    Capacity of single neuron

    What does a single neuron do?

      How does the activation function work?

 

 

 

Empirical loss:

So called because the loss and risk are computed from the data we already have.

In principle, the existing data has already been analyzed and understood statistically many times over; the risk and loss computed this way are obtained from prior experience in understanding the data, hence the name "empirical loss".
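In code form, the empirical risk is just the average loss over the available dataset (a minimal sketch I added; the function names are my own):

```python
# Empirical risk: (1/n) * sum of loss(prediction_i, label_i) over the dataset,
# as opposed to the expected risk over the (unknown) data distribution.
def empirical_risk(loss, predictions, labels):
    return sum(loss(p, y) for p, y in zip(predictions, labels)) / len(labels)

squared = lambda p, y: (p - y) ** 2
preds = [0.9, 0.2, 0.8]
labels = [1.0, 0.0, 1.0]
print(empirical_risk(squared, preds, labels))  # (0.01 + 0.04 + 0.04) / 3 = 0.03
```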

 

Stochastic gradient descent:

Machine learning: loss functions (0-1 loss, absolute loss, squared loss, log loss)
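The four losses listed above, sketched for a single example (my own minimal implementations):

```python
import math

def zero_one_loss(pred_label, y):
    # 1 if the predicted label is wrong, 0 if it is right.
    return 0.0 if pred_label == y else 1.0

def absolute_loss(pred, y):
    return abs(pred - y)

def squared_loss(pred, y):
    return (pred - y) ** 2

def log_loss(prob, y):
    # y in {0, 1}; prob is the predicted probability of class 1.
    return -(y * math.log(prob) + (1 - y) * math.log(1 - prob))

print(zero_one_loss(1, 1), absolute_loss(0.8, 1.0), squared_loss(0.8, 1.0))
print(log_loss(0.8, 1))  # -ln(0.8) ~= 0.223
```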

Formula for the distance from a point to a line
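For reference, the formula: for the line ax + by + c = 0, the distance from point (x0, y0) is |ax0 + by0 + c| / sqrt(a² + b²). A quick sketch:

```python
import math

def point_line_distance(a, b, c, x0, y0):
    # |a*x0 + b*y0 + c| / sqrt(a^2 + b^2)
    return abs(a * x0 + b * y0 + c) / math.hypot(a, b)

# Distance from (3, 4) to the line x + y - 1 = 0: |3 + 4 - 1| / sqrt(2)
print(point_line_distance(1, 1, -1, 3, 4))  # ~= 4.2426
```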

 

 Covariance matrix
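A small sketch (my own example) computing a covariance matrix both with `np.cov` and directly from the definition:

```python
import numpy as np

# 3 samples, 2 features; the second feature is exactly 2x the first.
X = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])
cov = np.cov(X, rowvar=False)        # rowvar=False: columns are variables

# By the definition: (1/(n-1)) * (X - mean)^T (X - mean)
centered = X - X.mean(axis=0)
by_hand = centered.T @ centered / (len(X) - 1)

print(cov)                        # [[1. 2.] [2. 4.]]
print(np.allclose(cov, by_hand))  # True
```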

Analyzing the Hessian

The difference between stochastic gradient descent (SGD) and classical gradient descent
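The contrast, sketched on a tiny 1-D least-squares problem (an illustrative toy I added): classical gradient descent uses the full dataset for each step, while SGD uses one randomly chosen example per step.

```python
import random

random.seed(0)
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]  # y = 3x, no noise
lr = 0.05

def grad(w, batch):
    # Gradient of the mean squared error (1/n) * sum (w*x - y)^2
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

w_gd, w_sgd = 0.0, 0.0
for _ in range(200):
    w_gd -= lr * grad(w_gd, data)                      # full-batch step
    w_sgd -= lr * grad(w_sgd, [random.choice(data)])   # single-sample step

print(round(w_gd, 3), round(w_sgd, 3))  # both approach the true slope 3.0
```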

Single-Layer Neural Networks and Gradient Descent

argmax and argmin

    arg min is the value of the variable at which the expression that follows attains its minimum

    arg max is the value of the variable at which the expression that follows attains its maximum
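In other words, argmax/argmin return the *argument* (here, the index) achieving the extreme value, not the value itself:

```python
scores = [0.1, 0.7, 0.2]
argmax = max(range(len(scores)), key=lambda i: scores[i])
argmin = min(range(len(scores)), key=lambda i: scores[i])
print(argmax, argmin)  # 1 0  (max value 0.7 is at index 1, min value 0.1 at index 0)
```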

softmax

Negative log-likelihood
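Softmax and the negative log-likelihood on its output, sketched together (my own minimal implementations; subtracting the max is the usual numerical-stability trick):

```python
import math

def softmax(logits):
    m = max(logits)                             # for numerical stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def nll(probs, true_class):
    # Negative log-likelihood of the true class under the predicted distribution.
    return -math.log(probs[true_class])

p = softmax([2.0, 1.0, 0.1])
print([round(x, 3) for x in p])  # sums to 1; the largest logit gets the largest probability
print(round(nll(p, 0), 3))       # small when the true class has high probability
```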

Machine learning: regularization

Understanding the hinge loss  LINK1    LINK2
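The hinge loss used by SVMs, for labels y in {-1, +1} and raw score f(x), is max(0, 1 - y·f(x)). A quick sketch of the three regimes:

```python
def hinge_loss(score, y):
    # y in {-1, +1}; score is the raw (unthresholded) classifier output.
    return max(0.0, 1.0 - y * score)

print(hinge_loss(2.0, 1))   # 0.0  -- correct with margin >= 1: no loss
print(hinge_loss(0.5, 1))   # 0.5  -- correct but inside the margin
print(hinge_loss(-1.0, 1))  # 2.0  -- on the wrong side of the boundary
```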

林轩田《机器学习基石》:https://github.com/RedstoneWill/HsuanTienLin_MachineLearning/tree/master/Machine%20Learning%20Foundations/pdf%20files

Perceptron Algorithm: proof
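A minimal perceptron sketch (my own toy example on linearly separable 2-D data): the update w += y·x, b += y fires only on misclassified points, and on separable data the loop eventually stops updating.

```python
def perceptron(points, labels, epochs=20):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(points, labels):
            if y * (w[0] * x1 + w[1] * x2 + b) <= 0:  # misclassified (or on the boundary)
                w[0] += y * x1
                w[1] += y * x2
                b += y
    return w, b

pts = [(2, 2), (3, 3), (-2, -1), (-3, -2)]
ys = [1, 1, -1, -1]
w, b = perceptron(pts, ys)
# All training points should now be classified correctly (positive margin).
print(all(y * (w[0] * x1 + w[1] * x2 + b) > 0 for (x1, x2), y in zip(pts, ys)))  # True
```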

Without loss of generality (WLOG)

 

 

lecture notes 02:

 

Source: https://www.cnblogs.com/duzetao/p/16686577.html